All right, everybody. Well, welcome. I'm trying to check out who's here and who's joined us — some of the names are familiar and some aren't. I may be a familiar person to some of you: I'm the director of community development here at Red Hat, and I'm really thrilled to have Christian Glombek with me today, and hopefully Vadim and Charro may chime in and join as well. What we thought we would kick off today's meetup with was a little "state of OKD4." We've been doing a lot of work over the past year on OKD, getting people to migrate from OKD3 to OKD4. It's got a new architecture, there's a lot going on under the hood, and there's a pretty vibrant community already around OKD, but we're always looking for more people. Christian is here with us; I'm going to ask him to give this talk, and then we will have Q&A and conversations afterwards. So Christian, take it away.

All right. Thank you, Diane. Hello, everybody. I'm Christian Glombek. I'm an engineer in the OpenShift organization, and I work in the OKD working group as well, with Diane and Vadim and Charro, who will hopefully join us later. So what is OKD? OKD is a community distribution of Kubernetes, first of all. It is the OpenShift flavor — the OpenShift code base — on top of Fedora CoreOS. And Fedora CoreOS, the operating system, is actually included in OKD; the operating system has become an implementation detail, as it has in OpenShift. Let's have a quick look at the agenda: we'll have a quick overview of OKD4, then we'll talk about the state of OKD4, and we might have a demo later if Charro shows up. OK. So OKD4: a community distribution of Kubernetes, plus plus. As you may have already heard, OpenShift is a distribution of Kubernetes, but we add a lot of extra niceties to it, and OKD is the community version of that.
We have automated installation, patching, and updates from the operating system all the way up to the cluster: one lifecycle for cluster and operating system. It normally looks like this: you have any platform provider at the bottom, and on top of that you'll be installing Fedora CoreOS. That's actually part of the OKD installer; it's all done automatically. It installs a cluster of Linux hosts, and then it installs the actual cluster software — the Kubernetes distribution that OpenShift is — on top of that. This way we provide a great platform for running your applications. OKD and OpenShift are great both for hosting and for developing: we have a lot of development tools that make it easy for development teams or individual developers to get started right away with implementing services that run on OpenShift.

Here's a quick look at the platforms we support: bare metal, oVirt, OpenStack, AWS, Azure, Google Cloud, and vSphere. So, literally all of them, and we're constantly adding more. We support all the platforms that are supported by the OpenShift product as well.

Let's talk about the state of OKD today and tomorrow. We're currently at stable release 4.6, which is the same release that OCP, Red Hat's OpenShift Container Platform product, is at as well. We are obviously aiming for community contributions, and we've actually been able to get feedback from the community already and solved a few bugs that hadn't been reported by our customers. It's really great to see that we can include the community in our product development lifecycle. The entirety of OKD is really a collaboration between different projects. We have the Fedora community, which provides us with the great Fedora CoreOS operating system.
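To make that "automated installation from the platform up through the cluster" flow a bit more concrete, here is a minimal sketch of an installer-provisioned OKD install. The release tag and directory name are illustrative, not canonical; pick a current release from the OKD releases page.

```shell
# Fetch the client tools and installer matching a given OKD release payload
# (the tag below is illustrative; pick a current one from the OKD releases page).
oc adm release extract --tools quay.io/openshift/okd:4.6.0-0.okd-2021-02-14-205305
tar xzf openshift-install-linux-*.tar.gz

# Answer the interactive prompts (platform, region, base domain, SSH key),
# then let the installer provision Fedora CoreOS hosts and bring up the cluster.
./openshift-install create install-config --dir=mycluster
./openshift-install create cluster --dir=mycluster
```

The same tooling handles the rest of the lifecycle too: once installed, cluster and operating system updates are driven from within the cluster itself.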
On the other end of that, we have OperatorHub, which provides services that you can deploy on your cluster. In the middle is OKD, which bundles it all together and provides the platform. We have bespoke operators for OKD: a few of them are made specifically for OpenShift and OKD and aren't tested on upstream Kubernetes — for example, the OpenShift Pipelines operator is made specifically for OpenShift. You can find these operators in the OperatorHub community catalog, and they are available when you install a new OKD cluster. You'll have the OperatorHub catalog on the side, and it's really just a click for the cluster admin to install an operator that provides new features for the entire cluster.

What we strive to do in OKD is enable early adoption of upcoming technologies. We really want to be the first to be able to use a new feature, in both upstream Kubernetes and the OpenShift code base. Release 4.7 is coming soon. We have rolling releases, so we release about every two weeks. We're currently on a 4.6 point release, and the next one, or the one after that, will be the first 4.7 release. So it's coming soon.

All right. Now, about the demo: I'm not sure if Charro is here. If he is, I would like to hand it over to him. — I think Charro is out of action due to time zone issues. So what I really wanted to talk about here a little bit is: some of you may be aware of the CodeReady Containers offering that we have for OpenShift. This brings a single-node OKD cluster to your laptop or workstation, for you to easily check out OKD and take it for a test run. If you go to okd.io, the links are all there. And there's Vadim — Vadim is in now. So if we can make him a speaker too, that would be great, Jen.
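Since CodeReady Containers came up: taking the OKD flavor of CRC for a spin typically looks something like the sketch below. These are the standard crc commands; the login credentials are whatever `crc start` prints for your instance.

```shell
# One-time host setup (virtualization and networking prerequisites).
crc setup

# Boot the single-node, virtualized cluster.
crc start

# Put the bundled oc client on PATH and log in with the printed credentials.
eval $(crc oc-env)
oc login -u developer https://api.crc.testing:6443
```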
So I'm just going to pause there for Vadim's comment. And while we wait for Vadim — but seriously, I think you've explained it already. We have the CRC version for OKD as well as for OCP. So if you want to test something on your laptop without any kind of contract that you need to enter, then the OKD version of CRC is the way to go. It's based on CodeReady Containers and includes Fedora CoreOS as well; it's actually a little virtualized cluster that you run on the local machine. In the future, we will also be able to run single-node clusters just locally, without virtualization, which will make it even faster. But if you want a full-blown cluster for development purposes, or just evaluation or demos, CRC is awesome.

It looks like Vadim is still having trouble joining, and he just corrected me in the chat: we don't have z-stream versions for OKD. It's a rolling release; we actually put a date on each release. I'm not sure which one the current one is, but we've had a few 4.6 releases, and one of the coming ones will be the first 4.7 release.

And this is the call-out: please do join us in the working group if you're interested. It's the place where you can just come and say hello, or request new features, which is a thing our community likes to do, which is obviously great. We have a Slack channel, #openshift-dev, on the official Kubernetes Slack. We also have literally all the channels on the OpenShift Commons Slack; you'll be dropped into the general chat when you join. That's another place you can ask. Then we have a Google group, which we also use as our mailing list: the okd-wg Google group. If you want to receive invitations to our working group meetings, please do join that group as well. And then we have two repositories for OKD in the OpenShift organization on GitHub.
First of all, the community repository, which is for things like this: planning community outreach and everything that has to do with the community. And then we have the OKD repository, which is our technical issue tracker. So if you have an issue, please do file it on the OKD repository. And if you want to join the working group, you can find all the information in the community repository as well. And I see Vadim is here now. Good to see you.

Oh, sorry — joining took quite some time. Just to add to what Christian has said: the whole goal of the OKD project is to have all the features that OCP has, which customers enjoy with support, but also to include the community in decisions about what we support. All of the parts OKD consists of are not just open source; they're also available for the community to play with — to modify some parts, release their own payloads, and interact with us in a more active fashion than the closed loop of being an OCP customer.

Well said. I think the other group we work pretty tightly with is the Fedora CoreOS group, because that is one of our dependencies at the moment. We have a lot of insight into how Fedora CoreOS is going, each of its releases, and the impact it has on OKD. So there's a lot of collaboration between the two communities, which is quite nice as well. There is one question in the chat. I don't know, Jen, if you can turn on people's faces and we can have Q&A now — that would be great. I don't think Charro is coming to do a demo. So let's see if we can get Eric on; his question was basically around single node.

Maybe I can take this. Currently, the way we do single node in the CodeReady Containers (CRC) deployment is a bit different from what we aim to do in the long term for single-node clusters, because it's virtualized. We want to have single-node clusters that run natively on the host, without virtualization.
That effort is currently in progress. I think there will be something in 4.7 that won't support upgrades yet, and then in 4.8 or later it will support upgrades as well. At that point we might reconsider changing the default for OKD, but right now, the way the OpenShift installer that we use for regular OKD clusters works and the way CRC is installed aren't really the same. We can't just flip the switch here, because the installer doesn't support installing single-node clusters yet. Once it does, we might consider actually changing that — although high availability is obviously a big feature, a big part of OKD: delivering that kind of certainty to any user. So we'll have to discuss it at that point. I think it does make sense to keep a highly available deployment as the default, in my opinion, but that is certainly up for discussion with the community when we get there.

I'm not seeing too many other questions in the chat, so — you can stop sharing slides and we'll just go to talking heads now — one of the things we've really been focusing on as a working group is getting more community involvement in testing on different deployment configurations. That's been one of the hot topics in our working group meetings, which are bi-weekly; the main working group meetings and the documentation ones are on opposite weeks. So we're really trying to push through getting better testing pipelines done and supported by the community. I'm curious whether anybody here in the session is currently using OKD yet, because there are a lot of names I don't recognize from the working group. We would love to have you and your feedback in that group if you're using OKD or thinking about it. Yeah, and the link to the calendar can be found right there in the chat. It would be great if you could join us and give us your feedback. Are there any other questions from people out there?
No? That was a quick run-through. Eric, who's not on audio and video, has a second question: "As a personal outside-the-box challenge, I currently try to participate in open source projects without providing code. So maybe you can discuss a little what things there are in your community that could use two more hands or another brain but don't require pushing code."

Well, now that you've said that: testing is great, and that's why I bring it up here, because it's a wonderful thing. We are going to schedule, in the coming month, a sort of hackathon to teach people how to set up testing pipelines for OKD. So if you join the okd-wg Google group, you'll get an announcement of that. Documentation: we're currently redoing the okd.io site and updating the documentation. On the opposite weeks — it's also in the Fedora project calendar — there is a documentation session. Next Tuesday at 1700 UTC — I'm on West Coast time, so that's 9 a.m. for me, noon on the East Coast — we will have a session. There are a lot of holes in our documentation, and we are trying to get it up to snuff and keep it up to date. So there are very many small tasks — grammatical errors, things like that — you can help us with. And then one of our key goals: we have pretty good technical documentation, from my point of view, but not great getting-started documentation. The docs currently start at a higher level than they probably should, and we're trying to bring that back down to make it easier for people to start with OKD, whether with CRC or deploying it themselves.

Then there's another question — but I also wanted to add something on starting places to help the community. A big chunk of our issues usually revolves around installation and upgrades, so we've focused on that, and we cover it pretty well, let's say. The problem is how people actually use their clusters.
What are their usual workflows? Which problems do they hit during the whole lifecycle? Are they running some data mining? Are they running just some nginx pods, and so on? Learning how people actually use OKD for their purposes would be a great insight into what we should do better and what we should focus on. Currently, I work on the upgrades team, so my whole world revolves around upgrades, but focusing on actual workflows and how to improve the experience during daily usage of the cluster would be immensely important.

I see there is another question in the chat, from someone new to the community — just a moment. "Hello, what is the situation with the building of the Red Hat operators? Is the statement regarding usage still valid?" You should be able to run all the operators, even the ones from the subscription catalog for the Red Hat product, on OKD — and vice versa: if you have an OCP cluster, you should be able to run the OKD operators on it. Obviously, that is not very well tested, and we don't currently build community versions of all the operators we have a subscription product for, but we want to change that, so that the entire palette of operators becomes available to OKD as well. It should be that if it's in one of those catalogs, you can install it and it will run; if it doesn't, that is not expected, and please file an issue in that case.

Roberto is saying he's new to the community, and he did join us earlier this week — I think it was on a Tuesday, probably the working group meeting. I guess we didn't introduce you, so I apologize for that. And he's asking how he can get an assignment from the OKD backlog. That sounds like somebody you want to know! We do have a pretty long issues list at the moment, but I think most of them are a little more involved — they're not low-hanging fruit, I would say. Would you agree with that statement, Vadim?
So, we're working on getting a tagging system for the issues so that we can mark some low-hanging fruit for people to pick up. But depending on what your expertise is, Roberto, there are a lot of things to experiment with. There's some mixed-architecture and multi-architecture work going on with OKD that's just kicking off. So it really depends on what you are interested in, what sort of workloads, and where you're deploying. I did mention we're going to try to host a hackathon, using Hopin, on testing and setting up testing pipelines for the different variations, so that might be a place to jump in as well. Again, I point everybody to the Google group for announcements and emails about that. So, Vadim, are there any issues in the issues list that are low-hanging fruit at all — things that you just need people to test, maybe?

Testing the latest nightlies is definitely a great thing, especially with the 4.7 stream upcoming; we would definitely need some help on that. It may sound boring, but it is incredibly important, because we have to, at the same time, not break other people's stable releases, but also prepare a new stable release from the upcoming 4.7 — and the lack of time is a serious problem. We also have a contributing guide that definitely needs some love, because it's very generic. We can't anticipate all the ways the community might want to contribute right now; it's up to them which parts to change. So we're interested to find out what the most common cases are where people would like to change the OKD payload — for instance, setting up new branding, or replacing Prometheus, or something like that. That's not exactly low-hanging fruit, because it requires some experience, but it's definitely a good starting point to expand our documentation.

Just to add to that: our OKD issue tracker really is just a collection of issues out of the entire realm of OpenShift.
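For anyone who wants to help with the nightly testing Vadim describes, a rough sketch of poking at a pre-release payload follows. The tag here is hypothetical; current ones are listed on the OKD GitHub releases page.

```shell
# Hypothetical pre-release tag; substitute a real one from the releases page.
RELEASE=quay.io/openshift/okd:4.7.0-0.okd-2021-02-20-000000

# Inspect what the payload contains (component versions, images).
oc adm release info "$RELEASE"

# Extract the matching installer and oc binaries for a throwaway test install.
oc adm release extract --tools "$RELEASE"
```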
So what we really do, most of the time, is triage that and hand it over to the team that owns the component where the issue occurred. It is not always super easy to jump into code contributions here, because the first step is oftentimes to actually find the component the issue sits in, and find the team that knows what's happening. What OKD essentially is, is a collection — a bundle — of a lot of software components, put together into a system that works. But if something breaks, it's mostly in one of those components, and it might not be super easy to see which one. So if you do have some knowledge, you can also help with bug triage in our OKD issue tracker: just go through it and see if there's something actionable for you, something you might have domain knowledge about. That would be absolutely helpful as well.

Perfect. So, Roberto, don't worry — I've got your name, and I'll follow up, and we'll hopefully get you onboarded and take advantage of your enthusiasm. And if you've tried OpenShift recently, please do give one of the nightly releases of OKD a try. It's a slightly different experience, because you are on Fedora CoreOS, so there are some things that are definitely different, and there's documentation on that at docs.okd.io. We'd love to have your feedback, and, as I said — and I'll keep repeating myself — next Tuesday there is the docs meeting. Our focus next week is on two parts: the contributors' guide work, and getting documentation on what it takes to set up a testing pipeline. Jaime Magiera from the University of Michigan is going to help us get that documentation done — and then we'll test the testing documentation. There's a lot of work to be done; it's ongoing, and we do releases pretty often.
So, you know, setting up these pipelines is really one of our key focuses right now, along with getting the community to help and give feedback on that. That said, in the issues list in the GitHub repo, if there's an open issue that you'd like to test and see if you can replicate, that's always a good thing too. Sometimes it's really hard to replicate people's issues, since we're not all on the same testing pipelines — that's been one of the things we've hit hard recently. So that would be a great place to start. I'm looking to see if there are any other questions from people on the call. I'll just call out that if anybody here is using OKD already, we'd love to have you join the Google group and give us your feedback, or join one of the working group meetings coming soon. Christian, if you want to share that resources slide one more time, that's okay. All right, Roberto, we look forward to seeing you.

All right — a premature enter happened there. Miroslav has a question: what he wanted to ask is whether one is really legally allowed to run OKD with Red Hat-built operators in production, obviously without an OCP subscription, because there are no restrictions mentioned on GitHub.

I think I can answer that. I'm going to make a lot of legal people angry, but nevertheless: in the OKD world, we do not put any restrictions on you. We allow you to install without a pull secret, unlike OCP. But if you use a pull secret to pull in operators from the Red Hat catalog, the restrictions you accepted by signing in on cloud.redhat.com still apply. Those restrictions mean that after 60 days you have to change your subscription from eval to standard (or another subscription), or remove the operators. But since it is OKD — meaning not a Red Hat product that we support — you don't get support when you're installing these operators, though you can still use them.
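As a concrete illustration of "install without a pull secret": OKD's installer accepts a placeholder pull secret, so an install-config fragment can look like the sketch below. The baseDomain, cluster name, and auth string are made up, and a real file also needs platform and networking sections.

```shell
# Write a minimal install-config.yaml fragment with a dummy pull secret.
# The auth value is a meaningless placeholder, not a real credential.
cat > install-config.yaml <<'EOF'
apiVersion: v1
baseDomain: example.com
metadata:
  name: okd
pullSecret: '{"auths":{"fake":{"auth":"aWQ6cGFzcwo="}}}'
EOF
```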
This is why all those restrictions are not mentioned on GitHub: we depend on whatever end-user license agreement is set on cloud.redhat.com, and we don't feel like changing our text every single time that agreement changes. So that's basically the short answer: you get all the benefits, without some of the benefits, and all the restrictions obviously still apply. If you don't use a pull secret and you don't pull in any operators from the subscription catalog — only the community or upstream operators — then you are free to use those in production for as long as you want, without any limitations. Did that answer your question, Miro? Here comes another one: "Is it possible to install the ACM operator on OKD?"

I don't think anybody has built a community variant of that or tested it. It should be technically possible: you should be able to rebuild the ACM operator yourself and run it, but we don't currently do those builds, I think. An important point here is that OKD doesn't contain all of those operators itself; it only has references to catalog sources. So you would need to reach out to the Operator Framework folks who are building the operators in those sources. We can add additional sources to OKD if they are deemed useful, but we cannot control what those sources contain — that's a slightly different team and a slightly different project altogether. However, since OKD folks are on the receiving end of it, we're glad to relay all that experience to the operator folks, but it's just not that easy.

I think it's important to note that the OKD working group specifically concerns itself with the core system — that is, the core operators that are part of the payload, and the operating system itself, which is also part of the payload. Then we have these operators from OperatorHub, which are optional, non-core operators.
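Mechanically, "adding additional sources" amounts to creating a CatalogSource object, and installing an operator from a catalog is a Subscription. Both are sketched below; the names and the index image are illustrative, not real artifacts.

```shell
# Sketch: declare an extra operator catalog and subscribe to an operator
# from the community catalog. Names and the index image are illustrative.
cat > extra-operators.yaml <<'EOF'
apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
  name: my-extra-catalog
  namespace: openshift-marketplace
spec:
  sourceType: grpc
  image: quay.io/example/my-operator-index:latest
  displayName: My Extra Operators
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: openshift-pipelines-operator
  namespace: openshift-operators
spec:
  channel: stable
  name: openshift-pipelines-operator
  source: community-operators
  sourceNamespace: openshift-marketplace
EOF

# Apply against a running cluster:
# oc apply -f extra-operators.yaml
```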
They can be installed on top of the cluster, and they're part of the broader OKD ecosystem. ACM is one example here; the OpenShift Pipelines operator, the Tekton operator, is another. There is a whole lot of operators that are optional to OpenShift and are not included in the payload. Obviously, we as a working group want to enable and facilitate their usage, but we are not the group that builds those operators; we can build the release payload for the core system. And this goes back to OKD and OpenShift being a collaboration between many projects — from Fedora CoreOS to the operator group to ACM, across Red Hat and outside of Red Hat. It's a lot of cross-community collaboration to get these releases out and then to trace things back when there are bugs. It takes a lot of communication, and Vadim and Christian have been amazing in helping to facilitate that over the past year to get us to where we're at right now. It's been an adventure, to say the least. We really appreciate all of the community insights and feedback, and we're hoping that this next push around testing and documentation will make it even easier for everybody to join, give us feedback, help us trace down some of those bugs, and make sure that all the operators you need are there as well.

On all these bugs we've talked about here: they're mostly really installation or upgrade issues. Once you have the platform running, it is very stable; it is just the initial installation that we still have some issues with, on some platforms, sometimes — and then upgrades. I think we need to do a little better in terms of testing there, but you should usually be able to recover even from an upgrade that fails, by rolling back. If the installation actually succeeds — which it usually does — you're generally fine; I think most of the issues we get are installation failures.
Sometimes you just have to try again, because your infrastructure was too slow or something — there's just a whole plethora of possible causes, which is why those issues are still open. But once you actually have the system running, it is a very stable system.

I would like to add that, compared to plain Kubernetes installations, OKD has a very important difference: it's based on rpm-ostree and on the operator pattern, meaning all of the system state is encoded in the file system, which is taken care of by rpm-ostree, and in etcd, effectively. As long as you make etcd backups, you should be able to restore from scratch: the operators will make sure that your cluster state is restored, and the file system state is taken care of by rpm-ostree — you can roll back to the previous deployment, which has the right files. So breaking a cluster is, well, trivial, but so is restoring it: you just roll back to the previous file system deployment and restore etcd from the backups. And there is no hidden parameters file, no Terraform state, no Ansible scripts external to the cluster — all the information about the cluster is encoded in the cluster itself, so it should be trivial to restore.

And I apologize, Christian, if I keep making it sound like there are tons of bugs in OKD — there aren't. As you say, it's the installs and the upgrades that are really the thing, because we really do get the benefit of the entire OpenShift engineering team drilling down on all of this for customers. In some ways, rather than an upstream project, we refer to OKD as a sibling-stream project: as things get worked out in OpenShift itself, we get the benefit of that in OKD as well. There are good points and bad points to the sibling-stream process, but it has really benefited us — once you do get OKD deployed, you get the benefit of a lot of engineering effort.
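A sketch of the recovery story described above, using the backup and rollback tooling that ships on OKD/OCP control-plane nodes. Treat it as an outline under those assumptions, not a full disaster-recovery runbook.

```shell
# 1. On a control-plane node, snapshot etcd and the static pod resources.
sudo /usr/local/bin/cluster-backup.sh /home/core/backup

# 2. If an update broke the host, boot back into the previous
#    rpm-ostree file system deployment (-r reboots into it).
sudo rpm-ostree rollback -r

# 3. Restore etcd from the snapshot; the operators reconcile the rest
#    of the cluster state from there.
sudo /usr/local/bin/cluster-restore.sh /home/core/backup
```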
And I think that's probably why I keep focusing back on the testing: once you've got testing and deployment done — and we've got all the recipes for that — we're pretty much gold.