I'll share the screen again. So welcome, everybody. This is the official last meeting that I have of 2020. Thank the Lord it's over. And the last working group meeting. So thank you for coming and joining us. I don't have a huge agenda today, I'm just sharing that. A few people have wandered in over the past little while, so I wanted to make sure we had this meeting in case they showed up. There was a gentleman from IBM, Michael Turrick, who was interested in PPC64LE support for OKD, and he came in the other day and asked a question about that. And there was also a person from the Ansible team who asked for some time, but I doubt they're gonna show up today either. That was Timothy Ackville, a Red Hatter. He had a project on the Ansible content collection for OKD, which he asked us all to take a look at, and I'll put the link to that in here. I had not heard about it or tested it, and I'm not sure anybody here had either, because it seems to have not made it around yet. I'll stop sharing again and throw it into the chat. If people wanna take a look at it over the holidays, he'd appreciate that, if he's not here. The IBMer, I can't see. Hey, Joseph. Hi. There you go. And for some reason, James, you're in here twice, but I'll allow that. So those are the two things that came in. I think the current release, Christian, if you have any updates on that, any feedback on where we're at? So yeah, we're preparing the next 4.6 z-stream release. We've had some follow-up work with 4.6 because the switchover to Fedora 33 packages came rather late in the cycle. So yeah, we hit a few issues, especially with regards to networking and DNS, because the underlying OS, Fedora 33, had some major changes; for example, they use systemd-resolved, and we had to revert that change for now to make it work. Everything is being worked on, though. I can paste a link to the current blocking bug for the next stable release. And in the future we will actually streamline this. Just yesterday a Rawhide Fedora CoreOS stream was introduced, so in the future we'll have that testing on the upcoming Fedora package set much earlier, and hopefully not run into issues as big as this time around. Although I have to say my OpenShift clusters are running just fine, so I'm not sure what you guys are doing. Yeah, I was just gonna say, those problems only affect a subset of the platforms, so for those of you who don't hit them, everything should continue to work fine. We do have one issue with our image mirroring at the moment. There was some breakage in Podman, so currently the images that we push to Quay don't have the correct manifest version, and those can't be mirrored, I think, with the oc command into your local registry. So those with an offline installation setup probably won't be able to install 4.6 at this time. That has been tracked in Podman, and I think a fix is in the works; we'll just have to wait until that lands in Fedora land. I did do a mirrored install of, not the 11-27 release, what was it, 12-12? And it worked. Yeah, I think it's mostly an issue with the images that we mirror from Prow out to Quay as an official release. So all the internal builds on our internal Prow cluster should work. It's just the step that actually pushes them out to Quay that introduces those changes in the manifests. Apparently, if you pull it from Quay, you might hit those problems.
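For reference, the disconnected-install flow in question is the release-mirroring step. A minimal sketch of it, assuming quay.io/openshift/okd as the release repository, with a hypothetical local registry and an illustrative release tag:

    # Mirror an OKD release payload into a local registry for offline installs.
    # registry.example.com:5000 and the release tag are placeholders.
    oc adm release mirror \
      --from=quay.io/openshift/okd:4.6.0-0.okd-2020-12-12-XXXXXX \
      --to=registry.example.com:5000/okd \
      --to-release-image=registry.example.com:5000/okd:4.6.0-0.okd-2020-12-12-XXXXXX

This is the step that reportedly fails against the affected releases, since the containers/image code inside oc rejects the incorrect manifest version on the Quay-hosted images.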
Yeah, it's still a problem for us. Bruce or Joseph, did you try it with the 12-12 release? Even with the 11-27 release I had to do a force upgrade, but I'm pretty sure. Which release are you talking about? Just a minute, I'm pulling up the 2020-12-12. Yeah, that's the one that took my system down. What took it down? Yeah. I get mirroring errors; it's not working for us. Right, I wasn't trying to do it from an internal repository, a mirrored one. No, it was very strange. It worked perfectly until the very end, when it upgrades the nodes. There was one node that got an OS configuration failure for some reason; it started up fine, but then when it rebooted, it wasn't there anymore. So that node failed, and then there was a sort of cascade: my Rook-Ceph failed, and that took down the registry, the image registry. And for some reason, even though Rook-Ceph does have two nodes that it's on, the OSDs were totally dead, so my Rook cluster is down. I got the image registry back by going to an empty image registry, basically. But of course I've got hundreds of pods that can't get their images to restart, et cetera, et cetera. So I think it was all somehow related, and I've got something on the 395 report about that, but it does seem slightly different from the other failures. And I've been doing archaeology through logs and things, and I don't quite see why it did that. What hypervisor are you running on? I'm running on, I don't know, whatever VMware is using, vSphere. So if I understand you right, we can do nothing about this mirroring problem, but as soon as you have the proper version of Podman in your Prow system, things will be fixed automatically, is this correct? Yeah, I think there are two things here: the mirror step first, and then you also use, I think, Podman, or at least the containers/image library, within oc when you actually mirror that over with the client. So yeah, as far as I can tell, the fix we're hoping will resolve it is the one I just linked and that you also linked, Jota. Yeah, I'm not sure how this came to be and how this could be missed. Obviously, it's very unfortunate. I think RHEL and RHEL CoreOS use an older version of Podman, which is why they're not affected. So yes, until this issue is resolved, we will probably stay broken, unfortunately, with regard to that manifest issue. But this means also that we have to update oc, the oc client, to get it working. You should always update your client binary when installing or migrating to a new version. Okay, that's good to know. In general, the rule of thumb is that it's okay to have your client binaries on the bleeding edge, because the client is generally backwards compatible, not forwards. Exactly, and that's actually a pretty strict rule. We're definitely looking at being backwards compatible. Forward compatibility obviously isn't something we can promise, but we're pretty good at it; for example, if you have the 4.6 client, you can build a 4.4 cluster with it. That should work, and that is actually supported, I think, in the OpenShift product, so it'll also work with OKD. Okay, that's good to know, because I'm not sure we always update the oc client with new versions. Yeah, you generally should do that, because things get kind of wonky when you don't. Especially with the introduction of new APIs and CRDs; those will just not be supported in the older client.
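A quick way to check the client-versus-cluster skew being discussed, as a sketch:

    # Compare the local oc binary against the cluster it talks to.
    oc version --client   # just the local binary
    oc version            # also reports the server version when logged in
    # Rule of thumb from the discussion: a client at or ahead of the cluster
    # is fine (backwards compatible); an older client against a newer cluster
    # may not know about newly introduced APIs and CRDs.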
Obviously that's mostly a problem when you have minor version bumps, not patch releases, but still, it's definitely best practice to always keep up to date. And it's safer to just be up to date and at the leading edge with the client, because it's always backwards compatible. Okay, thank you. That reminds me of something, though. The openshift-clients package in Fedora is still on 3.x. Actually, I think it's still on 3.10; it's not even on the last of the 3.x versions. We should probably take that to fedora-devel and get the package retired. Let's get that package out of there. I mean, I would suggest we should probably just have it upgraded to the latest versions, because that makes it easier for people to go between FCOS, OKD, and Fedora and such. And it's not like there are API promises from a binary program; you can't build and pin on openshift-clients and embed some of its code into something else. That's not a thing that package lets you do. So... Yeah, ideally, I'm not sure it's worth the effort of maintaining that spec file. I do think we have some RPM builds for oc in CI somewhere for testing, but nothing really consumes that RPM right now. We used to consume it from the RPM, but now it's really just the built binary being passed around. So, Neal, how do we get that deprecated, and where does that happen in Fedora land? What's the process? Well, I mean, you'd have to talk to the maintainer of the openshift-clients package, which I think is... Given the state of the package, they probably don't even know they're the maintainer anymore. Yeah. Let's see. Oh, I mean, isn't he here as well? Let's see, the origin package is owned by Jakub Čajka, who is a Red Hat employee. Still? Yes. Good. That's always a good sign. All right, can you put a link to it or something in the chat so that we can track it down and clean it up? Yeah. And so, Christian, your druthers is just to have it deprecated and not replace it with the newer one? Yeah, because, I mean, why should people run OpenShift 3? I have t-shirts from then. I do have t-shirts from them. That package is all of OpenShift 3, and nobody's maintaining that anymore. Yeah, we had a look at that last year when we started to do OKD 4, and it was decided it's not really viable for us to go through RPM packaging for everything, especially because there are rapid changes, right? We could obviously build oc, the origin package, for each OKD release. But what we currently do is just take it from CI, like all the other artifacts, and continuously have the most up-to-date version in there. It would just be an extra step that I'm not sure is worth the effort at this time, because none of the OKD parts are packaged in a Fedora-distro way; it's mostly just container images. And there's actually the artifacts image, which includes those binaries too, so you can extract them from that. So I would say we just focus on shipping our artifacts as containers, without introducing the overhead of packaging in RPM. Well, I wasn't gonna ask for everything to be shipped that way, but I'm pretty sure even OCP has the clients shipped as an RPM for people to use that way, because a lot of them consume it and deploy it onto machines through that mechanism. So I know at least there is definitely some use for packaging the clients.
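As a sketch of the "take it from CI, like all the other artifacts" path: the client binaries can be pulled straight out of a release image rather than an RPM (the release tag here is a placeholder):

    # Extract the client tools shipped inside an OKD release image.
    oc adm release extract --tools quay.io/openshift/okd:<release-tag>
    # Drops openshift-client-*.tar.gz and openshift-install-*.tar.gz
    # (plus checksums) into the current directory.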
Last time I went through the OCP stuff, they were definitely telling me that the clients are available as RPMs that can be installed on workstations for mass provisioning and fun stuff like that. It'd be 3.11, too. That would be bad, considering that doesn't work with OCP 4 at all. We'll see what we can do. I'll see if I can track down the guy who's responsible for the RPM and ask him if he can either deprecate it or give us his opinion on it, but probably just ask him to deprecate it. Yeah, I mean, the other bit about having the clients in Fedora itself is that Fedora is starting to have OpenShift-based infrastructure, and having the community be able to work with OpenShift-based applications and such. So having the clients easily available through there makes it a lot more straightforward for that to be plugged into basically a ton of workflows. So there are compelling reasons to at least have the clients in there, but yeah, we probably should just get rid of the full legacy origin package, because it scares the crap out of me that it's still there. Okay, so I'll see if I can track it down and I'll add it to the list of things. That is really downstream from OKD; anybody can step up and package that. It's just the standard Fedora packaging process, so if there are any volunteers who want to tackle that, you're very welcome; we'd appreciate it. Yeah, we'll see if the original owner is up for it. Jcajka? I wouldn't bet on that, but yeah, we can always ask. Neither would I, but I think that's how it's spelled. Christian, I saw that you're writing again on the enhancement document recently. What will be the impact if it is granted by your colleagues? What will be the follow-ups? So the main, or the biggest, part that's still left to do is getting the installer merged, and we've kind of made the enhancement into a checklist. Most of that is already implemented and has already been done; the enhancement really just summarizes all of that. So the installer PR that is linked in the enhancement that Vadim created is the biggest missing piece there. And then there will be discussion either on the PR itself or on the enhancement; I would suggest you follow both if you want to monitor that closely. And because we've had changes in the installer team, there's a new team lead, so we will see whether that approach is fine with them. It certainly works. It's what we have in the OKD fork of the installer currently, and it allows us to build both the OKD and the OCP binaries from the same branch, which is what we want to do, like with the machine-config operator. So it is not a blocking issue. It's really just kind of tech debt cleanup, nothing anybody really has to worry about, but it will help Vadim and me with maintenance, because we can then drop the maintenance of the fork that we have. We've been rebasing it, and unfortunately some of your commits, the Azure support among them, got force-pushed over as well. Things like that obviously shouldn't happen in the future, which is why we wanna really tighten that up and get it merged. And do you think, because I saw that you proposed to test the release on Prow: does this include that OKD is tested on more platforms, not only on AWS, to catch errors on the other platforms? Is there a chance to do that also with OKD?
Yeah, so I'm not sure how we have it configured currently, but when we do this release promotion, our releases are just promoted from Prow, from CI, right? So we have this continuous system that always has the newest parts of all the repositories, and then when we decide to push a release, we'll just take the most current state of things, tag that as a release, and mirror it to Quay. So yeah, I lost my train of thought; what was the question exactly? The question was whether we can test on more platforms, because it's only tested on AWS at the moment. All right, oh yeah. When we do this promotion, we run end-to-end tests, and we have the ability to run those on all the platforms where we have OKD available, except for things like Azure, where we don't have an image; we could maybe even make that work there. But yeah, right now it's probably just AWS. We really want to make the releases more tested; I'm not sure, but the tests are probably just running and not blocking at this time. That is definitely a thing we want to ensure more in the future as well, and it's even going to be easier when we have just one installer. Okay, thank you. Another question? Sorry, I have a list; I think it's the last one on my list. Do you have any news about, I was looking today in the OperatorHub and was searching for the Tekton operator. I found one, but I'm not sure, in the community catalog, whether it is from Red Hat or something different. Do you know what the state of this operator is? What's the name of it? Yeah, can you share your screen and just show us which one you're looking at? It's the Tekton operator. I can try to find it. Yeah, which version? Because there was an extremely old one there. Yes, it looked like not the newest one. It's this one, and it's the same version as in the repos. I don't know, because the Red Hat version is 1.2.1 and this one is 0.15-something. I don't know, because it's the same version as in the repositories on OperatorHub, but the Red Hat version is versioned differently. Are we talking about the OperatorHub inside of OpenShift, or about OperatorHub.io? Yeah. Yes, because in the OperatorHub, the one integrated in, okay, yeah, I did not find it. Oh, this is 0.15.2; go to the GitHub page. It says it was created in August of this year, so I would suspect it's not the most recent. Either way, that is an upstream operator that is essentially for running on vanilla Kubernetes. It might work on OKD, I'm pretty sure it'll work, but it's not tailor-made for OpenShift. The operators in the catalog that you get on your OKD installation are actually made just for OKD, as community variants of the supported operators. Yes, you're right. And we're still working internally to develop a strategy for how we develop things here, with our default-to-open, upstream-first policy. We have quite a lot of teams that make operators, and so far not a lot of them have implemented that kind of strategy. I'm working internally to make that part of the default release strategy, to have regular and timely community releases there as well. For the time being, the easiest is to build them yourself, I'd say. You can go to the repositories; the container images and everything are there, and then you can build them from there and create your own manifests. And that way, we could even allow those builds, or those package manifests if you create them, to be added to the community operators.
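A rough sketch of that build-it-yourself path, using the upstream Tekton operator repository as the example. The Quay namespace is a placeholder, and the build command assumes the repo ships a usable Dockerfile at its root, so check the repo's own build docs:

    # Build the upstream operator image and push it to your own namespace.
    git clone https://github.com/tektoncd/operator
    cd operator
    podman build -t quay.io/my-namespace/tektoncd-operator:0.15.2 .
    podman push quay.io/my-namespace/tektoncd-operator:0.15.2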
Yeah, because what we currently don't have is release automation for this, right? We have release automation for our internal builds, but then you have to copy everything manually over into the community-operators repository; this one, I just pasted it. Once it's added there, it'll automatically show up in your OKD clusters. So what you really wanna do is get all the teams to create those PRs and release to the OperatorHub catalog that is included in OKD. But you can always do that yourselves as well. If you see an operator and it's not there yet, you can add it to this catalog. Which would be something the community could help with; that is probably something where you could actually get ahead of us internally as well. Could we do something like a hackathon on community operators, post that, and then get somebody from the operator teams? I mean, is it a worthy thing to spend energy on: once 4.6 is stable, to go and build all the ones that we need, so the next iteration of 4.6 would pick all of that up? Is that what you're saying? Exactly. So yeah, I think that would make sense, because it's not a super quick process internally to get all the teams to change their release methodology here, and it's also extra work for them. We don't currently have automation for it, which is probably the biggest blocker. If it was just one more click for everybody, they'd easily be able to do it, but it is a bit more work. So I think, yeah, having a hackathon makes sense, because we can do that work ourselves as a community working group. Everything is open source. We just have to build the images, push them somewhere, on Quay for example, and then add those manifests to the catalog so they are picked up by OKD. I did it on my own, for example, for this operator, the Tekton Pipelines operator, and it worked very well. I had to push it into my own Quay directory, because of course I don't have access to the official one. But if there were a script that was fully prepared, so that only someone with the proper access rights has to start it and everything gets built and pushed, I think that would be very, very cool. So maybe we can prepare that, and someone from the pipelines team just runs the script regularly. I think it would be a huge step forward. I think in the long term we can aim at that; for the time being, we should consider creating our own repository on Quay, maybe for this working group, and manage those images within our group. Oh, yes. As for pushing images to Quay: everything is built internally in the CI system, and when they release, they have a totally separate internal system for building those releases, so making those releases for Quay would essentially mean replicating that. So it's not that they wouldn't do it, but it would have to be super easy. Most of these images aren't even pushed out to Quay at all at the moment. So they'd have to create a repository for it and then set up some kind of automation to mirror that to Quay upon a release, or upon manual triggering. So I do think in the long term we definitely want that, because we want the images to just be automatically pushed out to the community variant on a release as well. But I think for now it's probably easiest to just do that in our own namespace; and instead of each one of us doing it ourselves, do it once as a group and maintain it that way. I love this idea. Yeah, yeah.
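For orientation, a PR against the community-operators repository at this time was mostly a directory of manifests, roughly this shape (all names illustrative):

    community-operators/
      my-operator/
        my-operator.package.yaml                          # channels -> CSV names
        0.15.2/
          my-operator.v0.15.2.clusterserviceversion.yaml  # image references live here
          my-crds.crd.yaml

Once merged, the catalog tooling picks the entry up, which is why it then shows up automatically in OKD clusters.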
I'm thinking that when we come back in 2021, at the next working group meeting, we could also invite some of the people who created the operators to come to that hackathon too, or the people in charge of the pipelines building them, so that we could do it as a cross-community collaboration. Maybe we give them a lightning talk to present their operator, and then we hack on it and get our build for it. So do something in that spirit, so that they get engaged, they see what our problem is, they see that we're moving forward, and maybe that would inspire them to incorporate the OKD builds in their workflows once they understand what's in OKD, and it won't feel so bad, like they're doing it on their own with no appreciative audience. So let's think about that, maybe for February. I was thinking about March for an OKD end-user summit, but it sounds like people might need these community operators, or some of them, a little sooner than March. Maybe, Joseph, if you could do that first one with Tekton, put it into this OperatorHub thing, and keep sort of a journal of what you did so that we can turn it into a set of instructions, that might be a good start. Then at whatever the first meeting is we have in 2021, in January, I don't know what the date is at the moment, we could talk about doing that; you'd have a little set of documentation and have already done one, and we can see if that worked and whether we need to create our own Quay space for ours. And I'm pretty tied in with the operator community folks, so I could probably find everyone to come and do something like that. And should I use my private Quay repository for the OperatorHub, or for the community OperatorHub? I think for now you can open a PR to the community-operators repository referencing the images in your own namespace; then we'll review the PR, rebuild or mirror the images into a new location in the OKD working group namespace, and you can update the PR later. We don't want to block you on this; feel free to just open the PR, because the biggest content of that PR would be the package manifest and all the bundle YAML files, which you can hopefully just copy from the Tekton CD pipeline operator repository. And that is actually what we've been doing in the Windows Machine Config Operator. That was kind of the first operator that was released for OCP first, but we've had releases for OKD for, I think, more than six weeks now. So that was the first operator where we strictly did that upstream-first, community-first paradigm, and now we want to take that and make all the other teams do it the same way. But obviously that is a longer process, because even in the Windows team there's one person who just does it manually: they build the images, they push them to Quay, because they have the secret, and then they update the community-operators repository with those changes. We will have to automate that for it to be viable for all teams; right now it's still too many manual steps. And I think if we can go ahead now and create the first versions of the community operators, it'll then also be easier to just tell teams: look, it's already in there, you only have to update the already existing operator, which is much less work than the initial "what do I have to put into the community-operators repository". If there's already a version there, it can just be taken as a template for the next one.
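The "rebuild or mirror the images into the working group namespace" step could look something like this, with both namespaces hypothetical:

    # Copy an operator image from a personal namespace into a shared one,
    # registry to registry, without pulling it through a local daemon.
    skopeo copy \
      docker://quay.io/my-namespace/tektoncd-operator:0.15.2 \
      docker://quay.io/okd-wg/tektoncd-operator:0.15.2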
Okay, well, I will do that. And the very last question, a short one: the version of the Tekton operator compared to the OpenShift version is completely different, and I don't find any clue in the repository to this version 1.1 used in OpenShift; there is only a version 0.15-something. Are you sure that the source code of the OpenShift version is still the same? The OpenShift version is the product version of OpenShift Pipelines, not of Tekton. Because I think I'm running 0.16.8 right now; that's the version of Tekton. Ah, you mean, okay, it's the operator version, but, ah, okay, yeah, yeah, I understand. Okay, that makes sense. Exactly. For our community operators, we could either have a completely new versioning scheme, or we could look at the Red Hat operators' versions and stick to them; but yeah, the versioning is really kind of separate from the project the operator concerns itself with. Yeah, I'm pretty sure, Joseph, if you look at the branches in that Tekton CD pipeline operator, that's where you see the version that corresponds with what's in OpenShift. There's a v1.1 and a v1.2, but if you crack open those files and look inside, those have the versions of the images that are being pulled, and that corresponds to the 0.15 or 0.16. Okay, thank you. All right, Joseph, that cannot be your last question of the year. I think I took very much of this meeting's time. It's okay, I think. That is quite okay. So it sounds like maybe merging the hackathon and the contributor summit, or doing some two-day thingy, might be in February, and March might work out nicely; let's just bring it up at the next call in January and see if we can get that moved forward. Is there anything else? I think it's a good idea to invite those operator teams, have them present their operator, and then have them help us as a working group create a community version of it, because I know they would like to see it, but they just don't have the time as a team. Nobody schedules that, because it's not really... they have features to implement, and it's not that high a priority for most teams. So giving them this kind of opportunity, "hey, tell us how it's done and we'll do it for you", I think that'll be appreciated by those teams. Yeah, so the other question I have is: do we have a list somewhere, a canonical list, of the ones that are missing, that you can share with me? You don't have to do it right now, but... So we have this wish list that we wrote a few months ago, probably. I'll try to find it. If you can find that, then I'll look, and after the January meeting I'll try that, and I'll talk to Daniel Messier, who's the PM for operator things, and a couple of the... I'll go back to a couple of the working group meetings and the Operator Framework folks. But it's really about tracking down the individual projects themselves, like the Rook or the Tekton person, and getting them there; though we might get some oversight from the Operator Framework, or support from them, as well. So we'll see. I think it makes sense to start with Tekton, because then we could use Tekton pipelines to build everything else. There you go. So that sounds like a good plan for 2021. Here we go, the HackMD. There you see it. I would not have found that, Christian. Thank you. That's five months old already. Time has flown. Flying, yeah. Yes, yes, it has. Yeah, so maybe it's time to update this too.
So we'll put that on the list to update. Especially, the operator is pulling lots of images from Quay. Are these images the same ones used with OpenShift? Because all OpenShift images are pulled from registry.redhat.io instead of Quay. Are these only mirrors? Is Tekton modified in any way by OpenShift, or is it only the operator that is from OpenShift? Oh, is Tekton not on this list? I would have expected it would be. It is, it's at the bottom. Oh, okay. So what do you mean? The operators aren't part of the payload; they're not part of core OpenShift. So the images that are referenced can really live anywhere for operators; you can push them to Docker Hub or to Quay. Yeah. But do we have to build them on our own, or can we rely on the images being the correct ones? We'll have to build them ourselves. There is no automation as of now. We have some automation for this in the Windows containers, the Windows Machine Config Operator, now, but even there we only use it for CI testing; the actual release is another build of that. In the future, obviously, we wanna mirror that from CI ideally, but we will need to build automation for this for it to be viable for all the teams to just do it. Right now it's a few manual steps: we currently just rebuild the images and push them, and there is no automation. Tekton consists, I think, of 20 to 30 images, so if it means we have to build every one on our own, it's a huge task. Okay. We can follow up on this later, maybe offline, but I don't think you're gonna have to build all those individual images. Really, what we need to do is build the operator to reference those images, if we wanna use the ones that are already in Quay.io. Yeah, that was my question. Because I actually built a disconnected install for Tekton version 0.1.18 using the files in the path; I'll post the path. So I basically pre-downloaded all of those. It would make things a lot easier if you could use those. If those component images are built in CI, we can mirror them, and then we'll just have to rebuild the bundle image with the references to all of them. Yeah, thank you. The bundle image, that was the terminology I was searching for. This is the thing that has all the manifests in it, of what to pull. Yeah, I have done the same recently and it worked great. It worked with the Quay images; I only had to rebuild the bundle image, and I changed the references in the ClusterServiceVersion to the images in my Quay folder. That was the one step I made; it was a simple sed command in one of the scripts, and everything else worked out of the box. So I was hoping that that is all we have to do to get it working. One thing that is different is that the operator OpenShift delivers also creates some scaffolding: it automatically creates the pipeline service account and the openshift-pipelines namespace, whereas the one in the link that I just sent you is more generic; it creates a tekton-pipelines namespace and does not create a service account. So we probably want to take the code of the OpenShift operator and use it to build the bundle, so it creates all the boilerplate as well. So, quick question: this little journal or documentation that I'm asking Joseph to create on his journey to build this Tekton CD Pipeline community operator, would that be considered a recipe? Oh, yeah, absolutely. We need to get our recipes space going.
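The "simple sed command" step described above, sketched with placeholder namespaces; the bundle.Dockerfile name assumes an operator-sdk style bundle layout:

    # Point the ClusterServiceVersion at your own mirrored images...
    sed -i 's|quay.io/upstream-namespace|quay.io/my-namespace|g' \
        manifests/*.clusterserviceversion.yaml
    # ...then rebuild and push the bundle image that packages those manifests.
    podman build -f bundle.Dockerfile -t quay.io/my-namespace/tekton-operator-bundle:0.15.2 .
    podman push quay.io/my-namespace/tekton-operator-bundle:0.15.2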
I created the organization, and it's empty still. I know. And Jamie from UMich was gonna do one on UPI or IPI, one of the UPI ones, I think. But I think he's off on holiday, which is why he's not here. I'm pretty sure the IPI stuff is not very interesting, since you run a command and everything works; so it's probably a UPI one. It was a UPI one. I was just trying to get my acronyms right. So yeah, maybe creating this recipe from whatever Joseph's doing, and then having that available for the hackathon, would be a good way of capturing the documentation on this. And what is the name of the OKD recipe repo? I actually created an organization for us to use, called OKD Cookbook, right now. I think, Diane, you and I are the only members. Can we find the URL for that and pop it in? Yeah, I'm getting it right now. It might be on the community websites thing, but I thought it was in the community repo or something; we'll see. Could I get access to it? Because I have a Tekton repo already prepared in my own. Yeah, so if you put the repo up there and find it, there you go. And let me just see if we can give Joseph access; I think since you created it, you might have to be the one inviting people. Let me see, Joseph. There's some container application scaffolding stuff that I'm trying to get opened up that might be useful to have docs or instances or examples of as OKD Cookbooks, if that fits the purpose. If so, it'd be nice if I had access to that board too, so I could add stuff when that is figured out. Although, admittedly, I don't know what OKD Cookbook is for. It's for recipes that tell people how to do stuff. Okay. In OpenShift. Yeah. Okay, so then I might be interested in that too. And you know what my username on GitHub is, right? Yeah, remind me. Conan dash Kudo. Yeah. And I'm forgetting my passwords, for everything. Yeah, you can add them. Conan, that's right. Conan the Barbarian, there we go. No. Not quite the right reference, but it still works. It's the detective, right? Yes. See, Christian's got it. If you could also, Charo, can you dig up Jamie McGreed, I think that's his last name, from UMich, and add him to that? That'll probably help nudge him along. Do you know his handle off the top of your head? I'm looking, I'm looking. Let me just, GitHub's making me log back in. I love that I have to remember passwords. Cookbooks. Oh, I need the cookbook. There we go. I exist. I've got some stuff that I'm quote-unquote cooking up for running stuff in OKD or OpenShift or Kubernetes or whatever; some of it might make sense to throw into that, both personally and potentially organizationally. I just created one called lab-monkeys, where I'm working with some guys building out a Quarkus application with Tekton pipelines. Demos. It's intentionally three microservices; it's more than a hello world, but less than... But less than Hellfire. Exactly, right. Less than the kitchen sink. Right. We wanted something where people could see useful stuff, databases and messaging and reactive, but not have a million lines of code. Right. It would be cool if we could find lots of Red Hat repos with Tekton pipelines in them. You know, eat your own dog food. Well, I mean... Yeah, a lot of the OpenShift folks, at least, are still using Jenkins. Internally, we're starting; you know, I'm evangelizing.
So, Charo, this is something that I actually have been thinking about a little bit. You know about Pagure, right? The Git forge stuff that we use in Fedora and such. So, Jenkins sucks, and there's also been an ask to have a containerized Pagure. I don't know what I'm doing, but it's a relatively straightforward Python, Flask-based application with minimal requirements. It'd be super cool if we could maybe work together on doing a thing where we take this application, containerize it with Tekton CI/CD, and make it run using stuff from OKD, using, you know, the Postgres operator and all the fancy things, and make it a non-trivial, production-worthy application that people would use, so they can see how this fits and plugs into the rest of the OpenShift lifecycle. Like using Tekton CD as the CI for code hosted inside of the thing, and so on and so on. It's been a personal project that I wanted to do but don't know how to do. So, is that something that might be interesting? It'd be cool if we could work together on figuring out how to do that. Yeah, let's start chatting about it on Slack. Yeah, sure. I should be in both the Kubernetes and OpenShift Commons Slacks, so hit me up on whichever one you wanna talk to me on. Kubernetes, we've already... Yeah. I'm on both, thankfully. Thanks to Diane. I hook everybody up. I need to drop, because I'm actually doing a Quarkus demo here in just a few minutes. I have a quick question for Christian. So, thank you, Charo. Have a wonderful holiday. We'll talk to you in January. Hopefully nothing will go wrong between now and then. And just write some recipes. See you guys. Bye, Charo. Bye, Charo. Bye, Charo. Thank you. Quick question for Christian on the Tekton Hub there. Is that brand new? I think that is pretty new; it's still in preview. It's another hub to share things, in this case Tekton pipelines and tasks. Ooh, that's definitely getting shared internally. Why wait, I'm tweeting that. That is awesome. Looks nice. Nice. So yeah, I saw that recently. I'm not sure how old it is, but it seems pretty new. And it is, I think, an easy way to share those kinds of things without having to have the sources in the same repository or organization. Obviously, for the bespoke OKD things we could always create a repository; they have official, verified, and community tiers there. Yes, that would be my thinking. And it should have our bow tie on it or something. All right. So I think I'm gonna stop sharing, and I think we're almost at the end of our hour. So thank you, everybody, even the silent folks. Yeah, and Joseph, I did not hear the interesting part. Is that still here? Ronald, I think, had a video. So much for that, I guess. I think her connectivity basically went to zero. Yeah. So if someone could add me to the members of this repo, it would be very nice. Hey guys. So I think Charo maintains that; if he hasn't done it already, he has to do his demo now, but I just sent him your GitHub. Thank you. Yep. And I just, for no reason at all, dropped, and as Eugene said, that's the end of the year for us. So thank you all for coming, and we will see you in 2021. Stay healthy, and keep writing recipes, if you didn't hear the last thing before I dropped off. Thank you all. Thank you for hosting us all.
Have a nice Christmas and stay healthy. Yeah, thank you. Thank you, thank you, Rose. Probably the last one. There we go. Okay, take care, guys. And wherever you are, just be safe. See ya. Thank you. Bye. Bye. Bye.