Yeah, I would have to rejoin, but I could show a single picture. I'll be right back. Okay, we'll give him a minute and I'll turn up my volume. The other thing I wanted to do is get people to sign up, and I'll create a little form for that, for the August 17th live demoing of OKD 4 on each of the different platforms. So I'll send an invitation and a note to the mailing list to get people to sign up and volunteer for demo sessions. Does anyone have an idea about how long each of those sessions should be, half an hour or one hour? Does it take a full hour to deploy OKD, say, on GKE, and talk about it? I think one hour will be sufficient. One hour is generally the case; it takes 12 and a half minutes just to deploy a single node, but about 32 minutes for the three-node setup. Yeah, I'll actually test a bare metal mirrored install and time what I've been doing in my lab, because it's not too terribly long when it's pulling from a local registry; that speeds it up some, since it's not having to drag the images across the Internet. So if I make them one-hour slots, we can schedule them across the whole day, however many we have, and live stream them, with Q&A in the extra segment on whatever the platform is, whether it's Azure or GKE or AWS or bare metal. That's my goal and I'm sticking to it for now. And then I'll create a landing page for that, sort of like the OpenShift Commons gatherings, but I'll do it off of okd.io, as opposed to off of the Commons site, and people can come as they wish. Each hour would probably have to start with five minutes of "what is OKD" from the presenter, just a couple of minutes about what it is, and then go into the demo. And by the end of the day, this is me being sneaky, I'll have six or seven videos on each of these topics for people to use from the website. My motto is renew, reuse, recycle content everywhere, and that's really what I'm going for.

So, Vadim, you should be back in now, and I see Christian is here. How about we kick off today's meeting with an update on where we are, any feedback on the GA, and any engineering stuff we need to be aware of?

I guess I'll go first. Let me share my screen so we can see some stats. You might know that if you're using Red Hat's pull secret, OCP, and OKD as well, report data back to our servers using telemetry, and we use that to build some very useful stats. Here are the stats for OKD for the last week. The orange graph is the GA release, so we can see it steadily growing. For the others: yellow is beta 5, about 40 active clusters right now; the blue and green are beta 6 and beta 4 respectively; and something like five RC clusters are active. Based on that, we can say that upgrades are not very popular, which is a bit surprising, but that's something we should improve on. And we've got a lot of new installs which are persisting, meaning the number doesn't drop, so people aren't just destroying their clusters, which is pretty good, I guess. We have no clue how to estimate how many clusters use a fake pull secret, but I am assuming the numbers are similar.
On the issue side, I don't think we have any new interesting bugs. I think the only one is that you have to create the workers twice, that is, you have to run create manifests twice because of the OKD-specific changes we introduced [a sketch of this workaround appears below, after this round of updates]. We can fix that issue a bit later. And that's pretty much it from my side, I guess. Christian, do you have anything to add?

I don't think so, honestly. I've been head down in the code for the last couple of days, ever since we got GA out, so not much more to say from my side. I'm definitely interested in feedback from folks that have installed it. Maybe one thing: we're in the middle of migrating the OCP installer to Ignition Spec 3. Very soon we'll have RHEL CoreOS with Ignition v2, which supports Spec 3, and then the OCP and OKD installers will be much more aligned than they are right now. The MCO we have successfully merged together, so from 4.6 on there will be just one branch. Vadim and I are currently working on figuring out how to test both OCP and OKD from that same branch, but we'll figure that out. Right now, obviously, we're at OKD 4.5 GA, and I just want to say I'm very happy about that. Thanks everybody, thanks Vadim, thanks Diane from my side. Great work, especially Diane for keeping us on edge with this, and Vadim for just doing lots and lots of work there.

I think we have a question on the telemetry: does it skew the results if we create and destroy multiple clusters over a period of time, or is the telemetry counting actual active live clusters? This graph counts active live clusters. CI is doing the very same thing, creating a crazy amount of clusters, so we can see some jitter. I think we could come up with a better graph where, if a cluster has lived at least a day, we keep it in the stats, but that's something we should invest in on the OCP side as well.

Okay. I flashed it up on the screen for a minute, but I just wanted to share as well, and I'll try sharing again now. The survey that I sent out about adoption was really just meant to get us a baseline right now, and I can redo it in three or six months, or on whatever cadence, so we can watch this going up, and I'll share the results here. I haven't done any really deep analysis, but some of it's pretty obvious, and I think a lot of the responses came from people in the working group, which is natural, but there are a few outside folks as well. A lot of it reflects that it's still very early days. What I was really interested in is what people were looking for in terms of what we can do as a working group to help them: developer workshops, operational workshops, and, as always, better documentation on the OpenShift side. We are also seeing a lot of asks for help with migrating from 3 to 4, which didn't surprise me at all. There was some basic stuff, and the lack of dual-stack support came up in a couple of issues. It's really still pretty early days, and if you saw my tweet with the survey in it, please retweet it so we can get more people in the door. I'm not surprised by any of it, other than the fact that the colors don't coordinate with the words on the graphics, but I can fix that too. So I'll send a copy of the results here, anonymized.
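[Editorial note: a minimal sketch of the create-manifests workaround Vadim mentioned above. The directory name is only a placeholder, and the need for the second run is taken from the discussion rather than a tracked fix, so treat this as illustrative and check the actual issue for the real resolution.]

```sh
# Workaround sketch for the OKD issue described above: per the discussion,
# the worker manifests only come out right after a second "create manifests" run.
# "okd-install" is a placeholder directory containing install-config.yaml.
openshift-install create manifests --dir=okd-install
openshift-install create manifests --dir=okd-install   # run again, per the described workaround
openshift-install create ignition-configs --dir=okd-install
```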
But it's pretty interesting feedback, I think, and it's a good baseline. And since we don't do any gatekeeping on OKD, who downloads it, who deploys it, it's very interesting to try to figure out who is actually using it out there in the universe. So the surveys will just keep repeating, and if there are other questions I should be asking, please let me know. I based this one on a survey we sent out to some OpenShift folks, so I can compare OpenShift folks to OKD folks. That's my bit for the day; there's nothing here hugely shocking from my point of view. I'll let you all take a gander at it and share it with the group.

The thing that I heard from Joseph's post was that there were some conversations about operators that are available only for OCP, and I'm wondering, Joseph, if you want to express what you need here, and maybe we can figure out whether that belongs in this working group or needs another set of resources attached to it.

A colleague of mine was using a few operators, like the serverless operator, without knowing that it is only available with a subscription, because it's very easy to get an image pull secret and we didn't understand that it's only for trials. Now, after the GA of OKD, we tried to set up everything in a clean environment and were surprised that some of the operators, the serverless and the service mesh operators, weren't available for OKD, and this was surprising for us, because when you install OKD it's not so clear what the limits are, what you're allowed to use. I was talking with Vadim about the service mesh operators; there is a substitute called Maistra, but I don't know how often it is updated, and its version is a little bit behind the service mesh operator from Red Hat. The serverless operator is not available in a community version at all, and I think it would add lots of value to OKD if these were available, because they are based on open source projects. So there is little understanding on our side as to why they are not maintained in the same fashion as OKD itself, which ships images that are the same as the OpenShift ones, which is great. The images from OKD and OpenShift are, I think, almost all the same, and it would be great to have a similar situation with the most important operators.

One thing that kind of threw me for a loop: I looked at what Operator Hub presented to me, and it's kind of empty. There's not a lot available to you, and that's actually more depressing and disheartening than I imagined it would be, because during the betas and the nightlies, and I've played with a few of them here and there, it looked incredibly full and incredibly functional and you could do so much. And now you can basically do nothing. I don't know what's behind all that and why, but it's kind of sad, because there are even operators referenced in the documentation that you do not have access to. They do not show up, and that's really not great.

We won't ever be able to make you happy, Neil, I think. No, kidding aside. Wow, that still hurt, Christian. No, that's definitely a thing we will look at now that GA is out. One thing that isn't super visible from the outside is that internally at Red Hat those are different groups, different teams working on these things.
So we have the core OpenShift, which is OKD, and it doesn't include any of the operators that are available on Operator Hub, either for free as community operators or by subscription. Now that we have the base working, we can actually approach the teams that would be working on getting those to work on OKD, so we will do that and, yes, we will follow up on that, definitely. I think the KubeVirt operator just merged a PR last week to make it work on OKD. I'm not sure whether they'll be promoting that to Operator Hub right away or whether it may already be there, but that should technically work now, and we'll follow up to make the KubeVirt operator and the serverless and Maistra, well, serverless and Istio, operators also available there. Yeah, definitely. I agree that is a very good use case and we should work on that.

Is it possible that they also get built together with releases of OKD, so they stay in sync? No, no, it's not; we have completely different life cycles there, which is actually a feature, because they're services and not part of the core, so the life cycles are completely independent of each other.

One thing I was a little surprised by, and I see it right now when I look on operatorhub.io, the website, and it's there, but when I looked inside of OKD, the Rook operator, for providing the storage back end for your OKD cluster, was not available, and that actually kind of threw me for a loop. I kind of expected it to be there, because a lot of the documentation leans very heavily on saying, hey, you really should be using Ceph for the storage, and there was no Ceph, and that was a little weird. That was the biggest glitch that I saw.

The difference here is that operatorhub.io lists Kubernetes operators. They are considered upstream: the people who do the Istio and Maistra operators test them on pure Kubernetes and expose them as Kubernetes operators. The problem is that some of these operators are known not to work on OKD, because of FCOS, because of other issues, and so on. This is why they are hidden from the community side. This discrepancy also differs from OCP, where we package them and can prepare a custom version. In the end, we had a chat with people from Operator Hub, and they said it's mostly a problem of the team's time: they are unable to manage what are essentially three different streams of an operator, and we are working with them on how to introduce community streams that would support Kubernetes versions, OKD, and so on. It's a very tricky problem, and we're just taking our first steps with KubeVirt and image streams on Fedora. Hopefully the results will be very positive so that other teams adopt it. But we are crossing into the territory of the Operator Hub teams; we cannot tell them how to live their lives.

But you know what, I also work on that project too, so I can speak to that. I'm not quite sure which person you were talking to, and I'm glad you've already talked, but I really want to make the distinction: operatorhub.io is Kubernetes generic, and a lot of those operators haven't been tested, and there are thousands more out there. I just haven't done a lot of outreach to populate it yet, because, to be quite honest, there isn't a lot of automation behind operatorhub.io. There are humans testing them, and there is no certification process there at all. Really, operatorhub.io is basically just a catalog that you could stand up yourself.
Anyone could stand up their own catalog and put a UI around it, so it's a pretty simple website, and the front end is open source as well for operatorhub.io. Okay, so yes, an operator wish list for OKD, that's a really good idea. That would be helpful for us to prioritize, because we want to get there eventually, so we'll just have to keep bugging the teams and maybe put some of our own work hours into this. But there are many operators, so if you could please add all the operators you want to see, the ones that are most urgent for you, into that list, that would be helpful. Yeah, I'll add it to the community page as well; let me see if I can copy that.

I think Neil is typing as we speak. No, Neil is not; somebody else is doing it. Neil isn't doing anything, because Neil doesn't actually know what these are all called. Neil, you can get the Rook operator deployed using the YAML file. Okay, yeah, and I dropped a link to a copy of it that I made. Okay, that'll help, because I'm now starting to look at what it's going to take to replace our OpenShift Origin cluster with OKD 4, so it's going to be interesting. The preliminary exploration has begun for doing it for real. That'll be helpful, because this time we want to do it right, rather than what we did before; I'm not proud of what we have right now.

Yeah, and Joseph, when you and I were talking about this, it was mostly the service mesh stuff you were interested in first, and serverless. Yes, but serverless aside, I understand at least the service mesh is available in a community version. It's not up to date, I think, but there have at least been several efforts to publish it to the community catalog, which is great. But I don't know how often it will be updated or how well it is tested, and the same goes for every operator in the catalog, for sure. Still, this would make OKD similarly feature rich to OCP, I think, which would be great, because we were waiting so long for OKD to have a service mesh, and now it doesn't. Yeah, and obviously it previously was supported, so, defeat from the jaws of victory and all that.

Is the developer content that sits behind the samples operator in kind of the same boat? No, it's worse off. With the samples operator, nobody has any samples to provide to begin with. The samples operator for OCP is populated with a mixture of UBI and non-UBI content, and separating all that stuff, I've looked at it personally, is complicated. It's probably a lot easier to go back and build up content based on a Fedora base image and a CentOS base image and start putting together a mixture ourselves, because the stuff they use for OCP is not reusable at all. The samples operator framework is great; the samples it provides are not usable for non-OCP users. That's the problem with it. But anything UBI based is distributable, right? Careful, careful: anything UBI based, as long as it doesn't layer anything else on top unintentionally.
And that's what makes it a little bit of a trap, because if you build something using UBI images on top of a RHEL host, your RHEL certificate, your RHEL subscription, gets pulled in and activates the extra content automatically. So unless you explicitly do work to make sure you don't include it, you leak in RHEL content, and with the way that UBI is currently made, I am not confident that those samples don't contain any non-UBI content. That's why it's much easier for us to just make it ourselves, with a system that literally cannot pull from RHEL. We have filtered out all the images which contain non-UBI RHEL packages; that's basically OpenStack's Ironic. Yeah, which is yet another thing we have to get built for OKD, and that involves RDO stuff. Who is the point person you're talking to, Christian, for the RDO stuff, if anybody? Nobody, I don't know. Yeah, I think it was Steve Hardy, basically the folks working on metal3. Okay, we should contact them directly.

Yeah, I think our next step is to get that wish list together. If for NVIDIA it's just the drivers, then we can try to figure out who to coordinate with and put names next to those items, whether it's Red Hatters or NVIDIA people, and move that forward, because I think that's a significant piece of work on a lot of people's parts. And then there's the ongoing maintenance as well, so you've got to get buy-in for them not just to do it once, but to do it continuously.

It is also unlikely that we will be able to make the NVIDIA GPU operator work. Just from a practical perspective, it is unlikely that we can ever make that work, because my understanding is that it relies on the stabilized RHEL kernel ABI to function properly on RHEL, and I do not know of a good way to make this work on Fedora. So I'm not going to offer any hopes about that working right now. Why? We can build the drivers using kmods; we're using stable Fedora 32 and not some weird kernel. Because the problem is, I don't know how you're going to make sure you match the running kernel as things move forward. We won't build them on the host, we would build them in containers. And if UEFI is activated they won't load. That's not OKD's problem. No, but it is your problem with the GPU operator: they don't load, so it's not going to work. It works on RHEL because there's an ugly, very hacky, terrible thing they've done to make it work even in UEFI mode, but it will not work on Fedora right now. I don't currently have answers for how to improve that, though it is something we're tangentially looking at in the Fedora Workstation working group, because it's causing other problems: people who put Fedora Workstation on laptops with NVIDIA GPUs enable the driver and it doesn't do anything. So there are problems to solve there. I'm just giving this warning that it is unlikely we will have NVIDIA GPUs working in all cases. In Azure, for example, it's just not going to work, because of that.

Beyond the list that we have here now, are there other ones? I mean, someone was just asking about what we filtered out.
And maybe not right this instant, but if you can grab a list of what got filtered out, that might also be a thing to add in here, not as a wish list but just for reference. I don't see a reason why those should be filtered out; all of them are optional. Some of them might not work in your setup, but that's a different story. But for us, this list is helpful because we can start contacting teams and asking them to revive their OKD support, basically. If we get some of them, that's great. We're not committing to getting all of them by, I don't know, next week; that's just not going to work, that's outside of our reach.

And I was just going to ask a question about the service mesh one: if I brought in, say, Kuma from Kong, let me get my names right, or Tetrate, that's not the same one we deliver through Operator Hub inside of OCP, but does that help at all? I think the more the better. Yeah, I'm just thinking there are at least two other Envoy-based service mesh providers beyond Istio that I know of, that I could ask to see if they will put theirs in, first of all on operatorhub.io, which I haven't done and should, and then see if they'll test it on OKD. That's another possibility. And we've all probably been watching the Istio, Knative, and Google conversations; it might not be a bad backup plan to have those available as well.

I think this GPU thing is very important, because for machine learning it's a best practice to use GPUs. Yeah, it should really be supported. So, one of the things: just yesterday I did an ask-me-anything session with the Open Data Hub folks, and I'm trying to get them to run Open Data Hub on OKD, and I'm pretty sure they've already tried and done it successfully, we just haven't demoed it. There's a pure open source stack for Open Data Hub, which is just a reference architecture for ML and AI; it's not a product yet from Red Hat. I say "yet" because I'm hopeful, but I'm always hopeful. As far as I know, that's with the Office of the CTO, and I was approached by someone from OCTO; we chatted throughout last week, and they're setting up OKD right now for, I think, several demos and several architectures. So that's probably going to come soon. Yeah, so maybe offline, Christian, we could figure out what those demos are, get them staged, and broadcast them out to the universe. I'd like to be in on whatever those reference architectures are; the more content we can get, the better. And I'm thinking the Open Data Hub one will drive the GPU piece, because they rely so heavily on GPUs, so that might be a way to nudge the NVIDIA people, if we need to do anything, or at least get some documentation there.

CodeReady Workspaces, what's the status on that, Christian and Vadim? Is there any movement at all? No, I don't think I have contacted that team, but it doesn't require any fancy stuff on the host, so I didn't see any reason why it shouldn't work. That's another one that you can deploy now with the YAML files if you go straight to the project. Even though it may not show up in Operator Hub, you can go upstream, get Eclipse Che 7, and deploy it via the operator. So it's just a matter of packaging. Yeah, just like with Ceph.
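[Editorial note: a minimal sketch of the "deploy it straight from the upstream YAML" approach mentioned here and earlier for the Rook/Ceph operator. The repository paths and file names are assumptions based on the Rook 1.x upstream layout and may differ between releases; the link shared in chat is the authoritative copy.]

```sh
# Sketch: deploying the Rook operator and a Ceph cluster from the upstream
# manifests, assuming the Rook 1.x repo layout (verify paths for your release).
git clone https://github.com/rook/rook.git
cd rook/cluster/examples/kubernetes/ceph
oc create -f common.yaml               # namespaces, CRDs, RBAC
oc create -f operator-openshift.yaml   # Rook operator with OpenShift/OKD-specific SCC settings
oc create -f cluster.yaml              # example CephCluster; edit node/device selection first
```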
That's how I've been deploying it, just via the YAML files straight from the project. And apologies if you've already done this: have you blogged about that or demoed it at all, Charles? It's in the documentation for my lab; I did drop the links in there. Actually, I'll go ahead and bring this up: I started preparing a pull request to see if you guys like this idea, for our okd.io site, to add a section for recipes. Just little short snippets of "how do I install Eclipse Che in my OKD cluster." One of them I've written up is deploying Ceph, or adding persistent storage to the image registry, that kind of thing [a sketch of that recipe appears below]. Something shorter than the actual documentation, but easy to find.

Yeah, I would love to do an OKD cookbook, just saying. I did a whole bunch of them when I was at ActiveState, for Python and other languages, with people. I think that's a really effective way to get recipes and examples out there. So we could do okd.io/cookbook and have a whole bunch of recipes there, and I think that's a known thing in the tech world, to do cookbooks like that. So put an issue in on the okd.io site about this, and I can set up the infrastructure for it, and people can just do pull requests to add recipes. As Joseph knows, I basically just merge stuff anyone gives me and pray; merge and pray is what I do. So I'm happy to do that. And I actually think that would make a good ebook to share, if everybody brought their recipes together; that would be a really great way to do it. A plus for that. And I'm pretty sure there was someone on the CodeReady team who was looking at building an OKD CodeReady thing; I'll dig up the name from an email, I have it somewhere, Christian, and we can figure that out. Someone was working on it, I just think it didn't get published anywhere. So, yeah, I'm going to share my screen again. What else should we cover here today?

Yeah, I have a small thing to talk about. We are preparing OKD for some of our production clusters, and we found that we had problems integrating the monitoring into our environment, because we have a central monitoring system which monitors several OpenShift clusters. I was talking with Vadim about it in the Slack channel, but I'd like to talk about it here as well. We had to turn off the monitoring operator for that; we had to bring our own monitoring stack to OKD, because the monitoring operator is overwriting our Prometheus rules and dashboards. I'm just asking if it's possible to turn off some modules you don't want, for whatever reason, during the installation, without any hacks, because in other distributions colleagues of mine always show me: hey, here's a button, switch it off, and you don't have to mess around with anything you don't like. I think it would be a great advantage if that were possible, at your own risk, sure, because then you are responsible for everything. It is possible to feed the UI with metrics; we have achieved that today, my colleagues did it and it works, but we had to do a few hacks, and I think it's worth thinking about being able to turn off a few things you don't need, or want to replace with something different, rather than being forced to keep them.
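[Editorial note: as a concrete example of the kind of short recipe discussed above, here is a minimal sketch of giving the integrated image registry persistent storage. It assumes a default StorageClass exists and that leaving the claim name empty makes the operator create its own PVC; verify against the current registry operator documentation before relying on it.]

```sh
# Recipe sketch: switch the image registry from Removed/emptyDir to PVC-backed storage.
# With "claim" left empty, the operator is expected to create a PVC
# (typically image-registry-storage) in openshift-image-registry.
oc patch configs.imageregistry.operator.openshift.io cluster --type merge \
  --patch '{"spec":{"managementState":"Managed","storage":{"pvc":{"claim":""}}}}'

# Watch the registry pods roll out with the new storage.
oc get pods -n openshift-image-registry -w
```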
This is a very tough topic. And when Joseph says "adjust it," it means rip it out entirely. That's the biggest problem here, because OKD has to have all the features OCP has, and one of the features of OCP is constant monitoring, and it's embedded very heavily in every single part of the product. Other operators might report Degraded if they cannot inject their metrics, because the cluster monitoring operator (CMO) is gone entirely. So the most gentle solution right now would be minimizing CMO. That aligns with the goal of the CodeReady Containers team as well, because they don't need two Prometheus instances, which are very memory hungry. The biggest issue with CMO is not that it's invasive; the problem is that it runs two Prometheus instances, each of which uses at least a gigabyte of memory, and you cannot disable it because other operators require metrics as well. So we will start gently pushing ideas to the team to minimize CMO. And that would be at your own risk, similar to what CRC has: you can have a non-HA Prometheus instance. The problem is that your cluster won't be able to upgrade safely, because you're losing HA, and we cannot guarantee that it's going to work. That can be worked around, yes; we'll see how it gets implemented. But there are options.

Sorry, no, go ahead, I didn't realize you were still talking. A more brutal option available right now is ripping CMO out of the manifests in the release image. You can replace it with a dummy, I don't know, UBI images; they would just be there and have no manifests. CVO would say "I did my best, I applied everything we had" and move on. Again, you would have to maintain your own fork for that, which is not really complex, but that's not OKD anymore. Another brutal option: you can set an override in CVO and scale the monitoring operator down to zero [a sketch of this appears below]. You won't be able to upgrade, because you have overrides set in CVO. So the nice options are pretty limited right now, mostly because almost every core operator is actually core and very critical; we cannot disable them, they were carefully picked. But we definitely will work on minimizing CMO's impact. It could be a cluster profile, basically, where you have a non-HA Prometheus which doesn't take tons of memory.

The problem is not the memory consumption, but that you can't deploy a second Prometheus operator, because you can say which namespaces the second Prometheus operator should watch, but you cannot say "please don't watch these"; it's not exclusive. Yeah, and that's why it's hard to set up your own stack. But that's basically a semi-bug, because it should not be picking up your rules in user namespaces; it should only be limited to the openshift namespaces. And if your solution is trying to watch the openshift namespaces, hmm. Yeah, that's a tricky topic, and the use case is very unusual, so we might have to have a chat with the monitoring team on that. But yeah, I think the broader issue... no, go ahead. I consider this to be a semi-bug, because it should be possible to limit it; there should be a setting, and if your use case would be useful for others, I'm pretty sure they would add it and you could use that setting. In the worst case, you can maintain your own fork with rebases, but that's not really a fun time. Yeah, the broader issue here is really that this is something we don't support, not in OCP and therefore also not in OKD, right?
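[Editorial note: a sketch of the "override in CVO and scale to zero" option described above, assuming the ClusterVersion overrides API. This is the unsupported, brutal path: while the override is in place the cluster will refuse to upgrade, exactly as noted above.]

```sh
# Tell the cluster-version operator to stop managing the CMO deployment,
# then scale it down. Unsupported; blocks upgrades while the override is set.
oc patch clusterversion version --type merge --patch \
  '{"spec":{"overrides":[{"kind":"Deployment","group":"apps","namespace":"openshift-monitoring","name":"cluster-monitoring-operator","unmanaged":true}]}}'

oc scale deployment/cluster-monitoring-operator -n openshift-monitoring --replicas=0
```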
So what I would suggest, because it's definitely outside of what we can do right now in the very short term: in order to raise awareness with the team that actually owns that, just open an issue on the GitHub repository for the monitoring operator, asking whether it would be possible to deactivate that specific part. Just so they have a card that says this is an actual use case, because right now we don't offer the option to deactivate it, as Vadim said, since it's very integrated into everything. Thank you. Yeah, I interrupted Cheryl, I think. No, actually, you guys ended up where I was going; I was asking more about your specific use case. In 3.11 right now, both on the Origin side in the lab and on the production side in the data center, we are running two Prometheus instances. This is pre-operator, right? We've got the Prometheus that came with the cluster, which is monitoring all of the cluster infrastructure, and we followed the rules on that one and didn't muck around with it. But we did deploy our own set of Prometheus infrastructure in the cluster to monitor all of the apps, so we've got it watching the namespaces we deploy our apps in, and it's working fine side by side with what came deployed with the cluster.

For the apps we have the user workload monitoring feature, which basically spins up a Prometheus for your workloads, controlled by CMO. The problem is sending metrics back to a different monitoring system. I guess what could work is the approach used by telemetry: it sends a subset of metrics back to a different Prometheus server, leaving CMO fully intact [a sketch of this appears after this exchange]. This is how we get those fancy graphs: basically, your clusters are sending a portion of the critical control-plane data back to our servers using remote write. So instead of fully removing CMO and replacing it with your solution, you could send the very same metrics to a different Prometheus and maintain your monitoring system based on that. So is this cluster monitoring work definitely something being worked on by the OpenShift engineering team? They didn't say they would start working on it, but they are aware that this use case exists and that it should be considered. I'm pretty sure the answer would be "we won't disable CMO, ever," but I'm hoping they would come up with some alternative solution.

And if people noticed in the chat, I put a link to a form, which I also put on the community page. If you could sign up and tell me which sessions you'd like to do and what time zone you're in for the August 17th event, I will try to work out a schedule that fits your time zones. I'm pretty sure KubeCon is running on EU time, so it'll be early for me on the West Coast, but please do fill it in and we'll try to do that. And if multiple people want to talk about the same platform, like five people want to do AWS, we'll get you all together, you can chat about it, share the one hour, and one person can be the driver. We'll figure that out too.

There you go. We're almost at the top of the hour; we've got 10 minutes left. Is there any good news anyone wants to share with us? Anyone deploy a production workload on OKD 4 yet? I'm hoping I can do that next week; I'm replacing my home cluster, but I haven't started yet.
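[Editorial note: a minimal sketch of the remote-write idea Vadim described in the monitoring discussion above, assuming the cluster-monitoring-config ConfigMap accepts a remoteWrite entry for the platform Prometheus. Key names and whether this is honoured vary by release, so verify against the monitoring documentation for your version; the endpoint URL is a placeholder.]

```sh
# Sketch: keep CMO intact and forward selected metrics to an external Prometheus.
oc apply -f - <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    prometheusK8s:
      remoteWrite:
      - url: "https://central-prometheus.example.com/api/v1/write"
EOF
```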
Brian, do you think this counts as a production workload? As long as somebody else, except probably for you, depends on it, that's pretty much production to me. There you go. So Jamie has said that he is. Jamie, where is that production workload? Is that another home system? Well, if you do a production workload there on vSphere, you will get the prize. What is the prize? I've got to make one up first; I'm still figuring out the t-shirt situation, so we'll figure something out. If you are talking about a prize: we are shortly before getting our OKD 4 cluster into production for internal usage for different teams, and that's why I'm asking about this GPU thing, because I think it will come in the mid term. But yeah, we are planning on going live in the next few weeks. And as for Datto, we're working on figuring out when we're going to do our deployment. There are some underlying, unfortunate architectural things that we need to fix first, but we are starting to scope and plan our OKD 4 deployment to replace our OpenShift Origin deployment. So that's coming at some point, hopefully sooner rather than later, because nobody wants us to switch to OKD 4 more than me. I've got two clusters running in our lab, but that's preparing our team for an OCP upgrade, because we're still running 3.11. Well, I mean, you're better off than me, that's all I will say.

It sounds like Jamie's in the lead here, so we may make him our showcase on August 17. Mike, this is for the day job; this is not for Fedora Neil. Fedora Neil doesn't have anywhere to deploy OKD; he's too broke and doesn't have computers. There should be a GoFundMe. Oh, that's too frivolous to have a GoFundMe for.

So, Cheryl, I think one of the takeaways is that I'd like to have a conversation on the side with you about designing the cookbook recipe pages for okd.io, give you access to do so, and get that going, because I think that's a really great thing. And then please fill out the form that's in there. I mean, if I'm reading this right: Fedora had a commune? Yeah, that's what we need, the anarchist version of OKD, the anarchist's guide to OKD. That's next. All right. Well, I don't know, maybe Jamie should deliver the child first. Make sure the child is GA before anything else, otherwise there'll be some other problems in life. So let's see what we can do. And yeah, everybody, please fill out the form. I will try to create a landing page for the August 17 event that we can all use and schedule people on, so you can see what your time slots are, and we can promote it. It's a bit of guerrilla marketing during KubeCon, so we'll have to use our stealthy social channels and everything else to get the word out. There are probably 70 other things happening on day zero at KubeCon as well, but hopefully we can rise above the noise and at least capture all that content in a day-long event, and wear your T-shirts if we can get them printed and shipped in time, and maybe set up a store and sell T-shirts or something at KubeCon; T-shirts and popcorn.

I have a question; I've been thinking about this over the last few days. I would very much appreciate it if we could do some kind of hackathons for different tasks that would improve OKD, like this GPU thing, to get that working and propose a POC or a blueprint for how to do it. I would love to do that, because I think we have the knowledge in this working group to pick out several things that are too hard to solve alone.
Yeah, I don't know what you think about that. I think that's a great idea. That's definitely the way forward for us, and I think it might be something we can cross-pollinate with the Operator Framework group and co-host, if we can get our list together and identify the point people for each of the things on our wish list. That might be a good basis for a CNCF hackathon, now that Operator Framework is in the CNCF, co-sponsored with OKD, and I'd be happy to do that. So let's get that list prioritized and figure out who's who. I'll reach out to the folks on the Operator Framework side once the list is there and see how we could do that. So that's not a bad thing at all, and I'll add it to the list of possibilities beyond GA. But adoption is really where it's at right now, and more feedback; adoption will be easier once we have the operators, because people can run more workloads more easily. So that's a key piece of it. And again, creating content, updating it, doing the recipes, and continuing what you guys have been doing wonderfully: home lab content, live streams, openshift.com, Medium, all that stuff is huge. We'll just keep doing the outreach and getting more bodies here to talk on this call.

With that, who is livelace here, asking about profiling apps for CPU, RAM, and GPU? Is that a request, or is that something you'd want to hack on? I don't recognize your name. Profiling is a very interesting topic. If we focus on core operators, that would be extremely helpful to the OCP project. But live hacking on that, we would need to grow some expertise. That can be done, though; we just need more time to contact folks from the core team, because I don't really know where to start. I'm usually just bringing weird log entries to the team and they fix them, but that's not really a plan. That's a great description right there: awesomely formulated log messages. All right, so that's a good topic for another session, and let's keep adding to this wish list and see if we can't make it all happen, probably not in the next week, but maybe two weeks from now. You will all hear from me if you fill out that form, and I will send the form to the mailing list as well, to sign up for the August 17th sessions, and then I'll start a thread with a proposed schedule and people can yay or nay their slots. I'll figure out what time we actually have to start; I think it's like 6 a.m. on the West Coast. I love it; we will figure it all out. All right, everyone. Thank you very much. Talk to you all soon. Thank you all. Bye.