Let's get rockin' and rollin' here. Yeah, I'm just gonna share my screen. Thanks, everybody, for signing in, and welcome to — I even wrote 2020, because I just didn't get enough of last year. We're gonna change that to 2021 and properly start off the new year. We have a little bit of an agenda today. Elmiko, are you here with us too? Yeah, I'm here, Diane. Okay, cool. I was hoping Christian would be joining us too, because I think he updated the community operator wishlist, which is one of the things I need in order to start reaching out to the operator communities and see if we can find a date and time to do a hackathon on those operators — which is where we left off. For those of you who didn't join our last meeting on the 22nd of December because you were busy with holiday plans: I did manage to get everything updated, and the videos of all the past meetings are now up. I think I'm missing one from November, which I'll dig up and post soon, but they should all be there. One of the things that came out of the 22nd meeting was reviewing the community operator wishlist and seeing if we couldn't set up a time, maybe, to do something like a hackathon or pair programming — people from our community with an advisor from the operator community — about building a community operator for OKD, for the ones that are missing that normally come from there. And I see Vadim has joined the call now, but not Christian yet. I'm gonna unmute people; nobody needs to be muted. I thought we would kick off with that topic — unless, Vadim, you have an update on the release you'd like to start with. Is there anything? Yeah, not much has happened on the technical side. The only major change is that our CI has been updated and a new registry is used to store the nightlies. So that would be registry.ci.openshift.org instead of the previous registry.svc.
All the releases are still uploaded to Quay, so there haven't been any changes there. We didn't get a chance to merge a lot of code recently because, well, everybody's on PTO. We'll catch up on that in the coming weeks. Okay, good. I think that's all good. Do you have anything, Christian, from the engineering side to update people on? What's that — that sounded like a warning signal. This meeting will self-destruct in five minutes. I know, I always worry about that. We did get some storm warnings here, so it could be that, but I don't think it's on my machine. So Christian looks like he's having a little difficulty getting in. So maybe instead of going through the operator wishlist, we'll wait until Christian gets in here. But I found out that the Quay issue — this mirroring bug, the wrong manifest in Quay — is also affecting some operators. So just to be safe, let everybody know that if you try to mirror some operators to your local registry or a private registry, maybe it won't work, because the wrong manifest is used on newer Quay images. It seems to have happened somewhere in November, I think. Correct. Basically all the new stuff we built from November on is affected the same way. I'll add a notice to the issue, and in the FAQ probably. Yeah, the funny thing — and the question mark — is that this pull request you mentioned in Slack that takes care of the problem was running into the image pull rate limit from Docker Hub. They have limited the pull rate recently, and I think it's a bigger issue for OSS projects. That has been fixed in OpenShift CI. We used to pull images directly from Docker Hub; we don't anymore. So on the next retests it should go away. But yeah, November has been a rough month for CI. Okay. All right.
So, Joseph, while you're talking — one of the things you were doing is you had created your own community operator, and you were thinking about (and I know we had a holiday) writing up a recipe for how you created that Tekton one. Yeah, I created a repository — I put a link in Slack too — with the idea that for all operators we are working on where we cannot rely on the teams to do it for us, we write descriptions in the meantime of how we can build them on our own. I have done that for Tekton already, because Tekton is the simplest one: you have to change nothing for that. The code base is very well prepared to build it on our own. The problem was, in the last meeting we had the idea that I push it to some repository in Quay, some Joseph Meyer repository, and I had a few questions — I also wrote them on Slack; I had a good day where I wrote all that down. A few questions where I was not sure, for example whether I'm allowed to write that Red Hat maintains it. The ownership is not clear and so on; I was not sure if I'm allowed to do that, and that's why I have not opened the pull request before we answer this question. Can you throw the link to that in the chat here so that I can keep a reference to it? What I was hoping to do was use Joseph's exercise and write-up as a template for how to do all of the other community operators too — run through that and any issues that you came up with, so that we can rinse and repeat that as the process for building community operators. Yeah, I think for the time being we could just put "OKD working group" in the maintainers field there. Do not mention Red Hat, since we're kind of building these on our own for the time being. Okay, I'm gonna stop sharing for a minute and see if I can find that note. So if you can throw the link to what you did in the chat.
So — I think you're back here now, Christian. You were having some technical difficulties there for a minute; you went all purple on us and gave us a storm warning signal for some reason. Yeah, my camera seemed to have — I don't know. Yeah, it's 2021. It's already up to something. What do you expect? Thanks, Joseph. Can I get you, Christian, maybe to share your screen and walk through the operator wishlist? We'll drive that, and then we'll do a bit with Michael McCune's must-gather conversation and walk through the rest of the few things we have today. Can you see my screen? Does it work? I will see. Yes, yes, yes. Okay, so I've updated the operator wishlist a little bit. And really, I just found that some of the operators are already available, even in the community catalog, which we install by default on OKD, or also in the upstream catalog. There are a few operators that are currently not built for any of the public catalogs — which are these, and there's more of them, I think. But for the wishlist, we actually have quite a few in the community catalog right now. So for example, the Maistra Istio Service Mesh operator was updated recently, and all of these other operators are available there. For the upstream catalog, I'm not actually sure whether that's installed — I don't think it's installed and enabled by default on OKD, but you can do that, and then you'll also have access to these operators down here, among others. Christian, the problem with Maistra was that the versions did not properly work together between Maistra, Kiali, and Jaeger. We had to patch lots of things to get it working. And I think the Maistra operator actually has a dependency on the Elasticsearch operator. Yes, and then Kiali and Jaeger. Yeah, and obviously we'll need to make sure these versions match up then. Yeah, that would be...
So the plan is still — we have some kind of strategic work going on, but then also I'll just open issues on all these repositories here, where we can request the teams to add those builds to OperatorHub. Yeah, and so maybe one of the — if we can add a note to Maistra, Kiali, and Jaeger in the list that they're dependent on Elasticsearch. So, has everything that's in the section that's available in the community catalog been tested? Has anyone used any of these things on OKD, or know of anyone who has? Because I think one of the OADP — which section? The last one on the list was the one I was talking to someone about yesterday: OADP, the Konveyor one. I think it works the other way around. So folks are publishing Kubernetes operators, but we don't see them by default in OKD unless they tested on OKD and they know it explicitly works. So everything ending up in community should work. Except for the top three: Maistra, Kiali, and Jaeger won't work unless we have Elasticsearch. Kiali and Jaeger will work independently of each other, then, with this Maistra operator? I don't — yeah, they've built it and released it, but they have a dependency on the Elasticsearch operator, and I think obviously they'll take the downstream one from Red Hat, which we don't currently build and release to OperatorHub. So yeah, the Maistra one probably needs some work there, but then all the others should work, especially those that are in the community catalog. They can be installed independently of each other, except for the Maistra one. And yeah, you should be able to test them. Those in the upstream catalog might work on OKD, and especially those I've listed here — I think there's nothing that speaks against installing them on OKD. I would expect them to work. And if we could get some people to actually just test those, that'd be great.
And then we could easily kind of copy those over from the upstream catalog into the community catalog as well, without having to rely too much on the maintainers to do the work here. I think there is no — oh, sorry, go ahead. I tried out the Ceph operator, but not from a catalog. And I don't know if there is a ClusterServiceVersion — what's the CRD called, I think — but... The ClusterServiceVersion of the operator, yeah. I think I did not find any CSV for the Rook-Ceph operator. Maybe we have to write our own, but this should be no big problem, in my opinion. It's already in the upstream catalog in the community operators repository — that should be there. It's just not in the community catalog, so it doesn't show up in OKD. But if you enable the upstream catalog manually by adding that subscription, then that'll show up as well. And you can see the ClusterServiceVersion in the community operators repository in the — yeah, sorry, I have a delay, sorry. And the idea is that we try it out from the upstream catalog, and if it works, we can probably then create a pull request for the community catalog? Then we can just create a pull request to copy that over. We should be able to use the same contents there, I think, and it shouldn't be a problem to have those in the same catalog. But yeah, we can create a PR and then have that reviewed. Okay. So I'm going to ask a question about process, too. Once we've done that ourselves — say we take the Rook-Ceph operator from the upstream catalog and make a pull request, like maybe Joseph does it, to put it in the community catalog — isn't the end goal to get the Rook-Ceph community, in their CI/CD release process, to make that pull request to us? So we still need — absolutely, yes — to communicate with them and tell them what we're doing. So if it does work, rather than us testing it, how do we communicate — how do we want to communicate to — I know the Rook-Ceph people, because I've had them talk a bazillion times.
So I can reach out to them if someone writes that pull request in their repo. We put it in our community repo to start this. But what I'm going to do is — yeah, so definitely we should get the operator maintainers to do that eventually, for all operators. Right now, I think the biggest hurdle is that there isn't really an automated process to release to OperatorHub. There are a few scripts around to generate that data, but since that comes from a different repository than the repository where the individual operator code lives, updating OperatorHub each time involves some manual steps, mostly. And once we get more automation for that and can integrate it better with these various CI systems we have — just promote some builds whenever the maintainers decide to promote them to OperatorHub — that would be great. But right now it's kind of a manual process for most maintainers, which is why they don't do it too often. Hey, Christian — John Fortin — are there instructions on enabling the upstream catalog? I just looked real quick, and I see the Red Hat catalogs and the community one; I don't see a way of enabling the upstream catalog, and I'd love to test the Rook-Ceph operator. So right now I'm doing it by hand. Enable it inside of OKD? Yeah. I mean, you said it in the — there should be some documentation in the Operator SDK documentation on how to enable secondary catalog sources. But where is it? I mean, you say "the upstream," but I don't know where that is. Where is the upstream catalog that OKD — is it — what are they pulling from? You have a link there? Throw that link in chat once you're done. I'll throw it in. And maybe add it into your gist there. I'm not actually sure; I'll find it. Okay. Yeah, because without knowing how to enable it, I can't test it.
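For reference, enabling a secondary catalog source usually comes down to applying a CatalogSource object pointing at the upstream index image. This is a sketch, not something confirmed in the meeting — the index image name (`quay.io/operatorhubio/catalog`) and the `openshift-marketplace` namespace are assumptions to verify against the operator-framework documentation:

```yaml
# Hypothetical CatalogSource for the upstream OperatorHub.io catalog on OKD.
# Verify the current index image and tag before applying (oc apply -f <file>).
apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
  name: operatorhubio-catalog
  namespace: openshift-marketplace
spec:
  sourceType: grpc
  image: quay.io/operatorhubio/catalog:latest
  displayName: OperatorHub.io Operators
  publisher: OperatorHub.io
```

Once applied, the operators from that index should appear as an additional source in the OperatorHub page of the console.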
Yeah, I think there's some — before we could do a hackathon or anything — that's kind of why I was trying to tease out of the Tekton work that Joseph was doing a template for how to do this. Yeah, so these are the sources in the community-operators repository, in the upstream-community-operators directory. And I'm not sure what the catalog sources would exactly look like; we can't really add them to our machine as content. But maybe you can find a link where we do that. Yeah. Basically, I don't know — it's a great documentation bug. In short, the Operator Framework folks are pushing three different container images with an index of the operators, and we know two; we just need to find out which image is being pushed for the upstream catalog. Let's file a bug and track it. I could find some experts in the field. Yeah. So Joseph, in the chat, is suggesting that we should create some notes in the OKD cookbook on how to build OpenShift operators for OKD. And I think that's really what I'm getting at: once you guys create that documentation, then we roll it out to the wishlist that Christian is curating there. On that wishlist — for other people, are there things missing from it? Is anyone — I know, Joseph, you've been watching this closely. I will actually open an issue on our OKD repository — actually, the community repository — for this, so we can kind of have people throw in their own suggestions. Yeah. Diane, you were asking before if anybody had run those operators on OKD. I've run previous versions of the Open Data Hub operator on OKD. And I have a feeling that would be a very popular day-two operator, because everyone's gonna want the ML stuff. It's also like a meta-operator: it installs other operators of its own. I have a feeling it's gonna be really difficult to test. The experience that I've seen from people who are running the current versions is that some parts of it work really well.
Other parts of it are extremely difficult to operate. So that may be one that will probably require a lot of testing. Fortunately, the team that's working on it is really good about, you know, responding to requests. Yeah. Is this the TensorFlow thing? TensorFlow and Jupyter? Kubeflow — Open Data Hub. So Open Data Hub is a project that now includes Kubeflow, and it also has support for Apache Spark; it uses Strimzi for Kafka, and it has a bunch of machine learning and, you know, data-analysis-specific packages inside of it. But it does a lot of this through importing other community operators to do that work. So things like installing JupyterHub — I think they have a JupyterHub operator, you know, et cetera. There's a Spark operator that's involved, those kinds of things. Mm-hmm. Okay. I think we can also participate in testing that, because we already use a few of the operators you mentioned. Yeah, yeah. So there's a lot of really popular machine learning and — dare I use the word — artificial intelligence type stuff in that operator. So, you know, I see a lot of people wanting to do that as a day-two operation: to open up, you know, let their users start working on AI-type stuff. But it's extremely complicated depending on what you're working on. Just a warning, I guess. Yeah. It would be lovely — and I have all kinds of ulterior motives — if we could get a demo of that by the 28th of January, which is when I'm hosting the OpenShift Commons gathering on data science. Hey, you should reach out to Sherard. I'm sure there are people who would love to do that demo. I know for a fact they're probably eager to get that stuff out there. Yeah — is that Subin? Mo Down? No, it probably won't be Subin; Subin's working on the Thoth stuff. But, you know, it would be someone like Avashak or someone like that who's on the ODH stuff now.
But I would reach out to Sherard Griffin — I think he's still leading that group. So I'm sure they would love to get involved. Yeah, no, I think you're right about that. So what I had asked quickly is if there's anything glaring that's missing from here — John Fortin, Joseph, those of you who are out there in End User Land really using this — is there anything we're missing? Yeah, we are absolutely interested in the first section, and I'd love to help get some running and test them. The bare metal — is there a bare metal operator? There was — maybe I mixed it up, but there was a bare metal installation something that was missing, but it's not an operator, I think. It's something. Yeah, the bare metal operator is part of the payload, but we're currently not building it because we don't have all the parts to do that with Fedora slash CentOS in this case. Because we're doing that for OCP, for the product, obviously with RHEL, and we need, I think, the shim binary there, or at least some image parts for PXE booting. I think it was an OCP thing; it was only something that had to be done, if I remember correctly. That's work in progress; we have open PRs for this. So bare metal is currently not supported as a platform in OKD, but once it is, it'll just be in the payload. There's no extra operator beneath it. Okay, another question: how do we take care of the image location currently? The images are stored under quay.io/openshift or quay.io-something — some OpenShift-specific namespaces. If we build these operators on our own and manage them, should we create an OKD working group namespace in Quay, or how do we handle that? I have no good feeling about using my own namespace, because if I overwrite something during my tests, then lots of people will shout at me — for a good reason. What do you think about that? I would prefer to use the openshift namespace. That would make things a lot easier.
First of all, if this whole thing goes live, we will be using CI, meaning we would push images automatically to the openshift namespace. So there shouldn't be any difference between them. But a manual on how to do this yourself, using your own custom namespaces, is certainly very useful. In the future, though, I would prefer to use openshift for everything — why not? Yeah, I think we all would prefer that. But the idea was to, yeah, take the stress off the teams, because we would have to wait for the teams until we get it into OKD, into the community catalog, you know? And if we find a good way to have some intermediate process to get it into the catalogs very soon — because, yeah, in the case of the pipelines operator, you just call three or four commands and you have it in your catalog if you want. Oh yeah, something like staging would probably be useful. I guess that would be a good start, but we would need some architects' and probably OperatorHub folks' opinion. So, your proposal, if you could? I'd say just publish them to your own namespace for now, open the PR, and once we have that reviewed and everything, we can still copy the images over to some other namespace — be it OKD working group or whatever — on Quay before we actually merge any PRs. So I wouldn't block on that now. We can still create a kind of working group namespace later on. Okay, yeah, I will do that. And I will change the maintainer to OKD working group, and that should be all — or is there something I should take care of to not get stressed by legal, over an icon or over the operator name? It is called the OpenShift Pipelines operator — can we name it this way in the future, or do we have to—? No, it'll get changed to the Tekton Pipelines operator. It'll turn into something generic or upstreamy. Like, the KubeVirt operator is not called "KubeVirt operator for Red Hat OpenShift Container Platform"; it's called something like "Red Hat OpenShift Virtualization" or something dumb like that.
So that'll happen with the Tekton Pipelines operator — it'll probably just be called Kubernetes Pipelines or Tekton CD, or just something that's useful from a community perspective. Though I think the Tekton operator actually exists upstream, and that is kind of the one not adapted by Red Hat. So yeah, I'm not sure; it might just need a brief fix. Yeah — a process question. What I would like to do is use Joseph's efforts and get everything figured out in terms of naming and what the process is for building that. And I'm wondering — I know we only meet bi-weekly — if we could use this time slot next week, and I'm not sure what your schedule is like, to coach and review what Joseph's done, clean it up, and get it into the OKD cookbook's how-to-build-an-operator notes — maybe next Tuesday, Joseph. And I'm not sure who's the best person to help, whether it's Christian or Vadim or someone else here, to get one done the way that we as a working group want to do this. Yeah, that would be great. So Christian or Vadim, are you available next Tuesday? I am not, because I'm in an all-day meeting next week. But I'd actually prefer to do that asynchronously. If there's a pull request, it's easy to review it whenever it comes up. So rather than schedule a time to get it done, just do it. We could remove all OpenShift references and replace them with "OKD community," or simply just "OKD" instead of "OpenShift." Would that be okay? Or place an "OKD" before the name of the operator for all OKD operators? If I'm understanding what Christian's getting at here, I think what it sounds like, Joseph, is you should probably just make some good decisions and put it up for review, and then we can argue about what needs changing on the review. Yeah, we can bikeshed on the pull request. Right, exactly — that's where the argument about those things should happen.
I think, yeah, just do what you think is best for now, put up the PR, and then, if we have problems with it, we'll just figure it out there. Yeah, the super easy thing to do would be to just, as said, find and replace all the words "OpenShift" with "OKD" and be done with it. And we can figure out on the pull request whether that sounds wrong or not. Okay, great. All right, so we don't need to meet next week, which is fine — I just wanted to make that space for us if we needed it. So I'm just updating this here. Because my goal, as everybody knows — I always have an ulterior motive — is to get one done all the way through and then hack our way through the rest of the list. And with the Tekton one, it doesn't sound like we need someone from the Tekton community to coach you guys on anything technical, just to get it incorporated as a pull request for them to do in the future. I don't think so, because I am using it for building stuff already, and I did not find any issues that I can remember offhand. I would suggest that if we have to change something on the operators — like if I have to change "OpenShift" to something else — yeah, I will fork the repo to the OKD cookbook organization too, and we should write the changes down. This is my suggestion: in this repo I mentioned before, just to have it written down in a structured way, in one repository, how to build OpenShift operators. And it's only an intermediate step — once we have the CIs running, we can delete it. But in the meantime, I think it's valuable to save these findings somewhere in one repository. I would do it like this. And we should mention this repository, this organization, somewhere. I did not find any mention of the OKD cookbook on okd.io. I think currently it's — I don't know. It's just a landing page at the moment that has links out to some of the repos that Charo has done. So we'll need to clean that up a little bit too.
Let's get through that, hopefully by next Tuesday — that would be my goal. And then when I come out of the three days of all-day meetings, I'll take a look at what you guys have done and see what we need to do in terms of OKD and the cookbook. The question in the back of my mind is: once we have it all done once, and we have the Tekton one in our repo working well, I think I heard someone say — and I think it was Christian — that we need to make a pull request on that operator's repo, the Tekton operator's repo, for them to be informed that this is something we would like them to do on a regular cadence, or to inform us when they have a new release. Yeah, maybe not just a pull request, but we'll have to introduce some kind of mechanism to make it easy for any team to release the current operator code they have. Meaning they don't have to do builds manually; they can just take the latest CI builds, mirror them out, tag a release on them, and then — if possible — automatically create a PR to OperatorHub, to the community-operators repository, for that new version, for the new CSV to be merged into the catalog and show up in the cluster. I found out that most of the time, it's only variable substitution that is needed, because things are hard-coded and it's very easy to replace them with variables. Well, everyone will get a little better at building operators out of this. All right, so that's really it for 2021 — for January. What I'd really like to do in January and February is push through that list, and then, if we need to, set up maybe a synchronous pair-programming session with people from the community — members of the working group — with anyone from maybe Elasticsearch or whatever, to inform them on what the new process or request is. And if there are things like variables that need to get changed, start working with them and identify them.
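The find-and-replace / variable-substitution step described above could be sketched like this. The file name, display name, and registry paths are hypothetical illustrations — the real Tekton CSV will differ:

```shell
# Sketch: rewrite hard-coded OpenShift branding and image refs in a CSV
# manifest to OKD equivalents before publishing to a community catalog.
# Create a tiny example CSV so the sketch is self-contained.
cat > csv.yaml <<'EOF'
displayName: OpenShift Pipelines
image: registry.redhat.io/openshift-pipelines/operator:1.0
EOF

# Substitute branding and the image registry path; real work would likely
# template these as variables instead of repeating sed lines per fork.
sed -e 's/OpenShift Pipelines/OKD Pipelines/g' \
    -e 's|registry.redhat.io/openshift-pipelines|quay.io/okd-wg/pipelines|g' \
    csv.yaml > csv.okd.yaml

cat csv.okd.yaml
```

Templating the registry and product name as variables once, rather than sed-ing every release, is the longer-term fix the CI automation would provide.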
So now that I have the list, I can — maybe with Christian and Vadim's help — start looking for who the point person is on each of these things. And I'll reach out to Sherard, because I definitely have a wish — maybe it's a death wish, but a wish — to have a demo of the Open Data Hub one. I do have Audrey and Sophie speaking at the January 28th thing, but I don't think I have anyone doing — I'd love to have a short video on using the Open Data Hub operator on OKD, sort of like we did with the marathon day of OKD deployments: showing them in action on OKD, and any tweaks that we had to do. Just to give them a little more publicity in return for their help. So the next thing I have on my list here is the must-gather and OKD topic you had brought up on the mailing list. You wanna walk through that? Yeah. So, before the holiday, Bruce and I — and I think a couple of others — were talking in Slack about, you know, must-gather, and kind of the safety of sharing those things, and how useful they are and stuff. And just for a little background, for anyone who doesn't know (although I'm guessing most of the people here do): must-gather is an administrator tool that we have in OpenShift that'll bundle up a bunch of artifacts from inside the cluster and turn them into a tarball. And then the way we use this in the enterprise product is that, you know, we have customers who share those must-gathers with us, and we use them to help debug the cluster. This is an extremely useful — and I think some would probably say essential — tool for us in solving some problems, especially with running clusters. And so the question Bruce was raising was: is it safe to create these and share them publicly? Where should we do it? Because they can grow in size up to 400 or 500 megs, and, you know, the safety issue was kind of the heart of it.
And I think, you know, Vadim and Christian probably know way more about this than me, but my gut feeling is: it's not safe, but it should be safe. The problem is, there are container logs from your control plane that are included; there are dumps of all sorts of configuration and objects that exist in the control plane. So in the best-world situation, it should be safe to do that. But there are corner cases, and I've seen bugs come up where some container logs — especially sensitive container logs in the control plane — are maybe exfiltrating secrets through their logs and whatnot, if they have the wrong verbosity level set, those kinds of things. So, to bundle this all up: the reason I sent the email, and the reason I looked into this a little more after Bruce and I had talked about it, was that I think, for the OKD community, it's going to be really important for us to figure out a way that we could consume must-gathers from the community, because it will really help us to solve problems that people are seeing. It may not be the best tool for every problem we have, but if we could establish some way to make this safe and make it a trusted way for people to do this with the OKD community, I think it would be tremendous — just to help us out. So anyways, yeah, I wanted to open it up. I think that's a great idea. I think one thing we're going to want to do first is look at examples and make a list of things that are possible issues with what is gathered, because right now — Vadim seems to know a fair amount about it, but it seems like there are some edge circumstances, and we need some examples to actually know what those are and under what circumstances they come about.
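As a starting point for that research, a quick, non-exhaustive local check before sharing a must-gather could look something like this. The directory layout, the sample log line, and the grep patterns are all illustrative assumptions — a real review needs human eyes, and patterns like these produce both false positives and false negatives:

```shell
# Sketch: scan an unpacked must-gather directory for strings that often
# indicate leaked credentials. Build a fake dump so the sketch is runnable.
mkdir -p must-gather/namespaces/openshift-authentication
echo 'level=warning msg="bindPassword: hunter2"' \
  > must-gather/namespaces/openshift-authentication/oauth.log

# Case-insensitive recursive scan; prints file:line for each hit.
grep -rniE 'password|bearer|token|secretkey' must-gather/ || echo "no obvious matches"
```

Anything flagged here would be exactly the kind of example worth cataloguing for the "what can leak, and when" list.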
So I think research would be phase one of this, right? Find out under what circumstances something — you know, determine something that is sensitive and figure out under what circumstances it appeared. It would be great if only the OpenShift namespaces were collected — I think that's enough — and no Secrets, no ConfigMaps, and you are fine, I think, at a minimum. That's correct; that's what it does. The information from other namespaces is just irrelevant. If I remember correctly, we're stripping passwords from IDP configuration and things like that, but it needs to be verified — I don't have a nice setup where it would be visible. I believe random Secrets are also not stored; we're just tracking particular Secrets we're interested in, so it should be safe. The problem is, it depends on what you consider sensitive. Like, we still need the URL of the IDP provider, for instance, which could be considered sensitive — but not exactly. You do grab all the namespaces, though — like, if the name of a namespace is sensitive, there's not much content in it, but all the namespaces are there. For instance, I was using it with students, where their name goes in the namespace, and then that creates a privacy issue in Canada. Oh — so is it safe to rip that out from the actual must-gather you're sending us? I was under the impression that we're only getting the OpenShift ones, but I don't know, because I'm looking at one. We are usually debugging control plane issues, and user namespaces are irrelevant. Well, I guess it pulls in all of the operators that are installed cluster-wide, in every namespace. Yeah, it pulls in the cluster-wide resources. So if there's a cluster-wide resource that's exposing that information — and this, I think, what you're bringing up, Bruce, is something that probably wasn't even thought about: the idea that you might have personal information encoded in your namespace, i.e.
like your name or something, that might be an opportunity for us to do some work on the must-gather side; maybe things like namespaces and other types of user-input names should just be changed to hashes across the board, so there's just no PII in there at all. Because really the main thing for me as a debugger, if I'm looking at one of these must-gathers, I might try to match up what container was running in what area or whatever. So I might see one of those values, and this is exactly the thing that Jamie was talking about before. These are exactly the type of bugs we see, where someone has inadvertently exposed information in what you would think would be a benign resource object, but now it's in the must-gather and it's been uploaded forever, right? And likewise with the logs coming out of, even if they're just the logs for containers on the control plane, we just had a bug recently where one of the verbosities was set wrong, so it was dumping everything from that container. And I think Vadim identified it; I see him nodding his head. And that was the kind of thing where it was exposing secrets in the logs inside the must-gather. So I kind of got away from what you were talking about, but I think the things that you're talking about maybe give us some ideas on how we could make must-gather a little better in terms of scrubbing out some of this information as it does that. You know what, you could do a document on, oh, sorry, go ahead, Bruce. Yeah, I was just gonna say that one way to keep the information from going across the internet would be to encrypt it; you know, if you have a key pair, then you can encrypt it with the public half, and then it's only visible inside of your organization as opposed to the entire internet, which is not perfect, but it's a lot better.
Yeah, especially for the OKD community; I mean, obviously any encryption that we put on it, even if we had keys that were shared with people who were on the debugger list or whatever, I don't know that that necessarily creates the trust, because those keys would be pretty weak. Someone could share them or whatever, but I like the notion that there's a way we could at least store them at rest in a way that's encrypted. Is there an authoritative document on what must-gather gathers, so that we can look at it and actually verify what is collected? I looked around inside and tried to find something. I did find a document. Unfortunately, I can't share it because it has Red Hat corporate information in it. I think what I was thinking as I was reading through this stuff and looking at the must-gather repo is that the next really big step would be to propose an FAQ or something to that repo, if it's not there already; just a document in that must-gather repo that says, this is exactly what is collected. That's where it should live for everybody, right? I think that's it. I'm looking at that repo again just to double-check I didn't miss something. But yeah, I mean, it's pretty barren. I guess there is an enhancement they link to. I didn't read that; that might actually give all the information. Okay, I could do a little more research here. But yeah, I think something there, just a list of, like, this is exactly what it does, would be cool. So anyways, I guess the second question that I had for this group, and obviously it's not something we're going to solve now, but something we should think about, is kind of like, how would we have a workflow where an OKD user could create a must-gather and then have a place to upload it so that they could link it against the GitHub issue they opened up or something like that? Because I don't think GitHub's going to allow you to push a 500 meg file there.
But if we as a community had some way to kind of make this easier for people, it will definitely help us out later when we go to debug these things and whatnot. Yeah, just... I don't think we can have an official place where folks would upload must-gathers; GDPR and that California thing specifically prohibit that. And the Canadians. Yeah, everyone's going to have data protection and data sovereignty things. This is just not a thing you can do. It's a great idea. Yeah. Yeah, it's an interesting... I think you've given us something to think about, how we could possibly do that. But I also... I mean, we have the OKD website. We have... Right, but... And that's, you know... I guess given my background in kind of dealing with PII in machine learning and how people are using encryption to solve those problems, it does make me start to wonder: is there some way that we could change the must-gather tool to more aggressively scrub out information and replace it with just hashes that can be linked? Like, is there a way to take that must-gather tool into the next generation of what data governance really means? Because for those of us fixing the problems, we don't care about the names, and all the sensitive information there does not affect me one way or the other. It could all be changed to just hash values that I match up, and I can still achieve the same thing. So, you know, I think there's probably some middle ground we could achieve where those must-gathers could become totally neutral, you know, where it's not going to mean anything to anyone other than the person who created it and the people who are trying to debug from it. Yeah, just to give you a good example of this: using it in a university environment, people are going to name their projects often with their unique name, their username at the university, you know, and there may be something else in the title of the namespace.
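The "hashes that can be linked" idea boils down to deterministic, keyed pseudonymization: the same namespace always maps to the same opaque token, so a debugger can still correlate objects across the dump without ever seeing the real name. A minimal sketch, assuming a hypothetical per-cluster random key; nothing like this exists in must-gather today:

```python
import hashlib
import hmac

def pseudonymize(name: str, key: bytes) -> str:
    """Replace an identifier with a stable, keyed hash token.

    The same input always maps to the same output, so a debugger can
    still match up a namespace across every file in the dump, but the
    original name (possible PII) never appears. The key stays with the
    cluster owner, so common names can't be brute-forced by outsiders.
    """
    digest = hmac.new(key, name.encode(), hashlib.sha256).hexdigest()
    return "ns-" + digest[:12]
```

With a keyed HMAC rather than a plain hash, `pseudonymize("alice-cs101", key)` always gives the same token on one cluster, but an attacker without the key cannot precompute hashes of likely student usernames to reverse the mapping.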
There may be other things that relate to individuals. And in the United States in the university settings, that's problematic, because you're not supposed to even identify what classes people are taking outside of the university and whatnot. There's a lot of issues about it. Maybe one thing we could do, Mike, is find out who the point person in that repo for must-gather is. Like, they must have thought about this a lot when they were writing it up. So maybe what we could do is have you reach out to them and see what their game plan was. It's probably all behind the Red Hat firewall. Right, right. Because we're doing it through Bugzilla and we're bringing these things internal. You know, that's a great idea, though, Diane. I'll figure out who that is and do some reaching out, and just see if I can do a little more digging to see what they were thinking. And maybe they've got some ideas around this that they're already kind of working on. Yeah, who knows? A new tech for encrypting all of it and storing it somewhere safely, blah, blah, blah. Or it's an area for innovation. So, I wanted to... I just want one more chance to bring out the term homomorphic encryption. That's just my main goal here. All right, all right, use the big words. Yes, thank you. That's all good. Thank you. That makes me feel a lot better. Now I don't have to use it again until 2022, so... All right. Come on. All right, so I just wanted... I know there's been a lot of chat while you were talking about must-gather, because as soon as anyone says the word ARM or Raspberry Pi in any tech meeting, everything blows up in chat. So I wanted to ask Jeremy Linton to introduce himself, because he's new to the meeting and he's from the ARM world, and maybe talk a little bit about what's been going on in the chat there around needing some place to test ARM and that. So Jeremy, can I get you to turn your microphone on if it's not... Yes, I just did. I guess a quick intro.
I guess I'm Jeremy Linton. I'm actually one of the Red Hat partner engineers. I've been doing kernel... years now. We don't see you. Yeah, I guess you can have camera too, but I've... There we go. Yeah, so... Yeah, I guess I went to a Flock a couple of years ago. I'm sort of somewhat active on the side as well: upstream kernel, QEMU, that kind of stuff. And I guess the chat I was talking about, the Pi Firmware Task Force, which has a page. There's a UEFI firmware for the Raspberry Pi that allows it to boot a wide range of operating systems: Windows, you know, CentOS to a certain... There aren't drivers for some of the hardware yet in CentOS because it's not a supported RHEL target. But yeah, it works fairly well, and in Fedora it's getting better every day. Like right now I have some PCI patches posted that are hopefully gonna merge. It'll give us native PCI support instead of having to use the USB XHCI as a platform device. There's a long list of things. Anyway, so I guess personally, with the upcoming OpenShift release for Red Hat, I'm sort of getting in front of a bit of that and seeing if I can do a little bit of contribution or testing with OKD on maybe that platform, Graviton, whatever seems to be desirable at the moment. I have a pretty wide range of SBSA platforms I can test it on. And a couple of years ago, I actually spun up OpenShift or, I guess, OKD from source and sort of got it working, but haven't really touched it since. Well, I think if, and I was just talking yesterday with the folks who do developers.redhat.com, and they've been asking me for very specific things, developer-focused topics. So you're my lead-in to this conversation, and I know ARM and Raspberry Pi are really hot topics for folks. So even if you just wrote up a blog about your adventures in deploying OKD on ARM, I would funnel it into them. That would be a great spot to start.
The other thing, and Charo's not here, I'm gonna try and get Charo to do one on using CodeReady Containers. I think we have videos of it, but we don't have any written-up how-tos or anything like that for developers.redhat.com, specifically with OKD. There was a question, Joseph, you put in about the toolchain for CodeReady Containers. So I'll see if I can track down Charo to do that. And then Craig Robinson, who I don't think is on the call today, did a nice write-up on deploying OKD, I think it was 4.4, on his home lab. So the more content we can get that is specifically things that developers are just gonna eat up like candy, which is what I think of Raspberry Pi as: candy. I love them. There's too many of them in my house right now. But I think that would be great. If you have the platforms to test on, we definitely would help you out doing that. And a recipe; what we're trying to do is create sort of a cookbook of recipes on how to do that. And I'm trying to drive adoption of OKD, because for me, and this is just my philosophy, OKD allows us to do a great deal of collaboration with the Fedora CoreOS community. And like we saw with 4.6, when things happen in Fedora CoreOS that impact us, we're basically the early warning system for a lot of things in Fedora CoreOS, which will eventually end up in RHEL CoreOS. So it's really great for us to be able to do that testing on ARM and any other places as well. So thumbs up, thanks for joining us. That's great. And if anybody wants to help him with that, I can see chatter going on. Yeah, a short recap of what we talked about in the chat. So from OKD's perspective, we don't run on ARM yet because we don't build arm64 images. That can be fixed relatively simply, but the problem is cadence. We probably would only be able to build just released images, because nightlies would take time and precious resources. And OKD can look for that.
Another problem is we are targeting 16-gigabyte masters, and I don't think Raspberry Pis can allow that, but that is being addressed on the OpenShift side, because they are targeting three masters on eight gigabytes. So we will cater for that once this lands in 4.7, for instance. And another problem was, yeah, I'm not sure how well Fedora CoreOS itself supports Raspberry Pis, for instance, and so on, but we probably can't. The answer to that is not very... Yeah. Okay. So I don't think there's a lot of work in that. There are already ARM builds from the community, from Typhoon, I think it is. Yeah. This Kubernetes distribution that also runs on Fedora CoreOS, and we're actively working on actually building official ARM images as well. Right. And Fedora CoreOS already does produce arm64 images, aarch64 images, whatever you want to call them now. And the main issue right now is that the Raspberry Pi 4 series, particularly the eight gigabyte model, which is the only reasonable model to actually use for OKD, has a sufficiently different hardware base and device tree and components and drivers and stuff that none of it has been upstreamed into the Linux kernel. So Fedora has basically no functional support for the eight gigabyte model. The four gigabyte model mostly works; the eight gigabyte model does not work. And the Raspberry Pi Foundation doesn't care about upstreaming stuff. So that turns into a whole other set of messes about getting the Raspberry Pi 4 to work correctly, and the four series in general. Right. And this is why I was pointing out the Pi Firmware Task Force, because it's turning it into an ACPI platform. There's no device tree. I mean, you can run a device tree with that thing, but it's intended to be an ACPI platform. Yeah. But that doesn't fix the issue that legitimately the hardware that's on there is different and we have no drivers for it.
Like for example, the CPU, the SoC, is actually different, which means it has a different memory controller. It has a different GPU chip on board. It has different memory partitioning, all those sorts of things. And while some of it seems to be kind of okay with the Raspberry Pi 4 UEFI firmware; like for example, there was no way for anything but Raspberry Pi OS to use the full eight gigs of RAM with the UEFI firmware. And it's not exactly straightforward to get everything to use the full eight gigs of RAM on 64-bit; it only works on 32-bit. Don't ask me. I have no idea why that restriction exists that way. It makes no sense. So folks, I just want to recognize that it is the top of the hour and we do end normally. And I'm sure everybody's got a stacked inbox or whatever to get to. So I'm happy to let it go. Jeremy, if you want to drive a little conversation about this, we can also do like we did with the vSphere triage last month: use a Tuesday, not next Tuesday because I have a meeting, but any of the non-meeting weeks here, to just have a session on testing. And once you've done a little experimenting, come back and host a triage. Because it's obviously something people are interested in seeing if we can get working. So that would make me happy. And I know Amy Merrick is on the call and she's being quiet. So happy 2021. And I was going to ask her to give me a hand reviewing the documentation and the contribution and contributor ladder stuff that's on the OKD site, so that we're a little bit better adhering to best practices and stuff like that. I think we can use that. And I know, Amy, that's something you've done very nicely for a number of other communities. So I'll try and hit you up in Slack or somewhere. Just hit me up. And we'll get that going, maybe do an audit, because from my perspective, we've got stuff on OKD.io. We've got a website that's separate from the GitHub. We've got the GitHub documentation.
So just really, kind of one of my goals for 2021 is to get our documentation up to snuff and our cookbook working effectively. And it may mean quite a few big things. And I was hoping I could get Amy and Josh Berkus, who's also from the OSPO group, to take a look at that as an initiative for 2021. So that's what I had for today. I didn't mean to cut you off there, Neil, but people will start dropping off. And I will keep this on the menu for two weeks from now, Jeremy, if you wanna come back with whatever thoughts you have. And then if we need to set up the alternate week to do stuff on the Raspberry Pi ARM stuff, I'm sure there will be people who will join you in this endeavor, including myself. So that would be great. And if there's anyone else on the call with agenda items; we had folks from the GitHub Actions team say that they were gonna come, and from one other group, the Power folks, to talk about something. But I don't see any of them, I didn't see any of them in the attendees. I'm here. Michael Turk. I'll take that, maybe. Yeah. Hey. Hey, Mike. So, I know you keep coming and I didn't see you in the chat. Oh, it's fine, I'm sorry. So next time we'll also give you some airtime if you wanna talk about the ppc64le stuff, because that might help spur that as well. Yeah, and I mean, we can frame it also as an alt-arch in general kind of thing. So maybe some of this ARM stuff can also be helpful there as well. Yeah. So I think that would be great. And maybe that's just an alt-arch sub-working-group set of conversations too. Because for me, 2021 is OKD everywhere. So let's see if we can make that happen and get the OKD operators going and just drive adoption. I wanted to give a shout out quickly to Jamie: thank you for the FAQ updates. And a reminder for folks, I'll send the date, time, and session details; we have a slot at DevConf in February to do a Birds of a Feather.
So I will send that time out and then we'll figure out how to run that, that group meeting too once I have that time. But that's what I have for today. And I just wanted to wish everybody happy and healthy 2021. And thank you for your participation. It really makes this whole thing wonderful. And I'm just really thrilled to have all of you here. So thanks and have a great week and we'll see you in two weeks time. See y'all. Thank you Diane. Yeah, thanks to you too Diane. Bye. Bye bye. Bye.