Welcome everybody. I thought this year we would try and do a little barcamp type of thing, so we kind of self-organize what topics we want to talk about and get some feedback from the community and some more information and ideas and so forth. If you look at the pinned HackMD document there, it has information on how we want to do this. So for the next 15 minutes we'll probably do voting and organization stuff. There are right now five topics in there, and we have room for six 15-minute topics, so we actually have another slot available for something else without even voting. I'd love to see some more topics. I did also put some suggested topics in the Flock tickets, which I can dig up. I mean, I can think of tons of them; we have no lack of things to talk about, that's for sure. Yep. Thank you, Maxwell. That's it. If anybody has a topic that they'd like to go over, please feel free to add it to the HackMD, or if you have permission issues or whatever, just let us know and we'll get it in there. Excellent. Also, I was kind of thinking: if we're discussing something and you have just a quick thing to add, you can go ahead and add it in the chat. But if you have something longer to say, or you want to be more involved in the discussion, we can certainly add more people — the more the merrier, as long as we make sure we give everybody equal time to chime in. Yeah, it would be good to see the faces of people if they do want to say some stuff. You don't get that in virtual conferences as much, so it is good if we can. Exactly. Yep. So yeah, if you want to see a particular talk, you can put an X vote next to it. But like I said, right now we have more talks than slots. Hello, everyone. Welcome. I voted for the future one. That should be a fun one. Yeah.
I've got some ideas, but I'm very curious to see what everybody else thinks. Yeah, and feel free to actually put anything into the questions section if you have any questions for Fedora Infrastructure. Should we share the HackMD document here too? Or I can share it. Okay. Sorry, I think you had a question. No, I just wanted to advocate for the topic more, because I often feel like when we talk infrastructure we're focusing on problems and current workflows and how to support people better. But I wish we would also think from the position that we're not just serving the infrastructure — we also develop in it, and we should have our own ideas about where we want to go, not just respond to external requests. That's why I'm interested in hearing more about that. Yeah, in general we're very reactive instead of proactive, and I think being more proactive would really help a lot of things, and probably reduce the need to be so reactive. So we've got just a few more minutes of voting and adding. For now it seems that the onboarding one is winning. Let me see... yeah, it has the most X's. One other thing I was going to try and do: if we could get somebody to be a secretary for each little section and write notes in the HackMD as we're going, that would be very helpful. I can do that. Well, I was hoping to get a different person for each section, so that whoever is leading doesn't have to do the notes. We can decide for each of the sections. Or if anybody wants to add themselves in the HackMD, feel free to do it. I just added a general Q&A one there, but hopefully someone will add a more interesting topic. It still seems that the winning one is the onboarding improvement ideas. Is it possible to open up the HackMD so people without a HackMD account can edit it? It should be. I have only a HackMD account and nothing else.
I mean, HackMD isn't restricted to one sign-on. It allows Google sign-on, it allows other sign-in options, so you don't have to specifically register for HackMD. Okay, just to make sure we let everyone contribute. So, we'll do introductions in this time as well. Do we want to introduce ourselves for anyone who doesn't know us? That makes sense. I'll go first. So my name is Mark O'Brien, based in Ireland. I started on Fedora about two and a half years ago, I'd say. I'm a Red Hat employee as well, so I work full time on Fedora infrastructure and CentOS infrastructure, and I work fairly closely with Kevin. I'm in sysadmin-main as well. Most of my experience is sysadmin; I have very little development experience. Yeah, that's about it. So, Kevin, you want to go next? Sure. I'm Kevin Fenzi. I've been working for Red Hat for about 10 years now, working on Fedora infrastructure, and then probably the 10 years before that I was a community member working on Fedora infrastructure. So I've been around a really long time, but I like to think that I'm not too set in my ways. I like to try new things: if we think of something that is a good idea, or seems like it would be a good idea, I'm happy to try it. I'm nirik on IRC and Matrix, if you frequent those places. I do a lot with release engineering also, just trying to keep everything rolling along. Michal? Okay, so my name is Michal Konecny. I'm part of the Community Platform Engineering team, and in Fedora Infra as well. I work as a developer on some apps. Recently I got my access to sysadmin-main, so now I can actually mess up anything. And I'm working as an agile practitioner for the infrastructure team in Fedora, so this is something I do as well. And if anybody is wondering about the hat, it's kind of something that I'm doing for release-monitoring.org: I'm working as a mage on release-monitoring.org.
And yeah, if you read any blog post about release-monitoring.org, you can see why I have it. It's partially Fedora, because it has the blue and the white. But yeah, it's more a magical hat. Okay, I think this is all about me. Would you like to do a quick bio, even though you're not an infrastructure person? Yeah, I kind of decided to jump in. I'm actually a CI engineer for Fedora, RHEL and CentOS Stream. We focus on RHEL mostly, but also CentOS Stream. So I'm kind of representing the users of the infrastructure from that side, as heavy users of the builds and the release engineering infrastructure, and looking for new ideas. Yeah. Okay. All right. So we dive into the most-voted topic, which is onboarding. Yep, sounds good. Can somebody take notes? Okay. Mark, do you want to do it? Yeah, I'll do the first one. Okay. So let me really quickly give folks a little bit of information about how our onboarding works now, or how it's expected to work now. Maybe I'll give a few problems that we have with it, and then let other folks jump in to see if they have ideas on how to improve it or revamp it completely. So right now, we onboard folks usually in our meetings or on our mailing list. They'll come in and say, hey, we want to help work on infrastructure stuff. At that point, we'll usually add them to our apprentice group, and that gives them access to log into a whole bunch of machines and look at things. It gives them just read-only ability on those hosts: they don't have sudo, they don't have anything like that, but they can log in, they can look at stuff, they can see what processes are running and how things are put together. We have some documentation that kind of goes over the process, and I think that's one of the places where we're really bad. Our documentation is not very good.
It used to be on the wiki and it kind of got moved to docs.fedoraproject.org, but it really didn't get reworked, so it's kind of fragmented. After somebody joins as an apprentice, usually they will ask what they can do, and we'll try and give them easyfix tickets, which are tickets that we've marked as being newcomer friendly. Unfortunately, often we don't have very many of those; most things require some kind of access or permissions. But occasionally we do have things like: write a script, or submit a PR for this change, that kind of stuff. I think that part of the process is pretty well known, but then we get into where I think we really fall down, which is that there's a gap between being an apprentice who is fixing things and submitting PRs, and the nebulous point where somebody says, oh, you've been doing a lot of stuff, let's add you to some more groups and let you have more access to fix more things, and you can work on this and this and this. That gap is often where we run into problems, because new people come in, they contribute, they do a PR or two, and then they just sort of wander off because there's nothing else for them to do, or we haven't told them where to go. Also, we've kind of avoided direct mentoring — one-on-one, you-are-assigned-to-this-mentor — because we just don't have that many people; we have relatively few people working on things. And if we had, say, an influx of 10 people, how do you decide who gets a mentor or not? Or do you just assign all 10 of those people to one mentor, and then that mentor is completely overwhelmed? So that's kind of the process as it is now, and I've already pointed out some problems, but maybe there's a better way to do this, maybe there's a clever way we can do this.
One of the ideas that was thrown out a while back was to do some more video-type stuff, like a training process, where a new person comes in and we say: hey, can you go through this training sequence and learn how to do a PR, how to do this, how to do that. But again, that requires somebody to make those things and keep them up to date, and so forth. So that's kind of where things are now. I will shut up and hopefully other folks will dive in. We can add people to the video chat if you want to talk at length, or just add stuff to the chat. Do we want to address the questions people asked while you were talking? Sure, it's directly related to what you were saying. The first one, from bookwar: does the adoption of OpenShift change the onboarding practices, or can it help? More infra-as-code approach, logs/metrics. So that's about four questions in one, really. Oh, all right, I didn't realize that. So, does OpenShift change the onboarding practices? Not really, but I suppose you can get access, depending. As Kevin mentioned, it's very up in the air: we don't have strictly defined rules for when someone can get access, there aren't listed challenges they need to meet before they get it. But with OpenShift, it's all run through Ansible, so you can raise PRs to create what you need, to be reviewed by people who do have access, and they'll be able to run it. Which kind of links to something Kevin said about the direct mentoring: although we don't assign mentors to people, if you have a specific problem and you need help, you can reach out to one of us and we'll try to help you if we can. It's not quite a mentorship level, but it's one-on-one helping per problem. That's easier to manage, because it's only a set amount of time, and we know exactly what we're going to be doing. So that can be a thing to do with OpenShift.
You can ask someone who does have access to help you roll it out and give you access to the namespace in the project. That's more fine-grained access, so it's probably easier to get. As for the more infra-as-code approach: we generally use Ansible for everything anyway, so infra-as-code is pretty well ingrained in the team. Easier access to logs and metrics: the fi-apprentice group gives you access to the log servers. As for metrics, Nagios is open read-only to anyone, although that doesn't really give you full metrics, and metrics is probably something we're not great at, to be honest. Yeah, I don't know if that answers your questions; if you have any more based off of that, or if anyone else wants to jump in with something, please do. Well, the OpenShift metrics are actually pretty interesting from time to time, and we don't look at those very often. But I don't know what the access is on those, whether they're available or not; I'll have to check that. Yeah, I don't know either. Neil, we talked about Prometheus for some of our apps a while back. A lot of them I would like to just get into OpenShift, because it's kind of easier to deal with at that point. But yeah, cool. Anyone else? He's not here, but I even found out that we have, in fedora-infra, some repository that has a Flask Prometheus integration. I wasn't aware that we had anything like this, so it would probably be good to actually document that somewhere for apps. So on metrics and statistics: in this sense, are we talking about asking new folks to look at those and try to come up with improvements, or notice patterns, or something like that? Yeah, I was writing this question when you were talking about the SSH access that you give to people so they can actually go and see for themselves what's going on.
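As an editorial aside: the Flask Prometheus integration mentioned above boils down to exposing a /metrics endpoint in the Prometheus text exposition format. Here is a minimal stdlib-only sketch of that idea; the metric name and help text are invented for illustration, and a real app would use a library such as prometheus_client rather than hand-rolling this.

```python
# Sketch of what a Flask/Prometheus integration does under the hood:
# keep counters in memory and render them in the Prometheus text
# exposition format for a /metrics endpoint. Names are illustrative.

class Counter:
    """A monotonically increasing metric, like prometheus_client.Counter."""

    def __init__(self, name, help_text):
        self.name = name
        self.help_text = help_text
        self.value = 0.0

    def inc(self, amount=1.0):
        self.value += amount


def render_metrics(counters):
    """Render counters in the Prometheus text exposition format."""
    lines = []
    for c in counters:
        lines.append(f"# HELP {c.name} {c.help_text}")
        lines.append(f"# TYPE {c.name} counter")
        lines.append(f"{c.name} {c.value}")
    return "\n".join(lines) + "\n"


requests_total = Counter("http_requests_total", "Total HTTP requests served.")
requests_total.inc()
requests_total.inc()
print(render_metrics([requests_total]), end="")
# → # HELP http_requests_total Total HTTP requests served.
#   # TYPE http_requests_total counter
#   http_requests_total 2.0
```

The point of the format being this simple is that any app can expose it, which is why documenting one shared integration for all fedora-infra apps would pay off.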
And I was thinking whether different ways of deployment reduce the need to give people SSH access, and let you do it in a more UI-driven way, where you can go and search through the logs without SSH-ing into the actual server. Right. Yep. Maybe we'll switch to the other note I added here. So I was wondering about the titles for this work. I often meet people who sit on the channel and say: how do I become a DevOps person? Where can I get a course on DevOps work? And I'm like, I actually know where you can get hands-on experience, but they are not looking only for hands-on experience. They also want a title, which they would use as a reference in their CV and stuff like that. So I was thinking, maybe adding these titles, which people are looking for as a job skill they can sell when they go into a job interview, will also increase the adoption of "I'm interested in onboarding to this infrastructure work." That's an interesting idea. And actually, having those titles might be helpful to show progress for somebody between being an apprentice and being a full-access type of person. So yeah, that's a good idea. I mean, I regularly meet people who actually want to pay for someone to give them a certificate that they are DevOps people, which they can use for work. So if we can provide something like this for free, in exchange for that work, maybe it will lead to a more engaging experience for us. So, Kevin, you mentioned the gap between people being apprentices and people being full members. And it sounds like the solution is to allow people to assume advanced roles, even though it's not possible to verify that they are fully ready and that they will not do anything bad. I mean, in my experience, people are motivated to do things well, and they will not do harmful things.
As long as they know what is outside of the scope allowed for them. So I think a soft agreement that, okay, in principle you can touch almost everything, but you work on maybe this service, and for now this is your part — this should be enough. Looking from the outside (I'm part of Fedora, but I'm not part of this engineering at all), it is very hard to get insight into what is happening inside release engineering, and to get access. I mean, people have been asking for access for years for some parts, and it took ages. And I think this could be fixed just by assuming that people do the right thing and giving them access when they ask for it. Yeah, some of that is historical, I think, because at least in infrastructure, the way the permission system was designed is that there's a group for each app. So if you're working on app X, you would be given sysadmin-X to work with that particular application; things were kept very separated by application. But now things are in OpenShift and are much more consolidated, and we don't really have people who only work on one specific application, or at least not very many; most people work on lots of stuff. So, yeah. So we're coming to the end of this 15-minute block, if anybody wants to add anything else. We have one question in the question-and-answer block. Go ahead. The question is: how can community people help make releng infrastructure easier without being part of the team? That is a question I was hoping someone would ask, but was also fearing someone would ask. It's kind of the golden question, really, and it definitely doesn't have a perfect answer. One thing we definitely always need is help with documentation.
Even if it's a case of joining someone while they're doing something, to help them document it afterwards. Because, as you know, when you're doing a task yourself and you write it down, you leave out things that are assumed in your head but might not necessarily be obvious to someone else. So that's one kind of easy bite to take. Another is to just try and learn. If you see a ticket that's of interest, even if you don't think you'd be able to do it, ask questions about it. Someone's going to have to do it, so it's better if you ask questions; we can answer them, and maybe over time you'll gain the knowledge (slash access) to be able to do it yourself. We want to share knowledge where we can. It's just that sometimes we don't know what to talk about. Every week, or every second week, we could do a talk in our weekly meeting, but we don't know what topics to discuss. So that's another way: tell us what you want to know, and we'd be happy to share if we can. If anyone else wants to jump in there — or that answers your question as well. All right, we're at the 15-minute mark. Shall we jump to the next topic? Yeah, the next most wanted is five-year plans. Cool. So I won't pontificate too much, because like I said, I really want to hear other people's thoughts about this. But we're generally very reactive: somebody comes to us and says, we want to do this, can you help us? Or we react to problems that occur, or whatnot. And it would be very much to our benefit to be more proactive. It's really hard to predict, though. You don't know what the future is going to be like; in five years, things could be very different from the way they are today. I think there are some general trends that are going to continue, like moving stuff to cloud, OpenShift, automating, GitOps — all these things are going to keep coming along. But are there other things that we can do to prepare for that future?
Can we come up with things that we should be thinking about now, or working on now, that will help us in a year or two years or three years? And again, it's really, really hard to say what those things might be. It's a crystal-ball, predicting-the-future type of thing. But like I said, there are trends, and there are things that we can think about. There are things that we got in on early, and it was greatly to our benefit: Fedora infrastructure was one of the first big users of Puppet, and then one of the first using Ansible. We really pioneered a lot of stuff in that area because we were there, ready to go. So I'll shut up now and let other folks dive in. But I'd really like to hear where you think we'll be in five years, what we should work on now that will help us in the future, that kind of thing. So, I jumped in, Kevin, because this one caught my attention pretty significantly. What do you think about writing documents — effectively a series of things that look like press releases — to identify what we think it should look like in the future? Like, "Fedora infrastructure moves X workload to Y location," and then have that as sort of a guiding light. That might be interesting. I know that some folks in our parent organization have started doing a three-year plan, where they try to predict the future and lay out some stuff, and that could be a mechanism for doing this. Yeah. So the reason I think that's an interesting idea is that trying to write a five-year design document is an outrageous task, right? But writing something that is a vision document — and trying to keep it very short — is different.
And then leveraging a frequently-asked-questions kind of response model gives you something that you can granularly modify over the five years that it's supposed to be in action. So that's something that I think goes a long way in terms of creating a vision plan that people who are apprentices can get behind, and then you have a kind of business practice that they know is going to drive whatever they need in terms of their design and initiatives. I like the sound of that. One thing I would say is that five years is a very long time in tech. It is, it is, but yeah — it's something I really feel like we can do in terms of vision. I don't think you could storyboard five years of activity, right? That would be outrageous. So I think keeping it short, and identifying it as a really-nice-to-have goal to keep a cohesive vision, is great. But then there'll be lots and lots of smaller change proposals that we'd expect to just fit into one of those questions, in the frequently-asked-questions style. All right, I have to say stuff now. All right. Well, this is going to hurt a little, but I think thinking about a five-year plan is probably a bad idea. Observing how we've done even two- or three-year plans, or even five-year plans, within the wider Fedora context: that hasn't worked very well. Mostly because we as a community can really only conceptualize so far out — all the planning that we manage to do within maybe six months is already hard enough as it is, and going out to a year already gets slightly fuzzy. When we get to two years or further, those plans tend to not survive first contact with anything.
But yeah, like we are saying in the chat, a five-year goal might make more sense, rather than strictly going out and trying to do a plan, because we're all just going to wind up with a whole heap of disappointment if we do a five-year plan. But maybe we could think about a structure of things that we want to try to accomplish within the next year, and then have a midterm and a long-term goal — like a three-year goal and a five-year goal — with more planning for the one year, and kind of go from there. Because I think the further out you go, the harder it is to actually stick to it. Things just change too much for that to be reasonably workable. Yeah, I agree with that. I think that's pretty similar to what they were saying: there's no point in setting something down in stone that you want in five years, because realistically it'll never happen. Sorry, Alexander. I guess I was going to say the same thing Leonardo said. So we shouldn't be setting five years as a deadline to achieve all the stuff; it's more the scale of the things which we're talking about. We can set a couple of goals of five-year size and then move in that direction. It doesn't mean that we promise we will deliver that in five years, and it doesn't mean we actually have a direct plan for it, but it's a direction — the scope of the work which we want to achieve — and we can move in that direction as fast as we can, right? Yeah, like moving documentation off the wiki is something that you can set out as a five-year goal. That migration strategy is somewhere you'd go, but it's not necessarily telling you exactly how you're going to do it, or what — sorry, what documentation model you're going to use. All those things have to happen in very granular changes. That's a good example, but can you think of any other ones? Because I'm thinking of lots of things that we want to do that are reactive again.
Like, we should move everything that's RHEL 7 or RHEL 8 to RHEL 9, but that's reactive to RHEL 9 being out. I don't know if I consider that reactive. We could generalize this to be a much more proactive thing: instead of specifying that we're talking about RHEL 9, let's say the more proactive thing is, let's aim for making it possible for us to redeploy our infrastructure on the newest targeted platform on a regular basis. Yeah, I'd like to do it on a rolling basis. Sorry, you too, Neil. No, that's good — I think that's a super exciting idea. But just to say, instead of naming some one version, just say: okay, we'll be able to roll to latest every six months, or every year, and that's our goal. And for it to be a five-year goal, it certainly doesn't have to be something we think we can achieve today. Yeah, and if we generalize that idea, it makes it more proactive, because we're talking about making the mechanics of doing those upgrades easier and more regular and straightforward. As opposed to: we've got a deadline, we've got to kill off this RHEL 7 box, or God forbid this RHEL 6 box that's still floating around, screwing up our souls. And it makes us think about: all right, with the infrastructure that we have today, how do we actually get this running? How do we get this deployed? How do we redeploy this? How do we advance it? All these other things. Thinking about it from that perspective changes a reactive action to a proactive one. At least in my view, that's how my team at Datto has thought about things. Yeah. And it gives everyone a schedule, and it makes the process the thing to blame rather than any people. That way we can look at it and say: okay, so we're rolling in six months, and everybody's working towards that goal.
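The "roll to latest on a schedule" idea above is, at bottom, an inventory question: which hosts lag behind the current target release? A hypothetical sketch of that check — the host names and version numbers are made up, and in practice the data would come from Ansible facts rather than a hard-coded dict:

```python
def hosts_needing_upgrade(inventory, target_major):
    """Return hosts whose OS major version is below the target.

    inventory: dict mapping host name -> installed major version.
    """
    return sorted(
        host for host, version in inventory.items() if version < target_major
    )


# Hypothetical inventory snapshot; real data would come from Ansible facts.
inventory = {
    "batcave01": 9,
    "log01": 8,
    "proxy01": 7,
}
print(hosts_needing_upgrade(inventory, target_major=9))
# → ['log01', 'proxy01']
```

Run on a schedule, a report like this turns the upgrade cycle into a standing process rather than a scramble each time a release goes end-of-life.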
Where we have, say, a security exception, and a security exception looks like this; and over these five years, our goal is to make our ability to manage a zero-day exploit of some sort fit within such-and-such an SLA. And that way, we're not spending our time trying to tell everybody what tool they're going to use. Although, I mean, I do also think there are 12-factor-app kinds of goals that we would have to write. So, anyway. Just a few more minutes. I want to ask a question: are you considering more of an infrastructure focus, or more of a Fedora release engineering focus? Because there are two directions of development here. One is developing in terms of generic infrastructure, continuous everything. And the other plans which we can consider are: are we still going to build Fedora the way we build it in five years? What are the changes? How do we maybe want to change the Fedora compose to stop being a compose — which is honestly the goal I would love to have. Does it fit into this area, or do you see it as something a different team needs to work on? So yeah, I was more thinking of the infrastructure side. Release engineering has to work with the other parts of the project to decide those things, right? I mean, release engineering is the one who has to implement them, but the Council and the devel list and FESCo and all those folks need to say: here's what we want to do. So I think that's a group goal, maybe a group task. But it falls into the same trap of being reactive: here's another deliverable, we'll deliver it just the same way we deliver everything else, etc. Yeah, this is where I have a feeling that if we wait for the Fedora Council to decide such things, this will never happen, because the Fedora Council doesn't have expertise on that level.
It doesn't have an understanding of what's possible, either. And I think the Council in this sense also sits and waits in a more reactive state, because it waits for feedback from actual engineers, which would come as requests for what we need from the Fedora project and the Fedora build process. So I think we are lacking leadership in this case, and the releng team can actually be that leadership. It doesn't mean they can do it alone — of course it's a cross-distribution effort — but this thought leadership is, I feel, missing completely. I think you're right. And I think one of the reasons for that is that right now, the one person paid to be on release engineering is Tomas, and that's it. I actually do release engineering in my spare time. So there's kind of a lack of manpower there, I think. But this is something where, if you wanted to change it, it would be interested parties getting together, putting together a proposal for it, essentially, and driving that forward. Okay. So the next topic we have is how to handle Fedora Infra tech debt, and with the same amount of votes, ways to improve communication. So it seems communication is more important than the debt. Okay. I will write the notes, because Mark is the one who actually added it. Also, just to note, at the top of the hour we're going to take a little break and let everybody go get coffee and walk around for a few minutes. That would be good. Yeah, so with this one, it's a case of: we have multiple ways to talk to people, and it's not always clear which to use. If we announce an outage or something, the current way is we raise an issue on the tracker and we mail the mailing list. But to be honest, we get no feedback as to who's seeing this. Oh, and we use the status page, which should be where people check, hopefully. But we don't get much feedback as to who's seeing these things either.
If we want to ask a question, we don't know whether we're not reaching the right people, or whether people don't feel engaged. It's hard to really know. Discourse is something we as a team don't really use much; maybe we should start. It's kind of a matter of wanting to know where people are and where they want to talk. Matrix, IRC, mailing list, Discourse — what should we use? Should we use all of them? Should we use only one of them? The more you have, the harder it is to track, so my preference would be towards using one or two of them for main communication. But which ones those should be, I don't know. It's kind of an open question for people here, and a tough one to answer. So, at least for me, I see the mailing list posts and I see the ticket items, but that's because I'm subscribed to the releng ticket tracker; I think most people are not. The mailing list post is useful. There was something I suggested in another session earlier: for the change announcements, non-technical people are largely on Discussion (Discourse) rather than on the mailing lists. Maybe we should have those mirrored into Discussion as non-repliable topics — basically locked topics that are auto-mirrored from devel-announce, or releng-announce if we have one, I don't know — whatever announce mailing list we have. Maybe those should be configured to mirror into Discourse without letting people reply, so that people can see those announcements happening. For real-time chat, I personally rely entirely on Matrix now; going back to IRC is painful and sucky.
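The mirroring idea above amounts to turning each announce-list mail into a locked forum topic. Here is a sketch of just the transformation step, using only the stdlib email parser. The payload field names (`title`, `raw`, `category`, `locked`) are assumptions for illustration; a real mirror bot would map them onto whatever fields the Discourse posting API actually expects.

```python
import email
from email import policy


def announcement_to_topic(raw_message, category_id):
    """Turn a raw announce-list email into a forum-topic payload.

    The dict keys here are illustrative; a real mirror bot would map
    them onto the forum's actual posting API.
    """
    msg = email.message_from_string(raw_message, policy=policy.default)
    body = msg.get_body(preferencelist=("plain",)).get_content()
    return {
        "title": msg["Subject"],
        "raw": body,
        "category": category_id,
        "locked": True,  # mirrored topics should not accept replies
    }


raw = """\
Subject: Outage: builders maintenance
From: admin@example.org
To: announce@example.org

Builders will be down for maintenance.
"""
topic = announcement_to_topic(raw, category_id=42)
print(topic["title"])  # → Outage: builders maintenance
```

Keeping the transformation separate from the posting step means the same payload could feed Discourse, a status page, or anything else that wants the announcement.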
I like using Matrix for that sort of thing, and I don't know how the rest of the community has been doing it, but I've seen a fair bit of adoption, at least in the Matrix rooms that I'm in, where a lot of people are coming in from Matrix rather than from the IRC bridge, and the signal-to-noise ratio on the IRC side has gone way down — it's more spam than not from IRC. I don't know if that jibes with other people's experiences, but that's pretty much what I've been seeing. I've actually been seeing the most spam from Telegram bridges, the second most from Matrix, and a few from IRC. But the rooms I'm in don't have Telegram bridges, so that's probably why I don't see those. Yeah, Telegram is horrible. Yeah. I'll just leave it at that. We should just all move to Slack. Oh, don't say that. No. I thought I'd get that reaction, all right. Oh yeah, one thing I left off which I just remembered was blogs. They're also something we use to communicate what we're doing. I don't know if people read those or not. Funnily enough, I think Neil — no offense — is probably the worst person to answer this question, because he already seems to be kind of everywhere anyway. What we want is some larger reach into the community, to people who wouldn't be as active. Because one of the things I feel is that there's a core group of people who see everything — the ones who are always around — and if we could reach beyond them, that would be better. Yeah, if there are folks in the chat who have opinions on this: where do you see infra and releng stuff, or do you, and where would you look if you were looking for it? So I've got a bigger opinion on this one, I guess, than I thought I did. I like the idea of having just generally a fairly large scope on the reporting.
So having the blog post be the foundation and then extracting the information into the other locations seems to me like a much better way of handling it, because then you can have specific messaging that you think covers the impacted groups and what you want them to do, and you have a location for them to stay tuned to and a place to update with the information. And then you can centrally link anything that would be leveraged for the communication, right? Like Bugzilla, or the issue for the outage, or whatever — it can just be linked back to that central location where you have all of the content that you might want associated with the outage or issue. And one more thing: just generally, being able to collect all of that content in, you know, a JSON format would make life hugely easy, to give everybody exactly what they want. Yeah, that's a good idea. I'll just throw in here a quick thought about Discourse versus mailing lists. I'm an adaptable person; I can use whatever. But the thing that bothers me about Discourse is that it's a pull model, so you have to remember to go to the site and read it, and it's presented in the way that that site presents it to you. Whereas email is a push model, where you have automated processes that deal with it. All of your mailing lists, all of your stuff comes through your filtering, and it's presented locally the way you want it presented — in mutt or Thunderbird or Evolution or however you want to see it. And somebody made a very good point in an LWN conversation about this: all the old-timers, all of the folks who have been in open source for many years, have an elaborate system of email filters — of where things are filtered to, and of knowing when to go through their mail, and so forth.
And people who are new to open source don't have that set up, and they are much more comfortable with a place where they can just go and read, and it keeps track of things for them. They don't seem to need to set up these elaborate filters and everything. So it's a real dichotomy: for the older people email is a lot better, for the newer people Discourse is a lot better, so it's difficult to bridge that gap. I don't know how we do it. I mean, just an opinion. Yeah, I find with filters I've been setting them up and changing them constantly, but it just leads to missed mail. Well, I mean, that's one of the reasons I don't use filters very much. Yeah, I filter tremendously differently now. I mean, I relied on email filters when email was text, but I spend a lot of time now reading HTML-based email, and it is not as easy to filter. Yeah, and I'm using the inbox-zero rule, so I actually have tons of filters and plenty of folders in use. So I think that this is a secondary issue, because nowadays everything is bridged to everything else and it's kind of easy to forward stuff. The question is what you actually want to display to people. And I think that a human needs to write text in a way that is accessible to people, and this is the big problem. Once you have it, I mean, you can fling it to any channel you want. And I feel that communication is missing at this level: we need somebody who actually distills what happened during the last week or the last month and presents it in a way that is legible to people who are not intimately involved. This is something that needs to be done. Yeah, and that takes a lot of time and effort. The perfect example that I'm thinking of is Mo Duffy's summaries when they were redesigning the websites. She would do these blog posts that had mockups, explaining the thought process and how everything went.
They were huge and long, but they were really, really well done and really engaging. Everybody read them, and I'm sure it took her tons of time to put those together, but it helped a lot. People really understood the process a lot better. But even at a lower level — okay, that's a very extreme case — a few paragraphs, two paragraphs of text, shouldn't take that much to write. I think the problem is that this needs to be done by somebody who is involved in many things, enough to understand what's going on and have an overview. It's hard to do from the outside, and it's hard to do for, like, an intern, I think. So one of the things that I was thinking is that whatever location it is, it should kind of represent a collection of that kind of separation of duties, right? So one, there's a group of people who are working currently, going through a thought process and moving through things. Two, there's a status: simply, this is where we are and where we're trying to get, right? And then after that, there's a review — kind of a review and correction, right, that goes through. So you do the post-mortem. And being able to have all of those in a single location, with like one basic identifier, would be a huge benefit. And then however you want to syndicate it, right? That is up to everybody else's tools. That's how the-new-hotness works; somebody here, you know, puts a lot of time and effort into it. Yeah, the-new-hotness is actually something I'm working on, but I'm not the one who wrote it. And about the summaries: we are sending the CPE weekly summary every week, which covers the work that CPE did, part of which is infra and releng. And we are doing the quarterly reports each quarter. So we are trying to share this as well.
These are only shared on the Community Blog, but we are doing them each quarter. All right, so that's the end of this time block, if anyone wants to throw any final thoughts in on this. And now we have a quick break scheduled. So why don't we stop here, everybody go get coffee and walk around, I'll go feed the cats before they attack me, and we'll be back here. Did I say ten after? Fifteen after? But you can get back before that and chit-chat and add Q&A stuff for the Q&A talk, etc. All right, shall we get back into it? What's our next topic? Okay, the next topic — the most-voted topic — was how to handle Fedora infra tech debt, which will be on me, so someone else can take the notes. Okay. Okay, let me just make it a bit bigger. Okay, so currently we have a lot of tech debt in the infra and releng team. We are trying to address this, but most of it is still growing, and the longer we don't work on it, the more it grows. I tried to create a list of it, which I'm sharing right now, and I want to discuss here what you think is the best way to approach this as infra and releng, and whether you think there is something we missed in the tech debt as well. So I will just recapitulate what is on the list. Can you post that link in the chat? I don't think this is — okay, it's not visible, because I don't think this is actually reachable by anyone outside Red Hat. Let me look at whether I can share the link outside, but I don't think it's possible. I can only do general access restricted to Red Hat. It's internal to Red Hat. Or maybe we could export it to another one or something, just so folks can look at it. Yeah, okay, I can save a copy or something. Okay, I'm just not sure where to. I can create a copy, but I don't have any other Google account than the Red Hat one, and everything I create is restricted. If you want to keep talking about it, I'll see if I can share it.
Yeah, I think if you go to share and copy link, there's another option there to be able to do it. I don't think there is anything on the document that isn't visible to the public anyway. Okay, so let me just recapitulate the list right now. So, first, we have some apps that are still in Python 2. The apps I'm listing here are only those that we consider critical — I didn't look at the others, only the tools that we actually maintain. Well, not only the critical ones, but those that we maintain as the CPE team, not as infra and releng itself. So there are still some apps in Python 2. For some of them, how to say it, we don't know what we should do with them, because we don't really want to keep them; they are just more maintenance for us. But we don't have spare cycles to actually port them to something newer. I have the link here, but there are only three such apps right now, which is not that bad, but still. Next thing is fedora-messaging. The most blocking apps here are FMN and Badges, which are really tied to fedmsg. Until all the messages they are consuming have fedora-messaging message schemas, it will be hard for us to actually get rid of fedmsg and switch everything to fedora-messaging. Right now, we at least have a bridge that converts fedora-messaging back to fedmsg, so those old apps can still consume the messages, but it would be nice to just have everything on fedora-messaging. Next thing is missing documentation. We have plenty of things that are not documented at all. We have apps that were just created and nobody ever wrote documentation for them, and this adds up each time somebody changes something in the app. Out-of-date documentation is another one, a continuation of the missing documentation.
There we have the documentation, but it's out of date, and we usually lack somebody with knowledge about the app, so it's hard to actually update it. Missing unit tests: we have plenty of apps that don't have unit tests, and it would be nice to have everything tested — it's better for any new contributor who wants to try out the app. Next one is OIDC support; not every app has it. There aren't many apps left that still miss it — we have a ticket for it — but there are some. Next thing is services running on outdated systems, which is usually caused by another piece of tech debt that blocks us from moving them to something newer. Next thing is the large number of trackers. We have right now around 130 trackers we should watch as a team, which is not really great, and we don't have that many people to actually watch everything that is going on. We are trying to at least address those that are most pressing, but we don't have a maintainer for every app we own in the infra. Automation: we still have things that need to be automated but aren't. We are trying to address this and automate things, but not everything is automated yet. Unification of tooling: we have plenty of apps using various tools, and no unified set of tools for everybody who wants to write an app for infra. Monitoring of services: I don't think we have as good monitoring as we should, but we at least have some. Map of services: this was partially addressed by creating the map of critical services. Next one is a disaster recovery plan — I don't think we even have one. And the last one I have here is major upgrades of dependencies, something that should be handled automatically; we should just watch whether an upgrade broke anything, and not have apps with outdated dependencies.
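On the message-schema item above: as a rough, stdlib-only illustration of what a message schema buys you — declaring which fields a message body must have and validating incoming bodies against that declaration — here is a sketch. This is not the real fedora_messaging API; the schema and field names are made up for illustration.

```python
# Simplified sketch of message-schema validation (stdlib only).
# NOT the fedora_messaging API; the schema and fields are hypothetical.

def validate(body: dict, schema: dict) -> list[str]:
    """Return a list of problems; an empty list means the body matches."""
    problems = []
    for field, expected_type in schema.items():
        if field not in body:
            problems.append(f"missing field: {field}")
        elif not isinstance(body[field], expected_type):
            problems.append(
                f"wrong type for {field}: {type(body[field]).__name__}"
            )
    return problems

# A hypothetical schema for a package-build message.
BUILD_SCHEMA = {"package": str, "release": str, "success": bool}

good = {"package": "pagure", "release": "f36", "success": True}
bad = {"package": "pagure", "success": "yes"}

print(validate(good, BUILD_SCHEMA))  # []
print(validate(bad, BUILD_SCHEMA))
```

In the real fedora-messaging world, schemas live in their own Python packages so that both publishers and consumers can depend on them, which is why every consumed message needs a schema before an app can move off fedmsg.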
Okay, and this is all I have on the list. I'm not sure if anybody actually had any luck with sharing the document. At least copying out the list would be nice. Yeah, at a high level I wrote the list in the HackMD document too, if folks want to look over that. Okay, yeah, I will share that instead of this document; that will be better. Okay, let me just switch it. So from my viewpoint, there's no magic bullet here. I think we just need to dedicate a certain amount of time to working on the tech debt — so, you know, each quarter we pick something, or figure something out, and try to work on that. I don't see any kind of magic solution to this problem; everybody has it, that's just the way it is. Ah, I see you actually added that to the wrong topic. Oops, I'll move it. Okay. Yeah. Yeah, I don't think there is a silver bullet. I just want to start the discussion on what people think we should do, how we should address this, how we should actually look into it, and whether we missed anything that infra and releng should consider tech debt in this case. Okay. Oh, yeah. The list of packages we are responsible for maintaining — we have, I think, a Pagure repo for this. At least I have something; let me just look for the link. Sorry about that. Oh yeah, here it is. So I pushed it there. It's an ODS file you'd have to download yourself, but that's that. Oh, sorry, I put it into the wrong section. Yeah. That's an Excel-type file in here — Sheets. Okay. Okay. So, yeah, the packages are something that we have as well; we have some packages that we maintain. Not all of them are actually in this sheet. I think Planet is something we just build inside the infra tag in Koji, and it's not really in this sheet, which is not great. So, to address something that David is saying there in chat.
I think we tried to do a situation where we had somebody as a primary contact for each application, to kind of watch that application, but we didn't get a whole lot of takers on some of the applications. So I'm not sure if that didn't really work or if we just didn't do it right. So we're actually coming to the end of this block; if folks have anything further to add here, or if anybody wants to jump in from the chat, we can add you to the video as well. I think that says we have enough tech debt that it takes us 15 minutes just to talk about what we have, before we even get to solutions. Yeah. If anyone in the community wants to take on anything, please do. We're more than happy to give pointers and help out where we can. That's a very good question, Leo. So we have — sorry, we're just about to run out of time. We do have a GitHub organization — I forget exactly what it's called, fedora-infra — which has a lot of our apps under it. So if you go to github.com you can see the apps that are there, and there are issues on all of them, so contributions can go there. And also in Pagure we have the same fedora-infra group; you'll see our repos there, with a lot of issues in them. Feel free to start contributing, and if you're lacking access or something, reach out on IRC and someone will be able to get it for you. As far as Python 3, the big things left are FMN, which is being worked on now; PDC, which we want to get rid of completely; and Badges. We are actually looking for folks to work on Badges, so yeah, that would be an excellent one to work on. We're looking for a maintainer for Badges right now. Yeah. All right. So what's our next topic? Okay, let me check. Okay, so this was done. This was done. I see that this one has three votes, and then there's questions and answers. So yeah, this is the last one we have here before the questions and answers, and it's mine as well. Okay.
So in this case: three years ago it was decided that we would get rid of some of the services we own. The reason for this was to focus on the services that are critical to Fedora and not take care of services that are just nice to have. We had a blog post where we tried to explain why we were doing this, what the applications were that we wanted to get rid of, and which apps we wanted to keep owning. The issue here is that these apps are still used by the community. In some cases they don't have any maintainer, and because we were the last ones who actually touched them, we get the issues for them. And sometimes people just complain about them, and we are not sure how we should approach this. Should we just ignore these issues, so we have time to focus on something that is more important for infra and releng? We know that this makes people angry, especially those who actually like those apps. But we are not sure how to approach it, because it's something that is hard to take on, and we don't really want people to be angry at infra and releng because we didn't take care of apps that we owned a long time ago and that are still in some kind of best-effort, fix-the-worst-issues support mode. Some of the apps we just restart, without having time to actually work on them, and in some cases we even lack the knowledge, because the people who wrote them are not part of the team anymore. Okay, so yeah, I'm waiting for any responses. I think actually one thing that we could do better is maybe set some expectations. So for those apps we could — well, depending on where they are located — have something in the top-level README, or the issue template or something, that says: hey, this app is not a priority; it's currently looking for people to help work on and maintain it; please contact us if you want to help, blah, blah, blah, that kind of thing.
Rather than have them just file a ticket or an issue and then nothing ever happens. That's the only thought I could come up with that we aren't already doing. Yeah, I think maybe just updating the README of the apps would be enough. I think Alexandra asks if we should adopt a review process and an orphaning process, like we use for packages, for applications. We could try to do that, but for some applications — Badges is a good example — you know, it's ailing, it has problems, it needs to be ported to Python 3, and we don't have the cycles. If we just dropped it, I think people would be very, very upset, even if we announced it, right? So I don't know. I don't know if that's a good solution. Yeah, we're kind of stuck: we want fewer apps to maintain, but pretty much every one of them has users, so it's very hard to choose what to get rid of, if we can get rid of any at all. And when you spread yourself thinly, the maintenance on the ones that do exist naturally slips below the standard you'd want. So for existing applications it's harder, because we got into this situation when there were no rules, and people have different expectations because of that. But I was thinking that for new applications, if you set these policies in action — first of all, because you link them with already-existing policies of the project, you will get fewer people angry at you as the team that deploys them, because they're adopted policies of the project and everyone knows why they are there. So you can kind of jump on top of that, and use the same terminology and the same concepts. And for new applications it would be easier, because people will have agreed in advance to this process; they will not be surprised when it happens this way. So they will realize that they need to build a process to create new maintainers for their apps in the long term.
So this is also part of their task, and not just the code itself. For the old applications, and Badges, I see a bigger problem here, but I guess calls to action may help more. I know you did it already, but doing a regular call to action might be good help. In the case of Badges, we tried it multiple times, but we didn't really get a maintainer who would maintain it long-term. The other app that we don't really maintain but that still keeps coming back is Nuancier. It doesn't have any maintainer right now and it's still used by folks, but we don't have anybody maintaining it, so every time anything happens with the app, we get the ticket, because there is nobody else who works on it. I think another thing that could help here is moving more things to OpenShift. Because if we have a framework for deploying, and we set that up — here's the app, we deploy it with a source-to-image build or something like that — then it's all there. And if somebody wants to take over maintenance, they just need to take over the actual maintenance without worrying about the deployment and the details of that; they can just push their changes and have them be there. So I think that would help a little bit, maybe lower the barrier to entry. I have read somewhere that you are trying to create common best-practices policies for OpenShift. I think that would really help a lot, because it will really reduce the barrier. Yeah, you would be able to separate knowledge of OpenShift from knowledge of the application, and deployment from the coding part. Yep, absolutely. And that's one thing where we're really bad right now in our infrastructure. We have a bunch of things deployed in OpenShift, but they're doing all kinds of different things. Some of them are building the image in Quay and importing it down; some of them are doing source-to-image, so they're building the image actually in our OpenShift.
Some of them are doing layered images. Some of them are doing all kinds of things. So yeah, best practices there are definitely something that we want to work on. I think our team internally faced the same problems, and we actually created a document which we call universal OCI deployment. It uses Helm charts and describes where secrets go and things like that. I think we did a talk about this at DevConf last year; I'll see if I can find it. Yeah, so maybe raise the priority of this item, because for our team it helped a lot, definitely. Yeah, I do think that ties in with what Kevin was saying earlier. We need to set expectations, and set hard expectations: these are the ones that get full attention; these are the ones that we'll really try on, but they're lower priority; and these are the ones that we'll get to when we get to them. It's never going to be popular, whichever ones you pick, but we just kind of have to make the decision. We did at least set some service level expectations to refer to some time ago, and here they are. Yeah, and another thing we'd like to do — but again, we don't really have the capacity — is that there are certain ones, like PDC, which could be split out into other apps and gotten rid of, so that would be one less to maintain. But it just goes back to proactive versus reactive: if we could be more proactive, and had the capacity for it, it might actually reduce how much we need to maintain as well. So, looking at the last note, standardized deployment: I would say that this is part of the tech debt, under the unification of the tools we are using — so that we have a similar release process and a similar deployment process for our apps, at least the apps we own ourselves. Yeah. Yep, I definitely see the overlap there. All right, so just a few more minutes on this if anyone has further thoughts before we move on.
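Coming back to the source-to-image point: a minimal sketch of what an OpenShift BuildConfig using the source strategy can look like, so that a new maintainer only pushes code and never touches the deployment machinery. The app name, repository URL, and builder image here are hypothetical, not an actual Fedora infra deployment.

```yaml
# Hypothetical source-to-image BuildConfig: OpenShift clones the repo,
# feeds it to the builder image, and pushes the result to an image stream.
apiVersion: build.openshift.io/v1
kind: BuildConfig
metadata:
  name: example-app          # hypothetical app name
spec:
  source:
    type: Git
    git:
      uri: https://pagure.io/example-app.git   # hypothetical repo
  strategy:
    type: Source
    sourceStrategy:
      from:
        kind: ImageStreamTag
        name: python:3.9      # builder image providing the s2i scripts
  output:
    to:
      kind: ImageStreamTag
      name: example-app:latest
```

With something like this in place, "taking over maintenance" reduces to owning the git repository; the cluster rebuilds and redeploys on push.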
Again, on the marketing part of this: what are the options for a person if we come and say, okay, I'm going to maintain this? What perks do I get out of it? Can I rename the project to my fancy name? Can I do something? What kind of, I don't know, gamification or whatever can we do out of this process? Funnily enough, Badges is probably the answer there. Yeah, the badge of badges — this is a thing we can do. If you step up and maintain Badges, you can get badges. Wait a second. I'm also thinking — again, it also goes back to maintainer policies. So you can say: you're becoming a Fedora app maintainer now, and this is the role, this is the title you can use somewhere. And if you are the maintainer, you can actually look at the app and add any feature you want. It's on you; you are the maintainer. From a professional standpoint, that's something you could put on your CV: I am the maintainer of this app — look how great it is, look how many people use it. It looks good. I see a question: are there cookie badges? I'm not sure if we have any. I think there are — I think if you get a certain number of cookies, you get a badge for it. Yeah, there's a whole series. I don't remember what the lower ones were, but I know that the very top one is the stroopwafel. All right, that's about time for this one. I don't know, I guess we're just at Q&A, but it occurs to me that I actually left some time for revisiting also. So let's see. Yeah, well, I guess we could do Q&A, or we could revisit some of these topics, or throw it to open floor, or do whatever we wish to do. Yeah, I will have to leave in a few minutes because I have a talk at this time. I want to ask that question which I forgot to ask in the previous session: have you followed the AlmaLinux build system — the Peridot build system — efforts, and what do you think about that and any related stuff? I haven't been following it, so I don't know anything about it.
But I think it's interesting and we should take a look. So they stopped using Koji for builds, and I was only checking the Alma version because it was presented at the CentOS Dojo. They were using Pulp as artifact storage, some Python scripts and glue, Celery, and container-based mock builds for that kind of orchestration. So I was wondering if something of that could be reused in our infrastructure as well, but it's nothing specific yet. Yeah, at least on the release engineering side, you know, we keep using Koji and we keep bolting things onto it, and Koji is pretty darn old at this point. I mean, it's served long and well, but we keep talking about a Koji 2.0, and that never happens. One conversation I had is that it is really hard to innovate in the area of building packages when you have an active project with 20,000 packages in it and you are always working within those restrictions. But this Fedora ELN effort, which we started recently — one of the reasons why we wanted it was to have a separate buildroot and separate configuration, but it can also be an experimental playground, because it has a reduced number of packages, like 2,000 instead of 20,000. It doesn't affect the main Fedora deliverables and can be more flexible in its approaches. So yeah, I was thinking that maybe we could deploy a small MVP, a prototype of a new super fancy build system, just for that specific buildroot, and then see if we like it and want to extend it. Yeah, that's a good idea. Really the only thing going on right now in the Koji space that I know of is that the IoT folks are moving to osbuild, so the Koji osbuild plug-in stuff is now working. We'll see how that works out for them. I think in the chat there are the CommuniShift questions. Can you talk more about this? Is the topic alive, or what's the state of it? I guess maybe Mark would be best to answer that. Sorry, can you just repeat it again?
David was mentioning CommuniShift; we should force David to join in and talk about it. Oh, so where are we at with CommuniShift, is it? So it's started — we started looking at it, doing some work, doing some implementation. To be honest, I don't know how much of this I'm supposed to say publicly, but it's coming. I don't know how long, a couple of months hopefully, but... Coming back. Coming back, yes. Well, I would say this is CommuniShift 3.0. It's a new one. I hope it's 4.0. I don't actually know. But yeah, it's coming. I think the reason I'm kind of hesitant to talk about it is that we haven't clarified the rules and usage and stuff yet, so I don't want to make any promises that I can't keep. But it will be there in some shape or form. And that actually leads back to some of the discussions we just had, because having that CommuniShift, I think, will also help us onboard people. And I think it will help us find app owners too, because they can use it as a development instance and have access to deploy the app — the new version that they want to do, or whatever. So I'm really hoping we can get that rolled out soon. David Kirwan probably has a good idea of this. I'm going to put my neck on the line here and say it'll be here before the end of the year. So yeah, you can blame me if it's not, but I'll blame David, so it's okay. Yeah, so David says the cluster is up; it's actually running in AWS. So it just needs some work and then it'll be ready. Yeah, I think there are possibly some clarifications around legal issues related to GDPR as well that have to be decided. I'm not 100% sure where they are — that's kind of above my pay grade — but as I heard, they might need to be sorted before we actually release it. Just because, if we give free access to everyone, we need to make sure the data is okay. I guess that's the same story as for Copr, and for any service which we start to provide to a wide audience.
We are forced to create certain rules — terms of service, or something like that. Otherwise, this is becoming... We're not in that fully open world where everyone trusts everyone anymore; we have to comply at certain levels. Yeah, I see Neil posted the old Koji 2.0 talk from the 2016 Flock. So let's see, do we want to do some Q&A? I guess we're just kind of open-flooring it for the rest of the time. How much time do we have left? We're booked until half past, yeah. I kind of wanted to go back to the five-year plan. Yeah. Because it seemed like there was some question about the scope. And, you know, Sandra mentioned RelEng versus Infra in terms of the planning process, and I was curious if maybe there was some way to separate them that felt very clear with respect to who was defining the process. Do you suggest that we separate the Infra group from the RelEng group and set up different directions for them? Well, maybe that we don't even worry about it. Just set up the... Create the tenets and give it some body, and then if something is determined to be releng rather than infrastructure in direction, we just cut it out and hand it over. Does that make sense? Yeah, I think... There shouldn't be any border on what happens in the five-year plan. It shouldn't just naturally devolve into an argument over whether it's release engineering or infrastructure management. I kind of have the view there — obviously, you could disagree or not — that if we're setting the five-year goals or visions, we should set what we'd like to achieve regardless of whether it falls into release engineering, infrastructure, or another bucket. And try to achieve it, yeah. I don't think they should really be separate, because while there isn't always huge crossover in the goals, there's definitely huge crossover in the people trying to reach them. So I think they should be kept as close as possible. I think so too.
And I think it's hard enough to define a five-year goal without having to worry about separation of duties. Yeah. And they will overlap sometimes. Like if one of our five-year goals was to, say, I don't know, move more services toward the cloud, that's going to affect both. The broader the scope of your goal, the more likely it is to affect both. Yeah. And I think it'll show a lot of interdependency, and the resulting activities will be a little bit more of a shared responsibility, right? So that feeling of being responsible will land on the shoulders of everyone. And there'll be a little bit easier time of handing off tasks in both directions. I think that... If you focus on migration, for example, to cloud services, you sit within the knowledge of what applications you run currently and you try to migrate them. And you put a lot of resources into that. And even though you maybe have issues, you try to solve issues within the set of applications you're given; you don't try to go outside that set and say, actually, because I want to go to the cloud, I want to also change the entire usage of this and change the tools we're using. You feel like that's way too big of a task to resolve your current problem. So if we focus on these tasks alone, it doesn't generate this larger move or evolution of the whole release engineering process or infrastructure. And I'm wondering if we should have a separate, dedicated move in that direction so that it will help in such cases. So people don't feel that they are too small to make these moves and steps by themselves. They are not alone in that, and they can lean on some larger effort. And that seems to me like something that we would want to do super regularly, like an operational planning meeting, right? Like you'd want to have... I mean, this is a perfect example of that, right?
We have sort of minimal documents to read and to manage for the planning, but then that's effectively what's happening, right? So I think the thing that's missing is effectively the documents right now, right? And then we could be ready to do this in the next meeting cycle. Yeah, I think today we've discussed a lot of the how and, I believe, too little of the what. Oh, yeah. Alexandra, I didn't hear any of that. It never uncompressed. So should we go to the Q&A? There's a couple of questions there. How can community people help make releng's and infra's job easier without being part of the team? Oh yeah, this was earlier. So one thing, and I don't know how to communicate this, but we wrote up this handy "how to interact with our team" document that's up on docs. And it goes through... it's basically a little checklist-flowchart type of thing. You know, you have an issue. Is it a security issue? Can you not log in, etc.? And it tells you what to do. I think if more people followed that... basically what it tells you to do is: is it an urgent problem? Okay, tell somebody. If it's not urgent or security or whatever, file a ticket. And so I think that sort of thing helps us avoid the interruptions. But if you don't know the process, you don't know not to interrupt people. So, I don't know. Yeah, maybe we should periodically post that on Discourse, or put it as the banner on a couple of the IRC/Matrix rooms or something like that; that might just get it out there. We do post it at the weekly meeting, but I think sometimes when you see things too regularly, you kind of forget they exist altogether. Also, clear descriptions of problems. But again, this is just kind of a personal thing. I don't know that we can tell everybody to be very clear when describing their problems. One thing that happens a lot in my experience is people say, I'm having a problem with Pagure.
Okay, are you talking about pagure.io or src.fedoraproject.org? Yeah, I don't know. But again, that's something that, if you're new, you wouldn't know is confusing. It's not even clear that src.fedoraproject.org is a Pagure instance, even though it is. So yeah, I can't think of anything else for that. I mean, it's impossible to see the dividing line sometimes, right, for new people. I mean, we all have to be trained not to just say "it's slow." I think that's a good place to start. And I think the best thing we can do is to have, you know, a couple of "how to ask questions" sessions, to get them to be more specific. So there's another question there: there's increased usage of Btrfs for infra/releng servers. How did this come about; was it requested, proactive, or reactive? Curiosity: it fits certain use cases, and there was an internal advocate for the desktop switch some time ago, and cloud more recently. Were those a factor? Are there any features you're taking advantage of? Well, once again, we are reactive. So I've had so much trouble with the 32-bit ARM builders. 32-bit ARM, I'm so glad it's going away, but it's still not gone. So about a year ago or so, we were having problems where the 32-bit ARM builders were crashing. Basically, they would hit kernel oopses and fail the build they were working on, or restart, or lock up. And oddly, they're doing this again. But we tried a bunch of things to trace that issue without much luck. And as I was trying various things, I tried a Btrfs install and it actually went away. The issue seems to be a weird XFS issue with 32-bit, because we were using XFS on them. So we moved those builders over for that reason. But I moved all the builders over this last time because the cgroup handling, I think, is an advantage for us. Also, the compression is probably helpful for builds, just because it increases the throughput of things.
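As a rough illustration of the compression piece, a builder volume with Btrfs transparent compression could be managed with an Ansible task along these lines; the device, mount point, and compression level here are placeholders, not the actual Fedora infra configuration:

```yaml
# Hypothetical Ansible task: mount a build volume with Btrfs transparent
# compression enabled. Device, path, and option values are examples only.
- name: Mount build volume with zstd compression
  ansible.builtin.mount:
    src: /dev/vdb
    path: /var/lib/mock
    fstype: btrfs
    opts: compress=zstd:1,noatime
    state: mounted
```

With `compress=zstd`, data is compressed transparently on write, which is where the throughput gain for builds would come from.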
So it initially was reactive, on the 32-bit ARM builders. And then after that, there were just lots of little advantages over XFS or whatnot. So yeah, that's about it. We're once again having a problem with the 32-bit ARM builders, and I think it is the most recent Fedora 36 kernel that's causing the problem. I'm trying to slowly migrate them all to an older kernel to see if that's better, but I haven't looked at them this morning. So it's just Btrfs, and I forget, ext4 or XFS for /boot. ext4, maybe. But yeah, they're all Btrfs. And I have changed our default Fedora VM installs to use that also, so anytime we're reinstalling stuff, like the Koji hubs are Btrfs, the Koji packages are Btrfs. And of course, all the RHEL stuff is still on XFS. And of course, /boot/efi is VFAT, so that's always fun. Let's see, I think that's all the Q&A. So any other questions in the chat? Ask us anything. Okay, to Alexandra: I thought the CPE initiatives were a great thing. But I was also thinking that, you know, in terms of the five-year plan, we really want to see the vision. The CPE initiatives are very structured and tend to have, again, a design document. And I think we probably want to expand that more, and free people from the need to specify exactly what needs to happen. So one of the things, many years, probably four or five years ago now: I had a goal for one year. I was going to try and make it so that we didn't need to do infrastructure outages anymore. We didn't need to schedule outages. And the big thing blocking that is our database servers, because, you know, you have to reboot the database server. And I actually had it so that I had them replicated. I had a replication setup, so you could reboot one and switch to the other.
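A switchover of that kind could be sketched as an Ansible play like the following; the host names, paths, and config file are hypothetical, meant only to illustrate the sort of steps involved, not the actual playbook that was used:

```yaml
# Hypothetical failover play: promote the streaming-replication standby,
# then repoint an application at it. All names and paths are examples.
- name: Promote the standby before rebooting the primary
  hosts: db02.example.org
  become: true
  tasks:
    - name: Promote the PostgreSQL standby to primary
      ansible.builtin.command: /usr/bin/pg_ctl promote -D /var/lib/pgsql/data
      become_user: postgres

    - name: Point the application at the promoted server
      ansible.builtin.lineinfile:
        path: /etc/myapp/db.conf
        regexp: '^host='
        line: 'host=db02.example.org'
      delegate_to: app01.example.org
```

Even in sketch form it shows why the process is fragile when run by hand: promotion and repointing have to happen in order, on different hosts.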
And I got that all working, except it was a manual process. You had to say, oh, I'm going to reboot the database server, I'm rebooting it, bring up the right thing here, change this out of replication mode, etc., etc. And that just seemed more prone to failure than just rebooting the database server at that point. So I gave up on it. But that's kind of one of the things that I'm thinking of as a long-term goal, where you say, look, we want to make our infrastructure robust enough that we don't have outages, and we can just do a rolling update of things, and users don't notice any service change or whatnot. Or another example might be to be able to reinstall all of our staging in a pretty short period of time, and have it automated, so you could say, boom, take down staging, rebuild it. And once you have that, you could actually use it to deploy another production somewhere else if you wanted to, or other people could use it to deploy production or something. But I guess those are more like one-year goals rather than five-year goals, because in five years, I don't know where that's going to be. Well, I mean, I think it's great. I would probably be more like, you know, maybe we want our applications to each have their own isolated copy of the data, right? The twelve-factor app thing, right there, so that there's no one app that you go to, or one database that you go to, that has the final copy. It only has the one that's relevant to that service, right? And maybe that's something that we might want to carry as a five-year plan, as opposed to, like... yeah, high availability is a really good idea, and it does provide you that transitional space for your outage infrastructure. But I think, as we're moving things to the cloud, designing for failure kind of becomes a mandate, right?
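In OpenShift/Kubernetes terms, the no-outage, rolling-update idea above usually comes down to running multiple replicas with health probes, so the platform can kill and replace instances without users noticing; a generic sketch with placeholder names and paths, not any real Fedora service:

```yaml
# Generic Deployment sketch: several replicas plus health probes let the
# platform restart instances and roll updates without a visible outage.
# Image, names, ports, and probe paths are all placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: quay.io/example/myapp:latest
          ports:
            - containerPort: 8080
          livenessProbe:        # restart the container if it hangs
            httpGet:
              path: /healthz
              port: 8080
          readinessProbe:       # withhold traffic until it is ready
            httpGet:
              path: /healthz
              port: 8080
```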
And so we want everything to be able to just die and come back. And that sounds like a five-year goal to me, because there's lots of stuff that's going to be wrong, lots of stuff we're going to want, you know, council to weigh in on, and committees of all sorts, and teams to take responsibility for, in terms of the infrastructure requirements. I mean, Neil was talking about the cloud team looking at Kiwi, and that's a perfect example of that, right? That's an infrastructure change, and ultimately it has to have someone on the infrastructure team who knows what to do with it. So I mean, I think those could fit into a five-year... there could be a five-year plan for infrastructure that I, as someone who is in the cloud working group, look to and say: the way that I will design what's needed to make our edition better is according to the tenets and the five-year plan of the infrastructure team. So I don't want it to just tell me exactly how I'm supposed to do it, right? It can't be the design document for every service that doesn't exist yet. It has to be a vision document, and then eventually it will include some technical requirements, and those technical requirements we can define on the fly, right? We'll do continuous review of, like, the FAQ on how we're getting there and what component parts or what hooks we've provided, you know. Right, but I see the technical stuff as being the near term to get you to the goal, but you don't have that technical... I don't say, you know, in five years I'm going to have a self-healing infrastructure, and in four years I'm going to have blah, blah, blah. You would say: I'm going to have a self-healing infrastructure in five years, and this year I can start working on these things that will get me toward that goal.
Yeah, because in five years, maybe, you know, you did some stuff, and then next year some new technology comes along that helps you vastly, and you didn't even know that it existed, and now you can move to that. Yeah, I've never seen a five-year plan survive three years. Right. Yeah, and I think that's reasonable. Yeah. Well, when I put that topic there, five years... if you look at the rest of it, it's like: is five years realistic, is three years realistic, is one year realistic? It just depends on what level of technical detail versus vision. Yeah. You know, if you said, oh, I want a 20-year plan, and the 20-year plan is that I have a Fedora implant in my brain and I can just... Yeah. I think it's okay. I know this sounds stupid, but it's okay to change your five-year plan every three years. It's to provide you direction toward a goal, rather than to say I'm definitely going to do this. You know, in that open source way, in our open operations model, right, there are lots of things that help us to make those modifications as we come along, right? We come across them, right? And that's just like management 101 in the open organization. But then what I think we can really do is help to trend-set that in ways that make it possible for people to dream up new models. And Leonardo's right. It's a vision. I mean, you can't write a five-year design document; it just can't be done. And we definitely need to be aware of the Fedora 100 problem. Yeah. Yeah. That is a lovely sentiment. We just have to say that I want to be around. Yeah, I want to be around for the Fedora midlife crisis. Will you rename it Fedora New and start again at one? Fedora red Corvette. Oh, it has to be a blue Corvette. Okay. Neckbeard. Leonardo, that's the one we really want: Fedora Neckbeard. All right, so we've got about 15 minutes left.
If anyone has a final topic... Did we miss anything we wanted to go over again? I think so. I can talk for a few minutes here about the standardization stuff we were talking about earlier. So, it first came to mind in the context of OpenShift. You know how OpenShift is a kind of opinionated Kubernetes distribution, right? It gives you a framework, and you know that if you're deploying something on OpenShift, all the OpenShifts are deployed the same way; it's like this everywhere; you can count on these things. And that's great. But as I pointed out, there's lots of things that you can do differently, like how you create your images, and whether you continuously deploy, or do layered images, or whatever. So that was the first context where I was thinking of standardization, but it really applies to a lot more than that. Right now, one of the other problems we have with all the apps is that they were all designed by small groups of people at some point in our history. So if you look at an app like Nuancier, which is the application for voting on wallpapers if you're not familiar with it, that was built by a small group of people many years ago. If I look at this app, I have no idea how to do a release of it. Is it just tagging something, creating a tarball, and putting that over as an archive, or is there more to it? I mean, the people who did the releases are long gone. So having standardization at that level, like: here's how you release an app, or here's how you should release an app, if you're doing the standardization. That's another place where things could be standardized. Another place where things could be standardized is our Ansible playbooks, which have been written by, you know, 50 people over the last 20 years. And so some people had a style where they did this, and some people had a style where they did that.
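To make the style point concrete: the same task can legitimately be written in very different ways, and a style standard would settle on one. This is a generic illustration, not taken from the actual playbooks:

```yaml
# The same task in two valid Ansible styles; a standard would pick one.
# Terse, inline key=value style:
- copy: src=motd dest=/etc/motd owner=root mode=0644

# Explicit, fully-qualified, named style:
- name: Install the message of the day
  ansible.builtin.copy:
    src: motd
    dest: /etc/motd
    owner: root
    mode: "0644"
```

Both are syntactically correct and do the same thing, which is exactly why a repo written by 50 people ends up with both.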
And so it's very uneven; there's no standard indentation, no standard style of how you write your plays or anything like that. And, you know, it all works, that's fine, and it's all syntactically correct, but having a consistent style helps people onboard easier and know how they're supposed to write stuff and so forth. So those are kind of the areas I was thinking of for standardization, and I'm sure there are probably a bunch more. Yeah, packaging is an interesting question that actually came up on the infrastructure list fairly recently. In the past, the way we deployed applications was: we wrote an application, it had a release, we packaged it as an RPM, that RPM was built and sent as an update to EPEL or Fedora, whatever, and that RPM was actually the thing that we deployed. However, now in the OpenShift world, that's not actually the case for all our applications. We have applications now that don't deploy that way. So let me think of an example. Noggin, our account system, is actually deployed by Git branch. Noggin has a production and a staging branch upstream, as well as the regular development branch. And when they want to do a deployment, they push commits to staging, and OpenShift sees that, builds an image, deploys it, they can test it, it's fine. They push a commit to production; it just deploys it. So that is actually not going through the RPM thing. And in some senses, that makes it easier, because then the upstream folks don't need to worry about packaging. They don't need to wait for RPMs. They don't need to have delays, et cetera. But other things are lost, because then you don't have an easy way for other people to use it, or at least easily use it. You don't have Bugzilla bugs.
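The branch-push flow described there is roughly what an OpenShift BuildConfig with a webhook trigger gives you; this is a generic sketch with placeholder names, repo, and secret, not Noggin's actual configuration:

```yaml
# Illustrative BuildConfig: a push to the "staging" branch triggers an
# image build, and a deployment can then roll out myapp:staging.
# Repo URL, names, and the webhook secret are placeholders.
apiVersion: build.openshift.io/v1
kind: BuildConfig
metadata:
  name: myapp-staging
spec:
  source:
    type: Git
    git:
      uri: https://github.com/example/myapp
      ref: staging          # only this branch triggers this build
  strategy:
    type: Docker
    dockerStrategy:
      dockerfilePath: Dockerfile
  output:
    to:
      kind: ImageStreamTag
      name: myapp:staging
  triggers:
    - type: GitHub          # webhook fired by the forge on push
      github:
        secretReference:
          name: myapp-webhook-secret
```

A second BuildConfig pointed at the production branch would give the matching push-to-production path.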
You don't have all the infrastructure around Fedora packages, where things are built, checked to make sure they install, dependencies integrated with the rest of the system, et cetera, et cetera. So that was a discussion on the infrastructure list: should we mandate one or another of these approaches? And it's really difficult to say, because the new style of thing is: deploy it from Git, install your dependencies, and you're just building on a layer of, like, Fedora 36 or something like that. So it's a complicated question. I'd love to hear some input from folks. I haven't been following the chat. I think I'm in the minority here who thinks we shouldn't package everything. I think it's a blocker to some people; they're not used to packaging, and there's a learning curve to it. If I can just have a Git repo that automatically gets pushed to OpenShift every time I want to cut a release, it invites more collaboration. People don't have to worry about the packaging side of it at all. Yeah... you're muted there, David. Thanks. Yeah, I think there are definitely some exceptions, you know, very clear exceptions. And there's a lot of romance, for sure, around GitHub Actions or whatever forge's actions, so I know you're not going to get away from it. But I think in general, packaging would be the best practice, and then the exception process would be some sort of a review and move forward. Because we don't want whatever's running to get so far out of alignment that we're suddenly stuck on Fedora 36 images or containers, right? And then they can't move forward, and we're suddenly stuck six months later with nothing we can support. One of the things that makes this difficult in my mind on the packaging side, and Noggin does this: it has... I forget the name of the app.
There's a GitHub app that basically checks all your dependencies. And then when one of your dependencies updates, it says: this dependency just updated, I ran CI on it, it's good, it works. But what it does is it basically says: I need this version, I need that version right there. Which makes it really, really difficult to package, because Fedora has that version over there. And does it work with that version over there? Well, maybe. But the application itself is saying: I tested with that right there. So it makes it difficult to package as an RPM, unless the application is open enough to a range of versions that are available in the distribution. Right, you could get to a case where, you know, upstream moved to this new thing, Fedora hasn't moved to it yet, and you can't install that version anymore. I think poetry is the name of that. You know, but I think one of my biggest frustrations is that there is such an easy willingness to do that, and I see what that looks like... I mean, I see it every day, right, in my day job. And one of the things that I see is that there's this willingness to forfeit things that make for compatibility. Like, if you go back and look at the AWS C libraries, there are no shared object names. Right. And they're like, well, you're the tip of the spear, that's what you do, right? And that's not helpful for long-term support. You know, I mean, so... and we have that. Maybe we don't; maybe we're always forward-thinking on the one hand. On the other hand, we do still have to consider: what are the requirements of long-term support, and how do we make sure that that's consistent? If we're going to make this a goal, you know, that we're making this a relevant experience on a supported operating system.
You know, ultimately, you're saying: you can use what we do to build enterprise software. And if what we provide is some way to leave behind anything that was supportable... Another thing that started this whole thought process for me on the standardization stuff was that we have some applications... So picture an application. It gets deployed; it's deployed in a Fedora 36 container on OpenShift. And the application's there, it's working, everything's cool. And there are no real changes to the application; there are a few bug fixes, but they haven't done a release yet. It's nothing major. Weeks go by, months go by, a year goes by. That application is still running the Fedora 36 container that was originally deployed. There's nothing that made it deploy a newer version or anything. So that's another thing that we're really bad about right now in our OpenShift. Things that deploy rapidly or often get updated containers, but things that don't, or are stable, don't get that updated container. And that's bad. My infrastructure brain does not like that. I want the update. I want it all to keep working, but I want the latest version to be there. So that's another thing that we need to look at doing. I think that maybe the possibility here is not to have a mandatory requirement that for every packaged app there is an RPM or whatever. But we could ask: what's the goal behind it? The goal behind it is to know how to build this thing from the sources, and not to have these black-box containers where no one knows what's inside. And so I would say: the easy way to make your application compatible is to make an RPM out of it and show that it's possible, and then you're done.
But if you're unable to do that, then you enter the next challenge, which is: describe the build process, CI process, and deployment process of your application in such a way that it doesn't look like a black box. And then the infra team would be reviewing that manual and saying, okay, we don't like this level of detail, go deeper, and so on. So we would be saying: if you do an RPM, we don't even look into it; we say all good, go, go. But if you're not able to do that, then you take the hard way, and then we have to deal with that. So we're not saying no; we're saying this is the easy way and this is the hard one. I think Leo kind of brings up a point in the chat there, something I was thinking myself: app developers generally don't care about the platform. That's probably a bit of a generalization, but you get what I'm saying. They want their app to work, and it's infrastructure's problem to supply what they need. So I think there's a bit of a decision point where we have to kind of say: are we going to support them? Are we going to force them to support us? Or is there give and take? I would say this is where we should, again, show the leadership role. Indeed, in many startup kinds of companies where DevOps is a trend, they don't care where the code comes from; as long as it can be packed in a container, deployed, and it runs, then no one looks inside and everyone is happy. But actually, it's not a modern solution, it's a modern problem. Everyone who does it recognizes that it's not a good way of doing things. That's how startups live, but no one wants to live this way after one year. And in our case, I don't want to fight against it completely, but I want us to also push for what's the good thing here, why we're doing it, and to show people that there are reasons and so on. So I wouldn't treat "developers don't care" as a given, as something we cannot change.
I would say this happens, but our goal should also be to change this trend and to explain; to not just be Fedora infra, but also be teaching people about good practices. I think some of this happens also when apps first land, right, because then the app comes in and we say: well, okay, here's your app. Can you tell us, how does this work? How do I deploy this, et cetera? So I think we are out of time, but if you have any last thoughts, please go ahead. Just thank you for organizing this. This was a really interesting session. Yeah, thanks for proposing this, Kevin. I enjoyed it. Yeah, it was fun. We should do it more often. Yeah. All right. Well, thank you, everyone. Yeah. See you. Thanks to everybody in the chat. Enjoy Nest.