I think I need to adjust my talk a bit based on the audience. So, as you know, we do nightly composes of Rawhide, branched, and other things; the composes build the repos, which then get published out. The tool that does the compose is Pungi, but what we are missing is tracking of these composes: whether a compose failed, where it failed, how it failed, who is responsible for fixing it, and how long it has been failing. Release engineering has a lot on their plate; they can only concentrate on the release-blocking artifact failures. They can look at the non-blocking artifacts, but they might not have time to fix them.

That's where we are introducing the Fedora Compose Tracker. It will track each and every compose run, ping people if a failure is related to their image, and do some tracking. From that we can build a policy where we let people know: hey, your image has been failing for the last month, and if you don't fix it, in another month we will remove it from Fedora, something like that. That policy is not set up yet, but maybe we can get there.

So the agenda is: the current pain points and why we wanted to introduce this; the actual Compose Tracker, the URLs and what it does; where we are right now in the development; and the future features we want to add.

Pain points. As I said, Pungi does the composes and nothing tracks them afterwards. Since there is no tracking, there are a lot of people trying to fix the composes, but there is no communication between them. If I see a failure today and go try to fix it, probably somebody has already fixed it and started a new compose. So there is no real communication between the people, and we wanted a platform where everyone working on composes can talk to each other and see where things stand.

The logs in Pungi are also very hard to search: a log is probably a few thousand lines, and you really need to know where to look. Pungi dumps logs in multiple places, and you need to understand where to find them. People who maintain spins and labs especially don't know where to check. They do understand the failure if you give them the failed Koji task, so they can look at it and fix things, but finding that task is the bigger problem for them.

The other thing is no statistics: when did the compose fail, how long has it been failing, has it been failing for the same reason for the last few days? We don't have any statistics for this, and we want to add them so that we can build a policy around it.

And then there is no proper way of communicating these failures to the special interest groups. There were instances where some images were skipped from the RC composes, and the SIGs came back to release engineering asking: hey, we don't see our image, what happened? Since it was not a release-blocking artifact, we never blocked the release, but it was a pain for release engineering to rerun those images and publish them somewhere else, because they were not part of the original compose; we were just making unofficial images. So it's a pain.

The Compose Tracker was actually started by Dusty Mabe.
Release engineering took that idea and started exploring more options around it. What it does is listen to the Fedora messages that release engineering sends out and check whether a compose ended with anything other than FINISHED. There are three statuses for composes. One is FINISHED, which is great: everything is fine. The other two are FINISHED_INCOMPLETE and DOOMED. DOOMED means it is a complete failure, whereas FINISHED_INCOMPLETE means it sort of finished, but it didn't complete the generation of some images, probably ones that are not release blocking. Those are the kinds of things it will track, because, for example, among the spins, except for KDE, no other spin is release blocking. So when we run the compose, if, let's say, the Cinnamon spin failed, we will still publish the compose, because Cinnamon is not release blocking. We want to track all of those things as well.

The source code is currently available on Pagure, at pagure.io/releng/compose-tracker, and where we file the tickets is the second URL, under releng/failed-composes. Currently it is deployed in staging, and it is tracking the real composes, but since it is still in staging we are not using it. At the same time, it is also deployed on Dusty's OpenShift instance, where the idea started before we began moving it into releng, and people were already using that. So we thought, okay, we will use the instance that is already running, and when we move to prod, we will use the releng-maintained prod version. So currently we are tracking all these things at this URL, which you probably cannot read: pagure.io/dusty/failed-composes.

So the current situation is, as I said, it is only running in staging, and it is doing its thing, but we are not using it because we are just testing. It tracks the composes and files tickets for all the failed tasks and so on, with a link to the Koji task, which is really helpful because people can go into the failed task, look at it, and probably fix the issue. It also adds a basic tail of the compose log, just a couple of lines from the end. That is all it is doing now.

For the future, Tomáš Hrčka, and I am probably pronouncing his name wrong, is working on adding more features. Tomáš and I are working on this together, and he is picking up more of it. The idea is to ping the SIG maintainers: in the earlier example, if Cinnamon failed, ping the Cinnamon maintainer and say, hey, it has been failing, check what is going on. We also want to skip filing some of the tickets, because last night's compose probably failed for the same reason as tonight's compose, and we don't want to keep adding tickets and pinging people every single night that their image creation failed. So we want to avoid creating more tickets while still pinging people in the same ticket we created earlier. We also want to add some ticket labeling so that it is easier to figure out which compose failed: whether it is a Bodhi compose that was part of the updates, or a nightly compose, which could be a Rawhide compose, a branched compose, or a container compose.
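To make that concrete, here is a minimal sketch of the idea in Python. It is not the actual compose-tracker code: the pungi.compose.status.change topic and the status/compose_id/location body fields are what Pungi's status messages are generally expected to carry, and the PAGURE_TOKEN environment variable, the compose_label grouping, and the title-matching de-duplication are assumptions made purely for illustration.

```python
# Minimal sketch of the compose-tracker idea (not the real code):
# listen for Pungi compose status messages and, when a compose ends as
# anything other than FINISHED, file a ticket in the failed-composes
# repo, or comment on an already-open ticket to avoid duplicates.
import os
import requests
from fedora_messaging import api

PAGURE_REPO = "releng/failed-composes"
PAGURE_API = f"https://pagure.io/api/0/{PAGURE_REPO}"
HEADERS = {"Authorization": f"token {os.environ.get('PAGURE_TOKEN', '')}"}
GOOD = "FINISHED"  # FINISHED_INCOMPLETE and DOOMED both get tracked


def compose_label(compose_id):
    """Crude grouping key: drop the trailing date/respin part, e.g.
    'Fedora-Rawhide-20200101.n.0' -> 'Fedora-Rawhide'."""
    return compose_id.rsplit("-", 1)[0]


def existing_ticket(label):
    """Return the id of an open issue whose title mentions this label, if any."""
    resp = requests.get(f"{PAGURE_API}/issues", params={"status": "Open"},
                        headers=HEADERS, timeout=30)
    resp.raise_for_status()
    for issue in resp.json().get("issues", []):
        if label in issue["title"]:
            return issue["id"]
    return None


def report(compose_id, status, location):
    """File a new ticket, or comment on the one that is already open."""
    text = f"Compose {compose_id} ended as {status}.\nLogs: {location}"
    label = compose_label(compose_id)
    issue_id = existing_ticket(label)
    if issue_id is None:
        resp = requests.post(f"{PAGURE_API}/new_issue", headers=HEADERS,
                             data={"title": f"{label} compose {status}",
                                   "issue_content": text}, timeout=30)
    else:
        resp = requests.post(f"{PAGURE_API}/issue/{issue_id}/comment",
                             headers=HEADERS, data={"comment": text},
                             timeout=30)
    resp.raise_for_status()


def on_message(message):
    """fedora-messaging callback: react only to composes that did not finish."""
    if message.topic.endswith("pungi.compose.status.change"):
        info = message.body
        if info.get("status") != GOOD:
            report(info.get("compose_id"), info.get("status"),
                   info.get("location"))


if __name__ == "__main__":
    api.consume(on_message)  # blocks; queue and bindings come from the config
```

The real service also pulls the failed Koji task links and a short tail of the compose log into the ticket body, which this sketch leaves out.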
That makes it easier to figure things out when we look at them. We also want to add some time tracking: how much time each task has taken (there is a small sketch of this idea at the end). To give an example, a few days back there was an issue with the s390x image builders because they were not using KVM. When we moved them to KVM, we saw a big improvement in the compose; it was taking about 30% less time. Sometimes it is worth tracking those times so that we know which image is taking a long time, and if something is going on, probably a builder issue or something, we can go and figure out what is causing it. Before we fixed some of those things on s390x and ppc64le, a compose used to take an average of 8 to 10 hours, and people really didn't know which part was taking so long. So we thought of adding those statistics as well. Then add some information about the failures, like how long something has been failing and why, whether it is the same issue, and use those statistics to come up with a policy.

And this one is in brackets because I don't think we will ever get to it, but it would definitely be useful: trying to make it smart enough to understand whether a failure is something that happened before. Then when a new person comes onto the team, they can look at the previous occurrence, follow the link, and say: oh, this happened before, let's see what people did to fix it, was it happening constantly? So some sort of smartness like that, but I don't know how difficult it will be. We'll see. And I guess that's it. Questions?

From this website? Sure. Do you want to see the staging one or the one we are using on Dusty's instance? Oops. Okay, releng failed-composes. I'm not sure if I'm typing it right, but okay, I think I got it right. So it looks something like this: if you go into a ticket, it has the link to the failed task with some basic tail messages from it, and you can get to the failed task. It also links you to the global log, but if you look at that, it is long and hard to debug to figure out everything that failed. Yeah. Sorry? Yeah, the ticket is filed here. Yes. Again, it's the staging instance. Once we go through the security audit, because we deployed this on our infra OpenShift instance, it has to go through a security audit, and once that is done, we will deploy it to prod.

Any more questions? But do you think there is any advantage in tracking them? Yes. Probably we won't... but then they would probably need some different interface, because it doesn't make sense to... Yeah, yeah, yeah. Everything worked, but at the same time it took a gazillion minutes to complete. That is something to think about. What we can do is: as part of the compose, we send out an email to the releng-cron list, so probably we can add some of those checks there and send an email to the list with all the tasks and how much time each has taken, probably after we sync the compose. Otherwise, I don't know how long it would take to go back and check all these statistics, and if it takes longer to sync the compose to the mirrors, people will be mad. So probably we can do something like that. It's fedora-messaging? Yes. No, we moved to fedora-messaging. So the message schemas are available in this repository? Message what? Message schemas, like... No, not in this one.
But we are only looking at the compose messages. So... you want to look at the schema of that message? Yeah. No, this is not it; it is just a consumer. Yeah. I mean, probably... I still haven't figured out how to write that. No, the thing is, he asked whether this service emits any messages. It doesn't. There might be some Pagure messages that get sent out automatically for every issue you create and things like that; it probably does that, but not intentionally. Yeah. Sure, that's not a bad idea. And currently there is a website running that tracks the composes, whether they failed or not. I don't think I will take that one, because it is written in Haskell or something. Yes, we can try something in Python, definitely not in Haskell; I have no freaking idea how to do that. There are people who have their own websites tracking composes, but only at a high level, whether it composed or not, stuff like that. Probably we can add that as well, have some graphical representation of all the statistics, and probably give it to Matt so he can use it at the next Flock. Any more questions? While you're there, can you look at today's compose that just failed? Sure. I believe, strongly, that it was due to some problems. gettext? Yeah, yeah, it's the gettext one, I think. So Miro picked up the gettext issue and added the last build, so it should be fixed any time soon. Okay. Yes. So, yes, we also dealt with a bunch of stuff today that has been failing to build from source for the last six months. Yeah. So, yeah, a lot of issues. That was worse than I expected, actually. Yeah. Yeah. Any other questions? Okay. That's it. Thank you.
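As a footnote to the time-tracking idea mentioned earlier (the sketch referenced above): per-task durations could be pulled from Koji so that unusually slow image builds, like the pre-KVM s390x ones, stand out. The create_ts/completion_ts fields are what Koji task info is generally expected to carry, and the idea of feeding in task IDs collected from a compose is an assumption for illustration, not compose-tracker code.

```python
# Sketch of the time-tracking idea: given the Koji task IDs a compose
# spawned, report how long each one took so slow image builds stand out.
# Illustration only; not part of compose-tracker.
import koji

HUB = "https://koji.fedoraproject.org/kojihub"


def task_durations(task_ids):
    """Return {task_id: duration_in_hours} for tasks that have finished."""
    session = koji.ClientSession(HUB)
    durations = {}
    for task_id in task_ids:
        info = session.getTaskInfo(task_id)
        if info and info.get("completion_ts") and info.get("create_ts"):
            durations[task_id] = (info["completion_ts"] - info["create_ts"]) / 3600.0
    return durations


if __name__ == "__main__":
    # Hypothetical task IDs gathered from a compose's metadata.
    for tid, hours in sorted(task_durations([12345, 12346]).items(),
                             key=lambda kv: kv[1], reverse=True):
        print(f"task {tid}: {hours:.1f} h")
```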