I'm Adam Williamson. I work on the Fedora QA team at Red Hat, which I've been doing for the last ten years, for my sins. I'm kind of surprised to see so many people at this talk, but it's great. It's not going to be super exciting: it's mainly just a dump of information about a bunch of things we have running in infrastructure that do useful tasks that not many people know about. I was going to write this up for the rest of the members of my team, so that at least someone other than me would know about them, and then I thought, hey, if I'm going to do that, I may as well turn it into a presentation. So here we go.

These are the robots that I'm going to cover. The ones I'm mainly going to cover are one through four. I'm going to briefly touch on the last two, but this talk is not really focused on them, because they are independent systems that have a lot of documentation. They're very big, they have whole teams behind them, and you can go and see talks on them. So this is more about the smaller things that people maybe don't know about. I'm also going to talk about some things that are not quite robots — I wouldn't put a "yet" on the end, but there wasn't room — which are things we have that are sort of partly automated or not really automated, that I really wish were automated, but we just haven't done them yet, and I'm going to explain why.

So, a bit of basic background before we get going. I'm calling these things robots because it's a cool name, but what they really are is fedmsg consumers. Who is familiar with fedmsg? Hands? About a third to a half, so maybe I'll go over it. fedmsg is, as it says here, a Fedora-wide message bus. The idea is that pretty much anything in the Fedora project, when it does a thing, can send out a message, and then other things can listen for that message. So for instance, if someone files a bug, there is a fedmsg. If a compose finishes, there is a fedmsg. If someone edits the wiki, there is a fedmsg. The message is just a little bundle of data that says "this thing happened", plus some more information about the thing that happened. It's supposed to be replaced soon by a thing called fedora-messaging. Do we have any fedmsg people in the house? No? There's going to be a talk about that at this conference too, I believe, so you can go and see that if you're interested. Apparently it's all going to be majorly backwards compatible and, by magic, we're not going to have to do anything, so we'll see how that works out.

The things that I'm going to be talking about in this talk, the things that I call robots, are written to a pattern which is called fedmsg hub consumers, which I discovered when I needed to write the first of these things. Once you get the pattern, it's actually quite easy and quite short to write one of these things which, whenever a fedmsg of a certain type shows up, does something. So I just adopted this pattern for a whole bunch of different things, and all of them look quite similar to each other, because I wrote most of them and they all kind of work the same way. Taskotron and the CI pipeline — I haven't really introduced those yet — both basically do the same thing, but they do it a little differently. Taskotron has a much bigger framework called Triggers, which kind of wraps up fedmsg and abstracts it a bit, and the CI pipeline sort of consumes it through Jenkins, I believe. So neither looks exactly like a fedmsg hub consumer, but basically they consume the fedmsgs and do things when they see them.
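Just to make the pattern concrete, here's a minimal sketch of what one of these fedmsg hub consumers looks like — this is not any of the real robots, just an illustration assuming the fedmsg library, with an invented class name, topic, and config key:

```python
# A minimal sketch of the fedmsg hub consumer pattern, assuming the
# fedmsg library. The class name, topic, and config key are invented
# for illustration; this is not any of the real robots.
import fedmsg.consumers


class ExampleConsumer(fedmsg.consumers.FedmsgConsumer):
    # Only messages whose topic matches this are delivered to us.
    topic = "org.fedoraproject.prod.pungi.compose.status.change"
    # The hub only enables this consumer if this key is true in its config.
    config_key = "exampleconsumer.enabled"

    def consume(self, message):
        # 'message' is a dict; the fedmsg payload lives under body/msg.
        msg = message["body"]["msg"]
        if msg.get("status") in ("FINISHED", "FINISHED_INCOMPLETE"):
            # This is where a robot would do its thing.
            print("compose {0} completed".format(msg.get("compose_id")))
```

The hub feeds every matching message to consume(), and all the robots below are basically variations on what goes in that method.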
Another couple of things I'm going to mention. Quite a lot of these robots use a thing called fedfind, which is a Python library slash CLI that I wrote back in the ancient days when we didn't have compose metadata. In order to find a compose and work with it, you had to know a bunch of magic information about where it would be, basically, and fedfind was all that magic information wrapped up together so that you didn't have to do this manually. I was hoping that when we had compose metadata, fedfind would go away, but in the sad way of the modern world it kind of didn't. I've managed to throw bits of it out, but other bits of it are still useful. You know, you don't want every application you write to re-implement "okay, let's go and download the compose JSON, let's read the JSON, and let's do some very standard things with it", so fedfind has kind of become a library for doing that. It also has helpers for doing stuff like "what is the current release of Fedora?" — this is surprisingly difficult to do programmatically; I mean, it's not really difficult, but it's harder than you think it would be — so it has a helper for that, which I use all over the place. Parsing compose IDs: it has a massively over-engineered helper for parsing compose IDs. It's just a bunch of little things that I wind up needing quite a lot. There's a little sketch of what using it looks like below.

Oh, by the way, all of these light blue things here are hyperlinks. So if you download the slides for this presentation, which will be up on the Sched site, you can click on these and they will take you to the pages for all of these things, where you can learn more about them.
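As promised, a tiny sketch of the kind of thing fedfind gets used for — the calls are close to the real API, but treat the exact names and arguments as from memory rather than gospel:

```python
# Illustrative fedfind usage; exact attribute names may differ slightly.
import fedfind.helpers
import fedfind.release

# "What is the current release of Fedora?" - the helper I use everywhere.
current = fedfind.helpers.get_current_release()

# The massively over-engineered compose ID parser.
parsed = fedfind.helpers.parse_cid("Fedora-29-20181011.0", dic=True)

# Find a compose and ask it about its images, rather than re-implementing
# "download the compose JSON and read it" in every application.
rel = fedfind.release.get_release(cid="Fedora-29-20181011.0")
for img in rel.all_images:
    print(img)
```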
Quite a few of these robots use a thing called python-wikitcms, which I once did an entire talk on, and it's very crazy. Basically, we store a lot of test pages in the wiki for Fedora, which is crazy, but it's what we do. And sometimes we need to interact with those: we need to create them all, we need to edit them, we need to do things — like I told you, it gets really crazy. python-wikitcms is a library that does this. If you follow those links, Wikitcms with the capital W is what I call this whole crazy system, and there's a wiki page that tells you all about it. Yeah, it's pretty hairy.

resultsdb_conventions is basically a helper for submitting results to ResultsDB. ResultsDB is kind of the standard for storing test results in Fedora and Red Hat, which is kind of neat, and this is just a way of submitting results to it. My goal was to sort of have this library define conventional ways to store things in ResultsDB, so various different systems would store results in the same format. Right now two things use it, so it's technically a standard. It's used by a couple of these consumers, so I thought I'd mention it.

So, moving along, let's go through each of these robots. All of these things have documentation, and you can follow these links and see it, but the thing that was missing is that there's no overview anywhere that says "these are the things that exist". You'd have to sort of go poking around the Fedora QA Pagure projects and go, hey, what's that? What's that? What's that? So the idea is just to say: hey, these are the things that exist. They're there whirring away in the background doing useful stuff that you may not know about.

The OpenQA scheduler is kind of the big one. We have these various automated test systems in Fedora: we have Taskotron, we have the pipeline, and we also have OpenQA, which I'm heavily involved in the maintenance of. It's a graphical automated test system for Fedora. It was written by the SUSE folks and we kind of collaborate with SUSE on it. It does a bunch of tests on composes and updates, and it's really useful, but it's not the focus of this talk, so that's kind of all you need to know. The fun thing about OpenQA is that it doesn't do scheduling or anything like that. It gives you an endpoint that you can touch and say, hey, run some tests, but it's your job, as the person maintaining the OpenQA deployment, to actually run the tests on something when it shows up, right? So the project we have is called fedora_openqa, and that's kind of the wrapper around OpenQA for Fedora that runs tests and so on. Part of that is a set of these fedmsg consumers, and the first one is the OpenQA scheduler.

Basically, what this does is listen out for fedmsgs from the compose system, Pungi, which will send out a message any time a compose finishes, whether it finished successfully or didn't. This consumer listens to those messages, and if it sees a finished or finished-incomplete compose, it says, okay, that's a compose that actually worked, I'm going to test it. It then uses fedfind to find out various things about the compose, like which images it has that can actually be tested and where those images are, and then it goes and touches the OpenQA API endpoint and says, hey, look, there's a compose, I want you to test these images — and off go those tests. So this is all completely automated. Before we had this, we used to have a cron job which just ran every hour and said, "I wonder if there's a new compose?" This is much better.

The other thing is that, obviously, we test updates in OpenQA as well. So when you create an update in Bodhi as a packager, again, a fedmsg gets sent out, and when you edit an update in Bodhi as a packager, a fedmsg gets sent out. The scheduler listens to those messages, and it then does a bit of filtering, because we don't have enough resources in OpenQA to test absolutely every update that goes out. So it says, hey, are we going to test this update or not? If it decides we're going to test it, again, it finds out some information about the update, and then it goes off and tells OpenQA: run these tests on this update. It uses a thing called the OpenQA Python client, which is exactly what it sounds like — just a little wrapper library for talking to the OpenQA API in Python, which I maintain. I didn't hyperlink that, but I probably should. You can Google it.

So this is part of a project called fedora_openqa, and the first link takes you to the Pagure repo for that project. The second link is the OpenQA dispatcher Ansible role. In Fedora, all of these things are deployed within Fedora infrastructure, and Fedora infrastructure is maintained by Ansible: there's a big repo which contains a whole bunch of Ansible playbooks and inventory files and host variables and all of that stuff, and all of these things are deployed through that. So this just links you to the actual role that deploys this robot, in case you want to know how it actually gets deployed and you need to maintain it — that's where you go to change it. Also, all of these robots are deployed on the OpenQA server machines themselves right now. That obviously makes sense for the OpenQA scheduler and the next thing; it doesn't necessarily make sense for some of the others, but it's a machine that can run them, and they have root access on it, so that's why they run there.
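For flavor, here's roughly what the "go and touch the OpenQA API endpoint" step looks like with the OpenQA Python client. The parameter values here are invented examples; the real scheduler derives them from fedfind:

```python
# A hedged sketch of scheduling compose tests via the OpenQA API,
# using the openqa-python-client. All the parameter values are examples.
from openqa_client.client import OpenQA_Client

client = OpenQA_Client(server="openqa.fedoraproject.org")
# POSTing to the 'isos' endpoint asks OpenQA to schedule whatever jobs
# its job templates define for this medium.
client.openqa_request("POST", "isos", {
    "DISTRI": "fedora",
    "VERSION": "29",
    "FLAVOR": "Server-dvd-iso",
    "ARCH": "x86_64",
    "ISO": "Fedora-Server-dvd-x86_64-29-20181011.0.iso",
    "BUILD": "Fedora-29-20181011.0",
})
```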
Also part of fedora_openqa, and part of the same role, are the reporter consumers. The other thing that OpenQA doesn't do is send the results anywhere else when the tests are finished. It stores them itself, and for SUSE it's the one true test system, so they're fine with that — but we actually want to send those results somewhere else. So Fedora's OpenQA sends out fedmsgs as well; that's a plugin for OpenQA. Whenever an OpenQA test finishes, a fedmsg goes out that says, hey, this test finished. So we have consumers which listen out for tests finishing, and a simple one takes that test result and forwards it to ResultsDB. That one is pretty easy, because ResultsDB is basically a key-value store, so all we have to do is consume the OpenQA result, do a little bit of munging on it to put it in the format we want, and send it to ResultsDB. That part of it uses resultsdb_conventions, obviously. (There's a rough sketch of the idea below.)

The much more fun one reports the results to the wiki. What this means is that it edits a wiki page — and just so you guys know what I'm talking about here, let me see if I can get a browser up there. Yeah, I can. I'll just have a look. In case you've never seen one of these before, these are the pages I'm talking about. We're actually going to talk about this a bit later as well, but we have these big tables of test cases and results. All of the results you can see here from coconut are actually results from tests that ran in OpenQA and then got forwarded to the wiki. So what this robot does is basically edit this wiki page and stuff that result into that table. All of the craziness for doing that is in python-wikitcms. And it's super fun, because this page is also human-editable — the results that are not from coconut are done by humans — so it has to be able to deal with human vagueness in editing wiki pages. That's super fun, but it works much better than you would expect, believe it or not. So we report the results to ResultsDB; the results we report to the wiki are results from the compose tests, and it does a bunch of parsing and says, hey, this was a result from this OpenQA test, that matched to this result on the wiki, I'm going to file it, and blah, blah, blah. So yeah, that's another thing these consumers, these robots, are doing.

Next one: check-compose. If you're subscribed to the test or devel mailing lists, you may have seen the mails that come out every time a compose happens, with subjects like "compose check report". They say something like: here are all the tests that passed, here are all the tests that failed, and there's some extra data about things like memory consumption changes since the last compose. That's all done by check-compose. check-compose itself is a utility for generating that report for an arbitrary compose. The robot listens out for OpenQA tests finishing, and when it detects that all the tests for a compose are finished — which is a fun hack — it runs check-compose on that compose and sends the result out to the mailing list. So again, if you notice that these mails are mysteriously not showing up for a few weeks, this is the robot you might want to go and check and see if it's working. The consumer is part of the check-compose project, so the repository contains check-compose and then a consumer file as well. There's an Ansible role called check-compose, which just deploys the fedmsg consumer and the script itself, and it's deployed on OpenQA because, hey, why not?
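Here's that rough sketch of the ResultsDB forwarding idea. The real robot goes through resultsdb_conventions; this sketch uses the lower-level resultsdb_api library directly, and the field names, job dict shape, and URL are all illustrative:

```python
# A loose sketch of forwarding a finished OpenQA job to ResultsDB.
# The real thing uses resultsdb_conventions; field names here are
# illustrative, and the ResultsDB URL is hypothetical.
from resultsdb_api import ResultsDBapi

api = ResultsDBapi("https://resultsdb.example.org/api/v2.0")


def forward(job):
    """Munge a finished OpenQA job dict into a ResultsDB result."""
    # OpenQA says 'passed'/'failed'; ResultsDB wants 'PASSED'/'FAILED'.
    outcome = "PASSED" if job["result"] == "passed" else "FAILED"
    api.create_result(
        outcome=outcome,
        testcase={"name": "compose." + job["test"]},
        ref_url="https://openqa.fedoraproject.org/tests/%s" % job["id"],
        # Extra key/value data; ResultsDB is basically a key-value store.
        item=job["settings"]["BUILD"],
        type="compose",
    )
```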
Next one. So, you know those crazy wiki pages I was telling you about? They not only get edited by the OpenQA result-reporting robot, they are created by a robot, which is called relvalconsumer, for historical reasons. Ages and ages and ages ago, these validation events were created more or less by hand. We had a page we called a template, but it wasn't actually a wiki template; it was just a wiki page that you would open, copy and paste into a text editor, and manually go through entering, you know, the compose ID that this was a test event for, and then you'd paste it back into the wiki. And because there were four or five of these pages, it would take you about half an hour every time you did it, and it was a complete nightmare.

So a few years ago I decided to make that better, and I wrote a tool for creating those pages, basically. There's a very complicated system of real wiki templates in the wiki, which absolutely no one but me understands, and to make the wiki create one of these pages, you enter a sort of magic template string. relval is a Python library that sort of does that: you tell it you want to create an event for such-and-such a compose, and it enters the magic strings into the wiki, and this causes the pages to be generated. relvalconsumer is the robot, the fedmsg consumer, which listens out for new composes showing up. It then runs through a set of heuristics. It's like: hey, have we had an event in the last three days? Then we probably don't need a new one. Have we had an event in the last 14 days? If not, we definitely need a new one. If it's between three and 14 days, it says, hey, have any interesting packages changed since the last time we had an event? And if they have, it creates a new event. (There's a sketch of this logic just below.) Creating an event effectively runs relval and tells it: hey, create a new event. relval goes and talks to the wiki templates, the magic happens, and these pages show up in the wiki. So this is also, obviously, using python-wikitcms, which is what it was originally designed for.

It also sends an announcement of the new event out to the mailing list. So if you're subscribed to test-announce — which devel is subscribed to, so if you're subscribed to devel you get these mails too — you'll have seen these mails go out saying, hey, a new release validation event was created. That's this. So if that stops happening — if the events stop showing up, or the mails stop showing up — this is the thing that probably broke down. This one has its own repository, it is its own project, relvalconsumer, and there's an Ansible role for it. And again, it's running on the OpenQA servers; that's where it runs. So if you ever wondered how those things happen, that's how these things happen.
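Condensed into code, the heuristic looks something like this — a sketch of the rules as I just described them, not the actual relvalconsumer source, with stand-in helper arguments:

```python
# A condensed sketch of the relvalconsumer scheduling heuristic; the
# real consumer is more involved, and the arguments are stand-ins for
# logic it implements itself.
from datetime import datetime, timedelta


def need_new_event(last_event_time, interesting_packages_changed):
    """Decide whether a new validation event is warranted."""
    age = datetime.utcnow() - last_event_time
    if age < timedelta(days=3):
        # We had an event in the last three days: probably don't need one.
        return False
    if age > timedelta(days=14):
        # No event in fourteen days: we definitely need a new one.
        return True
    # In between: only if interesting packages changed since last time.
    return interesting_packages_changed
```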
Is this the final robot? Wow, I'm going through this faster than I thought I would. Yay. The final one of my little robots is called Auto Cloud Reporter. We actually have one more automated test system in Fedora, which is called Auto Cloud, which we kind of wish didn't exist, but it's still rolling away somewhere. What that does is, every time it sees a new cloud image pop out of Koji, it runs some tests on it — and it doesn't do anything with the results. It keeps them itself; you can go to Auto Cloud and ask for its results, and it has a web UI, but Auto Cloud doesn't have any ability to report them to ResultsDB. And about a year ago — or two years ago, one or the other — this was kind of annoying, because we were trying to get everything into ResultsDB, and I said, well, Auto Cloud does send out fedmsgs. So I wrote this thing. It listens out for fedmsgs from Auto Cloud saying "hey, I finished the tests", and then it goes and grabs the result from Auto Cloud and forwards it to ResultsDB. Again, it uses resultsdb_conventions, so it actually files results in exactly the same form as the results from the OpenQA compose tests: if you look at the results in ResultsDB for Auto Cloud and OpenQA, they look exactly the same, because they're both going through this library. I had something to say there and I totally forgot what it was, and it's not in my notes either. Oh well, never mind. Oh — obviously, if you look at ResultsDB and Auto Cloud results are not showing up in it, this is probably the thing that broke down. Again, this is its own project, because it's not part of Auto Cloud; it's like a sort of little module hanging around it. It lives in its own repository, it has its own Ansible role, and again, it is deployed on OpenQA.

So, yeah, I did say we're not going to focus on these, but, you know, just to round out the talk, it would be weird if I didn't mention them. Taskotron was originally intended as Fedora's standard automated test framework. It runs tasks in response to events — that's basically what Taskotron does. In practice, what it's doing for us right now is running very generic package tests: things like rpmlint and rpmdeplint, tests that apply to every package. It runs them whenever a package gets built or, in the case of a couple of tests, when the updates-testing repository gets refreshed, and it stores results in ResultsDB — it's actually where ResultsDB originated. This is not like the things I've been telling you about at all: Taskotron is a whole big project that has its own identity, and it wraps fedmsg itself. So it's nothing like any of the other things, but I just wanted to mention it.

And the CI pipeline is the sort of newer standard automated test system for Fedora. What it's mainly focused on right now is running package-specific tests. So if you've seen this whole thing about putting tests in your package's git repository, and then they will magically get run — this is the thing that's doing it. It's basically based around Jenkins. There's a whole lot of other detail to it, but again, we're not really focusing on it here. Its results are stored in ResultsDB as well, although not at all in the same format as Taskotron's results, which is something we need to fix. But again, it has its own fedmsg implementation, and there's a whole bunch of documentation in the wiki — if you just look up CI in the Fedora wiki, there's a lot of documentation of this system. Or you can go and bother Dominic, who will tell you all about it.

The other thing I did want to talk about specifically in this talk are the not-quite robots. There are probably more of these; these were the only two I could think of when I was writing these slides, very tired, a couple of days ago. We have these things which are important, and again, I wanted to brain-dump and make sure people know they exist — in case I get hit by a bus, someone's going to have to do this — but they are not robots, and they are not automated.
The first is this thing called update-trackers. If you've ever dealt with the Fedora release process, you know we have this whole thing around blocker bugs, where you have to file them in Bugzilla and set them to block a particular tracker bug, and then there are meetings and tags and all sorts of magic happens. Those tracker bugs have to get created somehow. And again, up to a year and a half ago or so, that was done by hand: every time a release happened, someone — usually me — went into the wiki and copied and pasted all the text and made the Fedora 29 blockers, and it took about two hours, and it was stupid. Then I got sick of doing it, so I wrote a script to do it, called update-trackers. You just run "update-trackers 29" to say Fedora 29 just came out, and it will go and create the new tracker bugs for Fedora 29. It goes and edits the Fedora 28 tracker bugs and takes some old aliases off them and stuff. And it also edits the wiki, because I love writing things that edit wikis: there's a page in the wiki called Housekeeping, I believe, where there's actually a list of all the tracker bugs going back to, like, Fedora 10. It's a bunch of wiki tables, and I used to have to edit it by hand, transferring bits of the tables from one part of the page to another, and it was a whole mess. So the script does that for me too, so I don't have to do it manually. Basically, someone has to remember to run this script the day a new Fedora comes out, or else the tracker bugs don't get updated. We actually create them two releases in advance, so the Fedora 31 blocker trackers exist right now, because they were created the day 29 came out. It uses python-bugzilla, which is a Python library for interfacing with Bugzilla, because it needs to create the bugs, and python-wikitcms. It doesn't really need wikitcms, except that the Fedora wiki uses OpenID authentication — FAS authentication — and wikitcms is really just a wrapper around mwclient, which is a generic MediaWiki client library. The script would use mwclient directly except for the auth thing, so it just uses wikitcms, because wikitcms deals with the authentication.
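Just for illustration, the bug-creation side of that looks vaguely like this with python-bugzilla — the exact fields and the alias convention here are my guesses, not lifted from the real script:

```python
# A hypothetical sketch of the bug-creation side of update-trackers,
# using python-bugzilla. Field values and alias are illustrative guesses.
import bugzilla

# Uses cached credentials / prompts for login, as python-bugzilla does.
bzapi = bugzilla.Bugzilla("bugzilla.redhat.com")

# Create a (hypothetical) Fedora 31 Beta blocker tracker.
info = bzapi.build_createbug(
    product="Fedora",
    component="distribution",
    version="rawhide",
    summary="Fedora 31 Beta blocker bug tracker",
    alias="F31BetaBlocker",
)
bug = bzapi.createbug(info)
print("Created tracker bug #%s" % bug.id)
```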
The other one that's not automated: we have this thing called Blocker Bugs. Again, if you've ever dealt with the Fedora blocker process, you may have come across this. I'll just show this in a browser again. Come on, network. You can do it. Anyone want to sing a song while we wait? Audience participation. You may have seen this thing; it's spreading. It's basically a sort of web UI that wraps around the whole blocker thing, because there are certain things it's awkward to do in Bugzilla: you just want to see the current blockers, which is a bit annoying because you have to know the aliases or you have to have bookmarks or whatever, finding out the status of all of them. It shows updates; it does a whole bunch of convenient stuff. But if you look at this thing — where is the top bar? I want to get to the front page here, if I can. I don't know why it's not showing the top bar. Wait, is it because we just rewrote it and it's broken? Oh, this thing — I think the screen isn't wide enough. If you go to blockerbugs slash current, it redirects you to the current milestone, and there's also this list of active milestones. Obviously, every time a Beta release comes out or a Final release comes out, these need updating: the current milestone is now different, the active milestones are now different. This isn't automated. Someone just has to remember to go into the admin front end for this thing and edit it by hand, which, again, is an annoying manual task that's a pain to do. We have a ticket open to automate this, to just make it happen so we don't have to do it, and we've just never got around to doing it. It would be nice if that happened.

The thing we'd need to fully automate both of these — update-trackers and Blocker Bugs — doesn't exist. Interestingly, there's no fedmsg, there's no anything, when a new Fedora release happens. You can't be told about that in any kind of programmatic way, unless someone knows something I don't. There's no way you can set something up to happen when Fedora 30 Beta comes out, because nothing tells you that. That's the thing that's missing, that we need to get release engineering to do. If we had that, then we could automate these two things, and actually a whole bunch of other things that happen when releases come out could get automated.

To summarize, I really just wanted to put these things out there and make people aware of these little tasks that are going on in the background, and processes you might be dealing with, that get done by robots that are quiet and helpful and work nearly all the time. But sometimes they just break down — I mean, stuff happens. fedmsg stops forwarding messages. Someone invents a new compose type and it makes fedfind choke, and all of these things stop happening because fedfind starts crashing. I just kind of fix these things as they happen, but again, that's a bus factor of one, so I wanted to let people know about them. If I'm off on my desert island and these things start breaking down, these are what they are, and they need fixing. Things that indicate that one of these things has broken down: the validation events in the wiki just stop appearing, or they keep appearing but the announcement mails for them aren't being sent, which has happened before; results from the OpenQA tests stop appearing in ResultsDB or stop appearing in the wiki. If any of those things happens, it probably means that one of these robots has died. Sad robot.

Yeah, so that's actually gone faster than I expected, which is great. At this point, if anyone has any questions, fire away. Yes. Okay, so the question was: how do we run those robots, where are they running? They run in Fedora infrastructure, on the OpenQA servers. There are two OpenQA servers, a production one and a staging one, and most of these robots have production and staging versions, so the production one runs on the OpenQA production server and the staging one runs on the OpenQA staging server. So it's basically the same machine: if you go to openqa.fedoraproject.org, it's the machine that is running that. For three of these, it makes sense for them to run there. The others could really run anywhere — something like Auto Cloud Reporter could run anywhere, because all it needs is to be able to listen to the Auto Cloud messages and send to ResultsDB. That's another reason they run on this machine, though: ResultsDB authentication is basically IP-address based, and this is a machine that is allowed to submit results to ResultsDB, so it's a sensible place to run them. That's how it's done. And the way the deployment is done is, like I mentioned, via Ansible. Yeah.

And your second question? Kevin could maybe speak to this. It's one of those things where, if you start doing it, it makes you want to fix a bunch of other things.
So the whole release process should kick off an initial event, and then lots of things happen in response, because that's how it should really work — but that's not how it works right now. I mean, stop me if I'm being inaccurate, Kevin. It's like, you start thinking: why doesn't this task happen magically? Why doesn't this task happen automatically? How do you do this whole thing? It becomes one of those. There is a plan to do it with PDC, the Product Definition Center, but it's not in place yet. Yeah. The answer is basically that it naturally is one small part of a much bigger project, so the problem is getting that much bigger project done. It may be one of those things — I've actually thought of doing that, honestly, something that just checks every half an hour and sends out a message about the changes — and if we don't get the project done in another year or so, maybe I'll do that. But that's kind of why it's never happened so far.

Any more questions? Dominic. What are the plans to monitor the robots — is there some end-to-end testing? Yeah, good question. The question was: do we have plans for monitoring the robots, doing end-to-end testing? Currently, an extremely advanced intelligent system called "me" notices when these things stop happening and fixes it, but obviously that is not scalable or sustainable, so yes, we should have automated monitoring for these. There is some, to an extent: anything that runs in infra is monitored by Nagios, so, I mean, OpenQA gets noticed when it goes down. The tricky thing about automating monitoring of these things is that it becomes kind of a meta-monitoring problem: if you want to monitor whether the validation event creator is working properly, you kind of need to know the rules about when it creates events, so you're basically just rewriting it. I believe — again, Kevin may know more about this than me — there's some kind of plan for generic fedmsg monitoring: noticing when things that should happen in response to fedmsgs don't happen. Do you have some Nagios checks for that kind of thing? Yeah, it's basically just an "I have not seen it" check: it does a datagrepper query that says "what was the last validation event", or something like that, and you can say there has to be one of these every so often, or it alerts. So that would be kind of the way I'd go about doing it, but honestly, it's just been an "I'll get that done sometime" kind of thing. I'd probably use that and say: if there hasn't been a new validation event in two weeks, probably something is wrong — either there genuinely hasn't been one, or the robot has been broken for two weeks, which is the other case where you don't get a validation event for two weeks. Yeah, that's definitely something that we need to set up for these.
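For what it's worth, that kind of datagrepper check could be as simple as something like this — the topic filter here is a guess at what a relvalconsumer announcement message might look like, not a real topic I've verified:

```python
# A back-of-the-envelope sketch of the datagrepper check described
# above: has a matching message appeared in the last two weeks?
import requests

resp = requests.get(
    "https://apps.fedoraproject.org/datagrepper/raw",
    params={
        # Hypothetical topic; the real check would filter on whatever
        # message actually announces a new validation event.
        "topic": "org.fedoraproject.prod.relvalconsumer.event.create",
        "delta": 14 * 24 * 3600,  # look back two weeks, in seconds
        "rows_per_page": 1,
    },
    timeout=30,
)
if resp.json()["total"] == 0:
    print("No new validation event in two weeks; something may be broken.")
```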
Do we have more questions? Jack. Auto Cloud — does it run the tests itself, or does it monitor the QA? No, it runs the tests itself. It's a whole kind of self-contained test system. It basically came about — I believe; again, if someone thinks I'm wrong, please correct me — as part of the whole Two Week Atomic thing, where they really needed some basic checks of cloud images, and we didn't have anything else at that point that could do this. Taskotron can't do it, really — I mean, it could, but it couldn't at that time — and OpenQA either wasn't around or just wasn't geared up for doing this kind of thing. So it was basically one of those "we need this thing right now, someone wrote it in a week" kind of things, a very quickly thrown-together thing. I believe it spins up a Docker container, basically, runs the image it wants to test, makes sure that it actually comes up, and then runs some basic tests on it, which are more or less just shell scripts, and expects them to pass or fail. It's very — I want to say simple slash basic: it gives you a one-or-zero result, basically. For each image, it just gives you a pass or a fail, which is actually a synthesis of about 15 different things it's doing, but you just get "this image passed" or "this image failed", and that's the result that gets forwarded to ResultsDB. It's one of those things that, as I mentioned, we kind of wish didn't exist, but these things just kind of show up. We'd love it if it got folded into the pipeline, or folded into something which is not itself, but it's one of those things where it works right now, and no one really has the cycles to do that. So that's why Auto Cloud is around. I honestly haven't looked into it in much detail for a while, but I know it's still there, still doing something — at least as of a few weeks ago, because I checked that there were still results showing up for it, and there are. So it's there, yeah.

Oh, Dominic, yeah, please. If you wanted to contribute to one of these things, how would you do that without breaking anything? Yes — the more complicated ones have test suites, and they actually have CI set up through the Pagure CI integration system, so you can actually send a pull request. I honestly don't remember off the top of my head which ones this is true of, but I think if you send a pull request for relvalconsumer, it'll get CI: it'll run the test suite, and your commit will be blocked if it doesn't pass. Some of them are super simple, so they don't have test suites, but I'm not going to go through them live and remember which don't. fedora_openqa has a test suite as well, and I believe it's set up for CI — I don't remember. Something like Auto Cloud Reporter is such a one-page thing, and it has worked ever since I wrote it, that I've just never got around to going back and putting a test suite in it. But the ones that are significant do have tests. And that's actually a good point: as I mentioned, all of these slides have links to the repositories, and the repositories are hosted on Pagure, which is just like GitHub, more or less. So if you want to look at these things, contribute to them, make them better — please do. They're open for pull requests, and there is documentation as well; there are pretty decent explanations of how they work, and they should hopefully be quite easy to look at and mess with. So please do; if you want to send something, please send it along.

Anything else, or is everybody done? That was just a yawn, right? Yeah, I saw you. Just joking. All right, in that case: thanks a lot for coming along, everybody.