Okay, let's go ahead and get started. I think it's time. Yes. So we're going to talk today about the state of our infrastructure. Can you guys hear me in the back? Yes. And can you see this tiny print? Mostly. My name is Kevin Fenzi, and this is Pierre. We're going to talk about the state of Fedora infrastructure, and we're going to go over a bunch of things that we've done in the year since the last Flock — at least the highlights of things that we remembered and thought were important and that people may not have realized. We're bound to leave things out; there are just so many things that happen, so we are bound to have forgotten something that you will definitely know of. So if we're talking about a particular application or whatnot and you remember something that happened, please chime right in. If you have questions about what we're covering, feel free to chime right in as well. There might also be things that you have already heard about this morning — we're going to point to a number of other talks at this very conference about specific things we worked on this last year, so if you're more interested in a particular subject you may want to go to those talks.

So let's go ahead and get right into it. Does that work? Yes. Okay. So here is an item: how many of you knew that we had an OpenStack cloud? A fair number of people. Good. So we've done a bunch of work on the cloud this last year. Just recently we've added some PowerPC64 compute nodes, and Copr is going to start using those very soon, which will be nice. They'll prevent some issues with those PowerPC64 builds and they should be a good deal faster, I hope. We've added storage — there was a lot of storage added last year — and there's a lot going on in Copr; you can find the Copr guys wearing shirts that say "ask me about Copr". Copr uses our private cloud to do all of its builds and so forth, so a lot of things going on there. We have a Jenkins instance that is getting more and more projects in it; it does its builds in the OpenStack cloud as well. And as a note, we have been transitioning our cloud stuff to fedorainfracloud.org as a separate domain. We initially had things set up as cloud.fedoraproject.org, but that presented certain problems — for example, you couldn't do HSTS headers on all of fedoraproject.org if you had some cloud instances running without certificates and that sort of thing. So we moved it off to its own domain to get around that problem.

There's probably some other stuff there, but on to the Ansible move. At the last Flock we were talking about moving our stuff to Ansible, and we completed that, so now we're running all Ansible for our configuration management. We've been helping test upstream Ansible releases, which has been really good — for the last couple of releases I've been testing all their release candidates and finding issues, and they fixed the issues before the actual release, which is great. The Ansible folks have been really responsive to bug reports and whatnot; good stuff. We've also been working on cleaning up our Ansible playbooks and trying to make sure that they're idempotent, so you can run them 20 times and get exactly no changes after the initial setup. We will have a workshop on Friday, I think it is.
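As a rough illustration of that idempotence goal — this is not our actual tooling, and the playbook path here is a hypothetical placeholder — you can run a playbook twice and check that the second pass reports zero changes:

```python
import re
import subprocess

PLAYBOOK = "playbooks/groups/proxies.yml"  # hypothetical playbook path

def changed_count(output):
    """Sum the changed=N counters from the PLAY RECAP section."""
    return sum(int(n) for n in re.findall(r"changed=(\d+)", output))

# First run applies any pending changes.
subprocess.run(["ansible-playbook", PLAYBOOK], check=True)

# Second run should be a no-op if the playbook is idempotent.
second = subprocess.run(["ansible-playbook", PLAYBOOK],
                        check=True, capture_output=True, text=True)
if changed_count(second.stdout) != 0:
    raise SystemExit("playbook is not idempotent: tasks still report changes")
print("playbook looks idempotent")
```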
I'm not exactly sure — I don't have the schedule handy — but Thursday or Friday. We have an Ansible workshop where we're going to work on this: we set up our Ansible infrastructure when Ansible was first created many years ago, and we've been trying to keep up with things, but there are some standards that we haven't adopted, and we haven't moved to the structure that people expect these days. So we're going to have that workshop and try to work on it. Because we have a lot of Ansible playbooks, we're a good example project for other people to look at; all our stuff is open and we'd like it to be standards compliant and set up correctly for people to look at. It says Friday morning, but I don't know if that's correct.

Over this last year we moved most of our standard infrastructure-type stuff to RHEL 7. We have a breakdown here of what our OS releases are. An astute observer will notice that we have Fedora 22 listed here, which recently went end of life, so there are some instances there that we still need to clean up — not too many, just a few. There are Jenkins instances, and there are some build instances in our private cloud that various people have been using that we need to have them transition off of, et cetera. Is that due to incompatibility or just time? Just time. For example, we provide build systems in our cloud for the Twisted Python folks so they can run Twisted tests on Fedora 22, 23, 24, et cetera. I've set them up with Fedora 24 instances and they've turned those on, but they haven't turned off the builds on the 22 instances yet, so we can't retire them. So on the Fedora nodes it's only time constraints; on the RHEL 6 side we have both time and compatibility constraints, and that's why we still have 26 RHEL 6 instances. That's not as urgent, because obviously RHEL 6 is going to be supported for quite a bit longer, but we'd like to move those along, and there'll be more about that in some later slides.

When I put these together, there were 525 instances that we had managed in Ansible. That isn't all the instances, though, because a lot of the cloud ones are just done ad hoc by people and things like that. So we have quite a few instances out there, and some of them are spread out quite a bit — our proxy servers are spread out all over the world. So some parts of our Ansible-managed structure are pretty quick: you can configure something over 10 hosts, and if they're all in the same data center, it's great. But if you're going across our proxy servers, which are in 8 data centers all over the world, that takes a long time; some of them are slower to reach from the management hosts, et cetera. Also, moving to Bodhi 2 and HyperKitty/Mailman 3 allowed us to get rid of some of the RHEL 6 instances and move them to RHEL 7 or Fedora over the past year.

Another thing we've done since the last Flock is bring the secondary architectures into our main infrastructure. In the past, the ARM, PowerPC, and s390 Kojis and their build systems — the secondary architectures — were all just sort of run by the people who worked on those. And as you might imagine, they didn't have a whole lot of time to apply updates regularly, or configure things the same way the primary was configured, or move to RHEL 7 quickly, et cetera. So we moved all of those into our Ansible management infrastructure, so now they are managed exactly the same way the primary Koji is managed.
They use the same Ansible templates, so we know that they're actually set up the same way, with the same OSes, and they follow the same update patterns we do — all that stuff. That has been really nice. We've run into problems in the past where we have an update on the primary Koji and it doesn't filter out to them, and then they have problems building because they need a newer package, et cetera, et cetera. So that's been a really good one.

Looking forward from the sysadmin side of things, this is a slide for Patrick here, since he works on our private cloud. We have a lot of cloud plans over the next year or two. We want to update to the latest Red Hat OpenStack Platform: right now we're running 5, which is pretty old, and we'd like to move to 8 or 9, whichever is current by then. That's a huge amount of work, right? It is, but... I don't know how compatible those releases are. They aren't very. So basically what it will entail is spinning up a new cloud on the new version with a few compute nodes, and then migrating everything over to it and adding compute nodes as you migrate. So it's a lot of work, but I think it'll be worth it, and once we're at that level the later OpenStack releases do support upgrades and things like that. But does somebody document what happens with that, besides it going into the git log and Ansible? Does that actually get written down anywhere? As far as what? Well, because it's a useful bit of documentation for anybody else who ever has to do the same thing later. Sure. Because I've looked at this stuff — I looked at the Ansible playbooks and such just to see how I might do things in my own setups. Right, yeah. So the question was about the private cloud and upgrading and setting it up. Right now we're on OpenStack 5. There's an Ansible playbook it was installed with, but that was three years ago, something like that, and things have changed a lot — the installer is different, the configuration is different. So basically once we move to the new setup, it's going to be a different playbook; it's going to be a whole new setup. But it should be in there, definitely.

So that's what we want to do. We want to add some aarch64 and ARMv7 instances, either via virt on aarch64, or via Ironic, which is the "treat a real machine as a cloud instance" type of thing. We want to get Ipsilon authentication. And we want to set things up so that more people can use our OpenStack, for things that are useful for Fedora. That's one of the things we're going to talk about at the infrastructure workshop tomorrow: laying out how we want to set that up. Basically it should be fairly easy for us to provide authentication and some amount of resources to any contributor, so that they can log into OpenStack, spin up an instance, test something, and do whatever it is they need to do easily. Now, we're going to have to figure out what resources we can provide to everybody, what makes sense for people to run there, whether we want people to run long-term things, and how we want to secure that. All those questions remain to be solved, but I think that could be something that's very useful for folks in the coming years. And then there's work on setting up an OpenShift instance on top of our private cloud, again for contributors to be able to spin up applications and test things. The cloud working group folks are working on that.
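Just to give a flavor of what that contributor self-service could look like — this is a hypothetical sketch using the openstacksdk library, and the cloud name, image, flavor, and network below are made-up placeholders, not our actual configuration:

```python
import openstack

# Connect using credentials from a local clouds.yaml entry (placeholder name).
conn = openstack.connect(cloud="fedorainfracloud")

# Boot a throwaway test instance; all names here are illustrative only.
server = conn.create_server(
    name="my-test-instance",
    image="Fedora-Cloud-Base-24",   # placeholder image name
    flavor="m1.small",              # placeholder flavor
    network="contributor-net",      # placeholder network
    wait=True,
)
print(server.name, server.status)
```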
Another thing we want to try to do over the next year is move the stuff that is on fedorahosted.org, which is kind of our old open source project hosting, and we want to try to move those things to Pagure. It's going to be a lot of work: there are going to be things that we have to change, there are going to be projects that don't want to move, and there are going to be projects that want to move somewhere else, and all sorts of stuff like that. We want to give a long ramp here — we don't want to just say, oh, we're turning off fedorahosted tomorrow, get your stuff moved. We want to give everybody the opportunity to migrate their stuff. Over the last couple of months I've been trying to talk to the larger projects on fedorahosted and get them to file issues in Pagure for things or workflows they need, with mixed success — they've provided some stuff. We do have an importer now — or exporter, or whatever — yes. And hopefully attachments are working there. Cool, very good. So we now have an importer that will take a Trac project and all the issues from it and move them over to Pagure issues, so you don't lose any history that way. Git repos, of course, are easy to move. But some people have particular workflows in their Trac projects or whatnot, so if you have a fedorahosted project and you have an issue moving to Pagure, or there's some problem or missing workflow, please do file an issue against Pagure and we'll see what we can do.

There's a question about the few projects that are not using Git. There are a few projects — fedorahosted supported Mercurial, SVN, and Bazaar. All of those are convertible to Git, yes, so problem solved. And there are a couple of folks I talked to who were looking to move to GitHub or somewhere else, just because the rest of their projects are there or their contributor base is there or whatever, and that's certainly valid. Pagure has been released for six, seven months now. Yeah, yeah. So it's usable by anyone. What I mean is, Trac is very easy to use for many users — this is coming from the translation team — and with Pagure's collaboration, is there documentation on how you do things and that kind of stuff? I think there's some of that, and I think it's also improved a lot in the last few months. There's documentation: your Pagure project can have documentation now — it's another docs repo, which will render, so you can actually describe what you want users to do and how you want them to file issues. There are templates for issues now. So I think a lot of that is there now. But again, user interface is difficult; I mean, we're always looking to improve it. Some people like Trac and think it's great, but other people despise it. You do the best you can. Well, you can't support it forever — I think our Trac is pretty far behind. Well, it's on a supported branch, but it is behind. And again, Trac is a project that was supposed to have its 1.2 release out three or four months ago, something like that, and it just isn't released. So if you have comments or ideas on how to improve things, then there's Pagure — there's definitely room for improvement and room for ideas to be conveyed. How much work is there available to do things in Pagure? Sorry, that makes no sense — how many developers does Pagure have right now?
Well, interestingly, Pagure is actually one of the most successful projects that we are running in the infrastructure, both from the activity point of view and from the contributor point of view. It's well beyond the others on the number of commits compared to the other projects, and it's one of the projects that is attracting the most newcomers. The person behind you is a perfect example: he's new to the community — he's been in Fedora for a few months — and he has written the Pagure importer and he has a few patches in Pagure itself. And I have stuff I want to do too. My question is: with the issues you file against Pagure that prevent you from moving from Trac, what chance do you have of actually getting them fixed? I mean, the ticket list is over a hundred; there are people who are going to ask that question at least. Right. I mean, we can only do the best we can do. Yeah, it's hard to say. For the things that we need for the hosted move, I think we're going to set up a label, so we can at least know what those issues are and maybe prioritize them as best we can with the developers we have.

Ah, yes, containers. So we've been talking about moving into the container world, now that things are a little more stable and a little more useful, and now that we have a container build system, which Adam was talking about earlier this morning. I'm looking at maybe setting up a mirrorlist container; I think that might be a good fit for us, but we're going to have to figure out exactly how to do it — what part is going to be in the container and what part is going to be outside the container. The mirrorlist is our application that returns a list of mirrors to users: if you do a dnf update, you're querying the mirrorlist server and it's giving you back a metalink of servers and checksums and so forth. So it's a very critical app — it's something that we want to have running all the time, we don't want to disrupt it, et cetera. It's one of the few applications where people report issues before our monitoring does. Right — if there's ever a problem, we hear from users before our monitoring even goes off; something happens and some user shows up saying, hey, I can't do a dnf update. So it's obviously very critical, but it's also very isolated: it takes the mirror list, which is this blob of data, and it just serves that data out. That's all it does. So it's a good candidate for a container. So we're looking at maybe setting that up, leveraging the container build system. And if we do this, it's going to have other advantages: if you, for whatever reason, were operating in an isolated environment, or you had a particular environment and you needed to run your own mirrorlist server, it would be very easy to just take this container, give it your own data, and then get your own metalinks or whatever back. We're going to talk about this in the infrastructure workshop and sort out how we're going to do it.
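To make the mirrorlist piece concrete, here is a minimal sketch of the kind of request a dnf client makes against our mirrorlist servers; the repo and arch values are just example parameters:

```python
import requests

# The mirrorlist/metalink service that dnf queries before downloading packages.
URL = "https://mirrors.fedoraproject.org/metalink"
params = {"repo": "fedora-24", "arch": "x86_64"}  # example repo/arch values

resp = requests.get(URL, params=params, timeout=30)
resp.raise_for_status()

# The response is an XML metalink listing mirror URLs and repomd checksums;
# just show the first few lines here.
for line in resp.text.splitlines()[:10]:
    print(line)
```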
We've been talking for a number of years about getting on the database replication train; we just need to buckle down and do it. I've been working on it some this year. Right now we have a number of database servers, and we back them up and whatnot, but they're not highly available. That's why you see us do outages periodically when we have to update things: if the database server gets rebooted, there's an outage while it's down. We want to move that to a high-availability type of setup where we don't have to have those outages — we can just fail over, reboot, et cetera.

There's a proposal out there — it hasn't had wide discussion yet, though it was brought up to FESCo — about the secondary architectures. Right now the secondary architectures have their own Kojis: s390 is its own Koji, aarch64, PowerPC. There's a proposal to just move all those builds into the primary Koji so that they all happen at the same time. That would simplify a lot of stuff for releng, and it really wouldn't have that much impact on people, because right now if you do a build on primary and it fails on PowerPC or something like that, there's a bug, somebody has to look into it, somebody has to fix it, somebody has to submit a new build. So this would just make that process quicker and easier for everybody. It's going to be discussed more, probably in the next few weeks, and if we do it, aarch64 would probably be the first one to move in.

So what would it mean if the secondary architectures failed? It would mean that the build would fail, so you would say, okay, my build failed, and either it would be something obvious that you could just fix, or you would go, why is that failing on s390? And you would talk to the secondary arch folks and say, why is this failing on s390? And they'd say, oh, here's a patch. Would it be like the whole build failed? Yes — well, it depends; there's some talk about that. Right now the way it would work is the whole build would fail, but there's some talk about changing Koji so that it doesn't do that. Right now, if any architecture fails, Koji fails the build and cancels the builds on the other architectures. There's talk about making it still fail the build but continue all the architecture builds to their conclusion, so that you would know exactly which architectures it actually failed on instead of just one.

You had a question? Can I hop in? Even now, sometimes I'm building packages that take hours, and then I wait for the ARM builds to finish. So, all of the secondary architectures are faster than our primary ARMv7 architecture, so it shouldn't be a problem — it shouldn't increase the build time any. However, also in that vein, we should be making ARMv7 builds a lot faster soon: we have a set of aarch64 hardware that can run ARMv7 VMs, so we're going to move those builders to VMs on aarch64, and they should be, I think, three to five times faster. So that should speed it up, yeah — everyone's looking forward to that. That should happen relatively soon, and ARMv7 is the slowest; PowerPC, s390, and aarch64 are all faster than ARMv7, so once we land that, it should speed up everything.

It's not the speed I'm worried about — say I have some critical issue, and then some architecture I know nothing about is blocking me from pushing a critical update to Fedora, and nobody from that architecture team is available to help. Should I exclude the arch just to get it out? That's a perfectly valid question, and that's what we'd have to discuss. This is supposed to be brought up on the devel list for discussion, so yeah, maybe there should be a procedure like that.
So, like, if it's a security update and you have to get it out quickly and s390 doesn't build or whatever, maybe that is a valid thing to do: exclude the arch, do that build, and then revert it so that they can fix it and push out another build, but all the other architectures can actually be updated. So this is...

There's been a lot of concern about shifting work from the secondary arches onto the primary maintainers of these packages. Well, it's not actually shifting that much — if some build fails, somebody needs to engage in order to fix that build. There are some statistics on this; I don't have all the data here, but it's something like 99% of packages that build fine on primary also build fine on secondary — there's no issue. And when there is an issue, it's usually something that the secondary arch person has to come up with a patch for, or do something invasive — some work. So in the vast majority of cases it doesn't matter. My question is, wouldn't we give extra resources to packagers, like the ARM machines we have in the cloud, where people could basically test? Theoretically we have that — we have that for ARM, but not for the others. That's a perfectly valid question; I agree that having those resources would be good. We could do PPC now in the cloud because we have a compute node there, but we still need s390 and aarch64. I think the largest problem you get in this situation is when you're trying to build a noarch package and it picks a random architecture that simply does not support OCaml or some Java thing or something. Right — you're not really noarch, but you are, because now you have a package where you'd have to know which arches to exclude, and of course then we break noarch, which is not something we really want. It's been a long, long-term problem, and I don't know what the good answer is for that. Anyway, there's going to be a lot more discussion on this on the list, and Peter has a big proposal and so forth.

All right, what else have we got? Monitoring automation — we're working on that this year; smooge is going to work on it. Right now we use Nagios, just because it sucks the least. There are certainly lots of other options out there now. But right now our Nagios setup is very manual, so when we create a new application or instance or whatever, it depends on whoever is adding it to make sure that they monitor all the right things and add the hosts and so forth. Since we now have Ansible, there's no reason to do this manually anymore; this should be completely automated. When you add a new host in Ansible to do something, it should get monitored. Hopefully this year we'll have that.

I think we're on to the application stuff. On the application side, as Kevin said, we have had quite a number of changes this year. We basically cannot list them all here, because there are just too many applications and too many changes. What we try to do here is highlight some of those that are big enough, some that we consider important enough, and some that you might not have heard about. These are applications that we have developed or changed in important ways. Oh, we missed FMN on that slide — we missed FMN right there. Important changes that you might not have heard about. The first one I'll go over quickly, because it's probably something that you already know and have heard about: Mailman has a new major release that was long overdue, which is Mailman 3.
We are moving away from the old Pipermail archive interface, where emails were listed by month, and if you had a thread going over two months you had to go from one page to the other to actually read the entire thread. This is something a little bit more — I'm going to say Web 2.0 — where you actually get the whole thread in one place and you can reply online directly. That's basically HyperKitty. HyperKitty is developed by Aurélien, who is here in the room. One of the ideas behind it is to be able to bridge forums with mailing lists, so you can interact with a mailing list as if you were just looking at a forum. One of the key features I see in there is that you can actually go to HyperKitty and send an email to a mailing list that you're not subscribed to: you won't get any emails, but you're still able to send your message, get the replies, and reply to the thread, without having to follow all of the other communication going on on the list. You're not going to receive any of the list traffic, but you still get your message through and you can still access the replies. I think that was one of the key features of it. The other advantage is that nowadays the migration has been completed, so there is no more Mailman 2 running. Well, there is one left, on fedorahosted. Actually, even the fedorahosted lists are now Mailman 3 and HyperKitty based. There is a talk about that, so if you want to learn about HyperKitty it's at 4:30 this afternoon with Aurélien.

So, a question: this mailing list is private — I was searching for a particular list, and it's private, so you cannot access the archives; that's normal, but you cannot access anything at all because it's private, so you cannot even register. So the question is: how can you subscribe to a mailing list that is private, since you cannot see the mailing list at all? Yeah, so there are two web interfaces for Mailman 3: there is indeed the archiving one, and there is a web interface for administration called Postorius, and you may not have seen the link to that. There you have a way to subscribe to lists and administer them. So there are two interfaces to Mailman 3: one is the archive, and that's HyperKitty, and one is the admin side, and that's Postorius. You cannot subscribe via HyperKitty, because that's the archive and the archive is private, but you can do it via Postorius.

So, another small application that we wrote is called MDAPI, and MDAPI is a very simple application. The idea is that we have the yum repositories, and the yum repositories consist of some small files and small SQLite databases — when you run dnf update, you see downloads at the beginning, and that's basically it fetching these files. But if you want to actually get the information out of these files, you either need to use the DNF API, download and cache the information locally, and then retrieve it again when there's a new round of updates, or you can use MDAPI. The idea is that it's a very small API that just exposes what is present in the SQLite databases from the yum repositories. So you can ask what the dependencies of a package are, you can ask for the changelog of a package, the files present in a package, the version and release, the subpackages, and you can use a subpackage to go back to the source package — all that kind of meta-information that is present in the yum repository metadata and that you may want access to.
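A rough idea of what querying it looks like — the exact endpoint layout here is from memory, so treat the paths as an approximation rather than a reference:

```python
import requests

BASE = "https://apps.fedoraproject.org/mdapi"

# Ask MDAPI about the kernel package; endpoints are roughly /<branch>/pkg/<name>.
resp = requests.get("{0}/rawhide/pkg/kernel".format(BASE), timeout=30)
resp.raise_for_status()
info = resp.json()

print(info.get("version"), info.get("release"))
# Other endpoints expose the rest of the repository metadata, for example the
# files and changelog of a package, or /branches for the list of known repos.
```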
And this is public, and it covers several repositories: it's using Rawhide, it's using the updates-testing repositories, the updates repositories, and the release repositories. So if you ask for information about the kernel, if it doesn't find it in updates-testing it's going to look in updates, and if it doesn't find it in updates it's going to look in the release repo, so eventually you should get an answer. It might take a little bit longer, but it's actually very, very fast — you don't notice that it's going through three databases to give you the answer. And it returns a nice JSON blob, so you can integrate it with your application and play around with it.

Are there any plans to expose the latest repo? So, the Koji repo is in there as well. When you go on the site — I didn't put the URL here, it's on apps.fedoraproject.org — you have a nice landing page with a few links, and one of them is the branches link, and branches basically lists the different repos that are available: it will say koji, it will say EPEL 7, EPEL 6, EPEL 5. One other thing to note here is that it also emits fedmsg messages, so it'll say "mdapi noticed the Koji repo database was updated" or "the Fedora 25 updates repo was updated", and then you can see what changed.

Questions? Is there a repoquery equivalent that we can use that uses this? That's something I've had in my mind for a little while, but I still haven't played with it, because I really would like to get the iterative aspects of repoquery — building the dependency trees and that kind of thing — working through the API; it would be pretty fun to write. So the answer is no; it's still open if someone wants to pick it up, but otherwise I might get around to it at some point.

The good part about it is that we actually started dogfooding it. Fedora Packages is an application that runs at apps.fedoraproject.org, and it's not really meant only for contributors — it is used by contributors, but it's also used by people outside the Fedora community to see which packages are in Fedora, which version they're at, who the contact people are, what the bug reports are, what patches there are, because it lists the content of the Git repo. And it was getting fairly slow; the architecture was a little bit weak. One of the problems it had was that it was actually using the SQLite databases from yum directly, and it had a lock: it would update the database and then the lock would not be released and the front end would be blocked by this lock, or the front end would lock the database and then the backend would not be able to update the data. So it was a little bit of a mess playing with that. So we got rid of all the yum integration parts and we just use mdapi nowadays, and it caches the information from mdapi, so it's very snappy. And it refreshes or invalidates its cache when mdapi sends a fedmsg message saying there was a new Koji repo built.
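A minimal sketch of that kind of fedmsg-driven cache invalidation — the topic filter and the payload field here are made-up examples, not necessarily the exact messages mdapi publishes:

```python
import fedmsg

# A stand-in cache; in the real application this would be a proper cache layer.
cache = {}

# Listen to the fedmsg bus (requires the usual fedmsg config on the machine).
# fedmsg.tail_messages() yields (name, endpoint, topic, msg) tuples.
for name, endpoint, topic, msg in fedmsg.tail_messages():
    if "mdapi" not in topic:                                  # example filter; exact topics may differ
        continue
    for package in msg.get("msg", {}).get("packages", []):    # hypothetical payload field
        cache.pop(package, None)
        print("invalidated cache for", package)
```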
And mdapi doesn't just say that a new Koji repo was built; it also says what changed in that repo build — which packages were impacted by the update. So Fedora Packages just looks at these messages, sees there was a new kernel, and invalidates the cache for the kernel; when someone goes to the page and asks for the kernel, it queries mdapi and re-caches it, so that when the next person comes along they hit the cache directly. It's much more stable than it used to be; we have had far fewer tickets opened about the information being outdated. There are still a few small issues from time to time. One of them comes from the Ruby world, where we have one package that is present from two different sources, and basically depending on who looks at which page first, the information is going to be displayed one way or the other. But it's a bit of a corner case and there's no real good answer for that one, because it's tricky. And so yeah, it's message based; it's really neat. We do actually get bugs on it regularly, but most of them are because people think it is our Bugzilla, so they just report packaging issues on it. So we actually have a template in there saying: thank you for your ticket, but this is not the right place, please go to bugzilla.redhat.com.

Okay, I'm going to go a bit faster now. Mote is something I wanted to bring forward, mostly because it's pretty cool. This is the new interface for the meeting logs: when you do meetings on IRC you use zodbot, and you start and end meetings, and there are HTML and text logs that get published. Mote retrieves them, stores them, and presents them in a nice way. It's using JavaScript, so it's one of our few applications that is mostly JavaScript, I think. And one of the cool things about it is that it has absolutely nothing to do with the Fedora Engineering team: it was made by someone from the community who came to us and said, hey, I'm making this cool app for this, and we were like, okay, go for it. He wrote it, he's maintaining it, and we're just running it and reporting bugs to him. It's pretty cool.

MirrorManager 2 — that's something that you're all using but don't really see. It was an old application that dates back to the days when Fedora Core melted one of the routers on a release day in the Red Hat data center, and Red Hat was not so happy about the hardware failure. So we started to run mirrors, and then the question was how to direct people to these mirrors — that's MirrorManager. So it's something from Fedora Core 3 or 4; it was really an old application and it needed to be refreshed, and that's MirrorManager 2. It's live, you haven't seen it, it's working, so that's good. The good thing about it is that we brought back a couple of features that had been removed over time. One of them is the distribution of the mirrors over the world, so you have a map of the world and you can see where the mirrors are. There are a few stats integrated in it; one of them is this one. This is all the work of Adrian Reber, who is a Red Hatter but has nothing to do with the Fedora infrastructure team — he also likes MirrorManager, he runs a mirror, so he's just working on it. So this gives an idea of how many mirrors are up to date. This is for Rawhide x86_64, and each point is the number of mirrors that are up to date when the cron job runs. What you can see is — we're probably here at the Rawhide update — we have 20 mirrors that are up to date, and then at the next round we have 50, 70, 78, 85, and then we
have a new update. It runs every 4 hours, yeah, something like that. And what we see at the top is that we have some mirrors that are always outdated. So the question becomes, of course: is it always the same mirrors? Do we always have about 10 mirrors at the top that are just always outdated? And since they are always outdated, people don't use them, because the mirrorlist is not going to give you a mirror that's outdated. So the next thing that we should do is look at those 10 mirrors and see if they are constant over time. But it does give us an idea of how long it takes, when we push an update to Rawhide, for it to reach all of our mirrors, and we can see that it does take about — that's 4, yeah, 4 hours — so in 8 hours most of our mirrors are updated. We are working on it. tibbs here says his is more like 10 minutes.

What's the relation between MirrorManager and the mirrorlist app? So MirrorManager is the application that people running mirrors interact with: that's the place where they declare their mirrors. And the mirrorlist uses the data from MirrorManager. So there are three components: there is the front end, MirrorManager, where people register; there is the backend that takes the information from MirrorManager, crawls the mirrors, sees if they are updated, and generates a data file; and that data file is then given to the mirrorlist servers to be served to users. So that's the triangle that's being built here.

Then we have Basset. Basset — you may not have heard the name, but you have heard of the problem, and that's spamming. So, release the hounds: we have Basset to fight spamming. That's Patrick's baby. The idea is that when you register in FAS, it goes through a machine-learning process that tries to figure out whether you're a spammer, based on the information that we have; then you sign the CLA and it has more information; and then you start doing something and it blocks you if you behave badly. There is going to be a talk about that with more of the gory details — please attend.

Then there is Pagure — well, that's my baby. It's already been mentioned: it's our new forge, it's Python based. It got a very, very big UI facelift from Ryan here, so all the credit goes to him if you like the new UI — if you don't like the old one, that's me. That has been a tremendous help in getting the project started, I think, because it makes the project more attractive, including to new contributors.

There was one project that we had a slide for that somehow got removed: we also started reworking our FMN system. FMN is the Fedora message notification service. It's the place where you can go — it's on apps.fedoraproject.org — and point out exactly which kinds of notifications you want to get from our infrastructure. And you can not only say which notifications, but also where. So the idea is: I want to have an IRC notification when I do a build on Koji that is successful, and I want to get it by email when the build fails, because then I can archive the email and go back to the links to see what's going wrong. That's the kind of granularity that FMN allows. We did an entire rewrite of the backend — we changed the architecture of how the backend works — and we are about three times faster: we can process about 100 messages per minute where we were at about 30 messages per minute before, something like that. We deployed that last week Monday, and on Tuesday we had the mass branching, and I think we recovered from the mass branching in about
half a day, while it took us about two days before. What's the bottleneck there? The bottleneck is computation: for each message that goes on the bus, we need to compute all the filters of all the rules of every user in the database to figure out whether they want to be notified of this message, and where. So could that be done in parallel? It's parallelized to some extent — yeah, we now use a queuing system, with different workers doing the computation and then one worker at the back that just does the IO, sending to IRC and email.

Some message topics used to be ignored by FMN, I'm assuming due to performance reasons — would it be possible to enable those now? If there is a need for it, definitely. There are a few Copr repos that are blacklisted right now — the Rubygems and PyPI Coprs that are rebuilding the entire Rubygems ecosystem and the entire PyPI ecosystem. That's from before the rewrite; we kept the change in there. I'm actually curious to remove it and see how well FMN keeps up or not. I'd be interested in those notifications, actually. So that's maybe something we can look into; the only thing is, when we deployed the new FMN we did not clear out the blacklisted Copr repos. I'm not sure there is actually anyone who is really interested in getting a notification about every single build in the PyPI or Rubygems Coprs — if there is, we might be able to turn them back on, but if there is no one on the receiving end, I don't see why we should just put load on the system for the fun of it.

A while ago there was a promise of an Android application, some way you could get notifications from this — what happened to that? We're looking at making that; it's very much a work in progress, and the source code is there. One of the changes that's coming is that we're actually adding a new backend, and that's going to be a server-sent events backend — an EventSource server that will be able to push notifications. The idea is to integrate that in Hubs: in Hubs you're going to have a feed, a live refresh of notifications, and you're going to be able to say, on your Hubs page, I want to be notified by email, on IRC, or on my Hubs page about these and those subjects. And the EventSource server could be used for other kinds of notifications, including the GNOME desktop notifications that worked — or used to work. One of the ideas is that we keep all the computation of who wants to know about what in one place, instead of trying to spread that over different applications. I'm completely running out of time because I still have the what's-coming part, so I'm going to go very quickly through it.
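To illustrate the shape of that fan-out problem, here is a conceptual sketch — not FMN's actual code; the users, rules, and backends are invented for the example — of compute workers matching each bus message against every user's rules and a single IO worker doing the delivery:

```python
from queue import Queue
from threading import Thread

# Invented example data: each user has rules pairing a match function with a
# delivery backend ("irc", "email", ...).
users = {
    "alice": [(lambda msg: msg["topic"].endswith("buildsys.build.state.change"), "irc")],
    "bob":   [(lambda msg: "git.receive" in msg["topic"], "email")],
}

work = Queue()        # messages coming off the bus
deliveries = Queue()  # (user, backend, message) tuples for the IO worker

def worker():
    """For one message at a time, find every user/rule that matches."""
    while True:
        msg = work.get()
        for user, rules in users.items():
            for matches, backend in rules:
                if matches(msg):
                    deliveries.put((user, backend, msg))
        work.task_done()

def sender():
    """Single IO worker: actually deliver to IRC/email/etc."""
    while True:
        user, backend, msg = deliveries.get()
        print("notify", user, "via", backend, "about", msg["topic"])
        deliveries.task_done()

for _ in range(4):  # several compute workers, one sender
    Thread(target=worker, daemon=True).start()
Thread(target=sender, daemon=True).start()

work.put({"topic": "org.fedoraproject.prod.buildsys.build.state.change"})
work.join()
deliveries.join()
```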
Ipsilon is getting a pretty awesome new feature, which is OpenID Connect support. That's again Patrick's work — he is part of the OpenID Foundation, he is the Red Hat representative to the OpenID Foundation, he'll sign autographs at the end of the talk. So, OpenID Connect is basically OAuth2 on steroids, fixing some of the problems of OAuth2. It will allow us to do cross-app authentication, so things like: when you're on Hubs, you will be able to interact with Fedocal without actually going to Fedocal, or when you're on Bodhi or some new interface, you will be able to interact with PackageDB or Bodhi in one place. And it should also work much, much more easily for every CLI program that we want to have talk to our services.

FAS3 is going to be released, and I see two people here who are working really hard on it right now. We basically should have it running on staging by the end of the week, pretty much. After that we're going to break staging, then we're going to fix staging, then we're going to break prod and hopefully fix prod. I'll let you read through the changes here; one of the things is that it's also going to help us with everything that's token-based authentication, so every CLI tool should benefit from that as well.

PackageDB — that's something that has been mentioned: we have added namespacing to PackageDB so that it supports components other than RPMs. Docker was mentioned in Adam "maxamillion" Miller's talk earlier, and the modularity group wants to interact with PackageDB too, but there might be a PackageDB 3 around the corner for that.

Fedora Hubs — the idea is to make it easier for new contributors to find out what's going on in our community and how to reach us. Apparently we old-timers do like our small dark corners, so we want to provide some way for newcomers to join us on the dark side. There is going to be a FAD on Friday about it, so feel free to join.

Pagure also has new things coming up. Private repos: I'm not quite sure yet if we want to allow them on pagure.io, but if you want to run your own Pagure instance, or if your company wants to run one, then you should be able to turn on private repos versus public repos. We are working on being able to mirror code, so when you push to Pagure, it will be able to mirror to GitHub, for example, or somewhere else. We have the Pagure importer, which exists and allows you to bring your project from, say, GitHub or fedorahosted into Pagure itself. And something that has been asked for, and that we need to figure out, is getting mailing lists for projects — with Mailman 3 it should be much easier to do, and that's just something we need to get around to.

pkgs.fedoraproject.org: so far it's a nice cgit interface; you can browse the repositories and you can access the lookaside cache. Nowadays, in staging, you can go to /pagure and you can see which projects you're managing, you can see the content of the git repo, you can fork, you can open pull requests, and eventually you will be able to merge them. The thing with it is: you can fork, you can do pull requests, but you cannot do tickets. This does not replace Bugzilla, and it does not replace PackageDB, so all the ACLs, the group management, and the user management remain in PackageDB and are synced to Pagure when we sync the gits and everything. And it does provide hooks — things like web hooks are actually provided by Pagure — and, well, it's not deployed yet on staging, but that's something we really want to deploy at some point. It has git hooks, so you should be able to get commit mails. It just... do you want it?
It's working. Go ahead. Can you do CI per project, so that you can see the status of builds for your changes? We could. So, we're working on CI integration, and right now the only CI that we have is Jenkins, but we could see Pagure doing CI integration with something like OpenQA or Taskotron. Then when you open a pull request — pull requests can have flags — you would get a flag saying you have 3 warnings or 4 errors. There's also Koschei; you could integrate Koschei with the pull request, because the problem with CI is that you need to rebuild everything that depends on you to see if they broke as well — that's where it gets very interesting. What do you want, hardware?

tibbs — yeah, that's your baby. I'm just going to try to do a lightning talk on it, if I can figure out how to sign up for one. I'm just trying to speed up mirroring a bit. We talked about MirrorManager 2, we talked about the mirrorlist, but there is also the mirror side of it, and that's how mirrors report that they are up to date to us — if they want to — and that's quick-fedora-mirror; that's tibbs' baby. Basically, both of us have been running the script every 10 minutes or so, and it takes about 6 seconds if there are no changes — that's how little load it brings to both the servers and the host running it. And it feeds back into MirrorManager.

So, I already talked about the fedorahosted move, so that's one less to cover. The wiki is getting upgraded: it's moving to OpenID, so you're going to have the same login page as for everything else; it's moving from MySQL to PostgreSQL; and it's moving from RHEL 6 to RHEL 7. And there is the question of whether we will get better mobile support with the new version — that one has a question mark; we need to test. Are there going to be changes to the theme at all, or...? Okay.

We have about five minutes left — do you have any questions? Is the API Python 3 only? One of the latest services, written by Aurélien, to be able to do "++" outside zodbot, outside IRC, to give out a cookie or a badge, is Python 3 only. So we are moving slowly, but we are moving — a few of our applications are Python 3. One of the problems is getting the Python 3 stack in EPEL 7 with decent support; containerization might actually help with that, so that's something to see.

I have a question about... I don't know what the status is. Can I have two more minutes? It is in the same place it has been — the stuff is all there, it's just very dependent on my free time, and there is not a lot of that, so that's the only thing that's really stopping it from moving. There are just a few last integration things that need to happen, and they probably won't happen any faster unless someone sets their lasers to "annoy" — and probably not even then. You see the progress — it's been in progress for a long time, but yeah, it's slow progress. Any other questions? Thanks. Well, thanks — thank you for coming.