Okay, I think we're at time to start the next talk. One thing, please: if you're leaving talks early, try to close the doors as quietly as possible. I'm failing with my speech today, I do apologize. But let's get on with Ralph Bean and Adam Miller.

Thank you. Can you hear me? Is the mic on? So, welcome to this talk on Fedora infrastructure. We're going to divide it into two sections: generally, I'll talk about the applications and services side of things, and then I'll hand it off to Adam, who will talk about the release engineering side of things. If you were at Denise and Matthew's keynote this morning, you'll know that release engineering is a real priority, so I think that's probably the more interesting part of this talk, looking forward at how we're going to retool a lot of that workflow. Broadly, this is what we'll talk about today; I'll take the stuff in the one column, Adam will take the other.

So, Fedora infrastructure. Before getting into applications and services, here are some numbers about our environments and some projects that were done on the back end that people don't see. In the last year we finished our migration from Puppet to Ansible, which makes us much more flexible and able to deploy and deliver services faster. We used that same process as an opportunity to convert the majority of our hosts from RHEL 6 to RHEL 7, and here you see a breakdown: we have 516 hosts, and 405 of them are RHEL 7. We have a few RHEL 6 hangers-on, and some of those will stay for a while, just because, like the Jenkins builder, we need to keep a few things on RHEL 6. And then we have a number of Fedora hosts in our private cloud, our OpenStack instance. Another project on the back end over the last year has been SELinux. The number of hosts we had with SELinux enforcing was abysmal in the past, and we've made great progress in bringing that up; it requires extra work along the way to get all the policies right for our specific services, but we're making progress there.
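As a rough illustration of the kind of audit behind that enforcing-mode count, here is a minimal sketch assuming SSH access and a placeholder host list; it is not the team's actual tooling:

    # Hypothetical sketch: tally SELinux modes across a host inventory.
    # The host list and plain-ssh approach are illustrative assumptions.
    import subprocess

    HOSTS = ["app01.example.org", "app02.example.org"]  # placeholder inventory

    def selinux_mode(host):
        # `getenforce` prints Enforcing, Permissive, or Disabled.
        try:
            out = subprocess.run(
                ["ssh", host, "getenforce"],
                capture_output=True, text=True, timeout=30,
            )
        except subprocess.TimeoutExpired:
            return "unreachable"
        return out.stdout.strip() or "unreachable"

    if __name__ == "__main__":
        for host in HOSTS:
            print(host, selinux_mode(host))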
And then we migrated our OpenStack instance. The original instance was set up by hand and not kept in configuration management, so we upgraded to Icehouse, finishing that in the last year, and Ansible now holds all of that, so we'll be able to upgrade to the next version in a smoother way.

Among applications and services, I'll divide things into two groups, the first being rewrites and enhancements of pre-existing apps. The one I'm probably most excited about from the last year is the deployment of Bodhi 2, and I hope that people like it and enjoy it. The user interface is much fresher and hopefully more enjoyable to use, but a lot of the big improvements are actually in the code quality itself. The pre-existing version, Bodhi 1, was written on top of TurboGears 1, an old, way-outdated web framework that we didn't get any upstream support for anymore, and there was no test suite associated with it, so in order to make a change you had to worry about breaking the whole Fedora updates release pipeline. Now we have a very large and comprehensive regression test suite, so we can actually distribute responsibility for that service amongst the team. More of us can hack on it, not just the one original author, which is a huge win for us.

MirrorManager 2 was also rewritten in the last year; it was also on TurboGears 1, which we needed to get off of. It's the thing that manages all of our mirrors, and here's a graph of some of the statistics we can get out of MirrorManager 2 now that we can be more flexible with it. I think this graph is really interesting if you haven't seen it. The green is the number of mirrors that are freshly synced with the data we've just released, and you can see these blips where the purple extends down: that's when we've finished mashing a new set of repositories and pushed it out, and it takes time for all the mirrors to catch up. At the top, in red, we have a whole bunch of mirrors that are registered with us but have data older than anyone would ever want to use, and it's not clear if they're just not scraping any new data at all, even though they're still reporting in. We have insight into that now.

Fedora Packages is another application that I think a lot of people like using but are really frustrated with, because of the data corruption issues on the back end and caches being out of sync. The back end was rewritten almost entirely to have a more consistent data model and be a more reliable service. It's kind of a meta-app across all of our other apps, a single point of entry to find out any information you want about any package in Fedora. This is a diagram of that back end rewrite, which uses fedmsg to push data freshly into the cache, so it should be available before you even request it: instead of hitting a page the first time and waiting for the cache to load, it should be fast the first time you go looking.
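A minimal sketch of that fedmsg-driven cache warming, in the consumer style fedmsg services use; the topic is the Koji build topic as I recall it, and the cache-refresh hook is a hypothetical stand-in for the real Fedora Packages back end:

    # Runs under the fedmsg-hub; consumers are enabled via their config_key.
    from fedmsg.consumers import FedmsgConsumer

    class CacheWarmer(FedmsgConsumer):
        topic = "org.fedoraproject.prod.buildsys.build.state.change"
        config_key = "cachewarmer.enabled"  # hypothetical config toggle

        def consume(self, message):
            package = message["body"]["msg"].get("name")
            if package:
                self.log.info("refreshing cached data for %s", package)
                # rebuild_cache(package)  # hypothetical back-end hook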
One of the last major changes to pre-existing systems, and this is also something you would not be able to see on the front end at all unless you're using fedpkg to push things, which people may have seen this morning: we added namespacing, both to dist-git, our gitolite instance, and to PackageDB 2. This paves the way for being able to ship not just RPMs but Docker images, with Fedora contributors maintaining them in the same way they commit to spec files now. That will be rolling out in the coming year, but it was more work than we expected to add namespacing to all of that, because the assumption built into all the software was that RPMs are the only thing we do. Yes: the old un-namespaced alias will eventually be removed. The prerequisite is that we patch fedpkg and rpkg to intelligently rewrite your .git config to point to the new correct place, so developers won't have to do anything, but the alias will go away.

So, new services. Pagure: if you've used Pagure, I hope you like it; we get a lot of positive feedback about it. It's a forge, kind of like GitHub or GitLab, but it's written in Python and it has a number of features that neither of those have. For instance, all of your issues are kept in a separate git repository, which enhances the portability of the whole thing. On GitHub you can always get your code out, but your issues are locked into their service, and if they ever went the way of SourceForge, the whole open source ecosystem would be in a tight spot. Pagure is our answer to that. In the coming year we hope to use it to replace some of the other things we have around, like Trac and Fedora Hosted. We have only a vague roadmap for that, but it is on the table. And it's very cool: just last week we put out a new UI for it, if people haven't seen it.

HyperKitty, also from the last year, is the web archiver, the web interface to Mailman 3, which Aurélien Bompard has been working really hard on for the last few years. We rolled that out this year, and I hope people use and enjoy it too. For our team it's also nice because it provides a REST API, where the old Mailman web archiver had nothing of the sort, which means we can begin thinking about integrating mailing list activity as well; more on that later.

And a very small service: MDAPI, which stands for Metadata API. It's a microservice we released this year that very simply provides a JSON API on top of all the yum repositories that we push and release. If you ever wanted to know what provides something, and you wanted to do it from, say, JavaScript, before this I don't know what you would have done, but now you can just call out to this JSON API to get that data. It's very quick and easy. Previously, anything that wanted this data had to manage local yum repositories on disk, and they would collide and get corrupted; now we have a nice, stable point of access for all of it.
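A quick sketch of calling MDAPI; the base URL and endpoint path are from memory of the deployment and may differ:

    import requests

    BASE = "https://apps.fedoraproject.org/mdapi"  # assumed deployment URL

    resp = requests.get(BASE + "/rawhide/pkg/kernel", timeout=30)
    resp.raise_for_status()
    info = resp.json()
    print(info.get("summary"))  # package metadata straight from the repos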
Koschei: my team didn't directly work on it, but we helped with the deployment and figuring out how to put it in place. If you haven't seen Koschei, it's really cool: a continuous integration system for RPMs in Koji, which lets us detect fails-to-build-from-source errors before the mass rebuilds come along. A lot of people have been using it, and they sing its praises all day long. And Taskotron: again, my team didn't work directly on the development, but we've been helping with the deployment and coordination. Taskotron is really cool if you haven't seen it. It runs automated tasks in response to events in our infrastructure, with a limited set of checks right now, just rpmlint, depcheck, and upgradepath, but in the future we're hoping to add a whole variety of tasks to make it more generally useful for blocking and gating things in our release pipeline.

So, upcoming projects. In the coming year we need to revamp our entire monitoring infrastructure. We use Nagios and collectd for that, but the way we maintain it is extremely manual and time intensive: when we want to roll out a new service, there's a checklist of things to do, and in the worst cases that can take a week or two even if the code for the project is already written. We want to get that overhead down to a very small number, so we can be more agile and ship faster than before. A consequence of it being so hard to roll out a new service is that it's then tempting for our developers to bolt functionality onto old existing apps instead of splitting our services intelligently into a suite of services. Think about Bodhi, for instance: one reason that rewrite took so long was that it was such a large, complicated app involving so many different pieces of functionality, and because our infrastructure was not as agile as it could have been, we couldn't peel off those components. Managing buildroot overrides should be its own service; it doesn't really have anything to do with managing updates.

Right next to the monitoring initiative, DevOps tooling is something we'll continue working on. We have a pretty good set of those things around, but I think there's a lot more we can do to be able to work faster. Statistics and analysis: we have a lot of data sources that we comb through manually, writing one-off scripts like the ones behind Matt's keynote today, but we've built the prototype of a service that can produce those kinds of statistics on demand; it's a matter of hammering it into place and making sure it can do everything we want before we rely on it. fedmsg, our message bus, will be slightly expanded this year with two really nice new data sources; we've been working on the Bugzilla one for a long time. And Taskotron expansion: beyond the checks it runs today (rpmlint, depcheck, and upgradepath), we'd like to create a dist-git-style model for Taskotron checks, so that packagers can write custom checks for their own packages or families of packages and submit them, because the QA team can't possibly anticipate everything that needs testing across the huge variety in Fedora. Opening that up to the packaging group will be a good project; there's a sketch of the idea below.
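Purely illustrative of that packager-written-check idea; this is not the libtaskotron API, just the rough shape such a check could take:

    # Toy check: flag files under a forbidden path in a built RPM.
    # The heuristic and the PASSED/FAILED convention are assumptions.
    import subprocess

    def check_no_bundled_libs(rpm_path):
        listing = subprocess.run(
            ["rpm", "-qlp", rpm_path], capture_output=True, text=True,
        ).stdout
        bundled = [line for line in listing.splitlines() if "/bundled/" in line]
        return ("FAILED" if bundled else "PASSED", bundled)

    if __name__ == "__main__":
        outcome, details = check_no_bundled_libs("example-1.0-1.fc24.x86_64.rpm")
        print(outcome, details)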
Front-end unification: if you've looked at all of our apps, you'll notice that, simply because they were written at different times over the ten-year evolution of Fedora infrastructure, they have a patchwork feel, where they sometimes look very different from one another. Especially over the summer, we're going to work on a front-end initiative to give them a consistent theme. It's unlikely we'll be able to hit every service, but we'll hit enough of them, I think, to improve the experience for new contributors in particular.

FAS3 is another topic. We have FAS2, the Fedora Account System, and it's beginning to show its age. FAS3 is almost completely rewritten; it's a matter of finalizing the API transition problems we'll have, then deploying it and getting it out the door.

And lastly, Fedora Hubs, also mentioned this morning, is a new project we've been designing over the last year; development on it will really begin in earnest in the spring, in the coming months. Here's a mock-up from Mo Duffy. It's going to be really cool: a portal, like an intranet for Fedora project contributors, and I know intranet is a dirty word; people here don't like that word at all. It's primarily trying to serve two bodies of people, new contributors in particular, especially non-technical contributors who don't have any experience using IRC or mailing lists, and who look at Fedora as this huge pile of subteams and communities with no idea where to fit themselves in. With Fedora Hubs, the gist is that they'll be able to go to this web portal and be dropped immediately into IRC, with a ZNC bouncer on the back end so they have the history. They can connect with people, and they can see what people are doing through a fedmsg stream that shows the activity of the various teams and subteams. I'm very excited about that. That's all I have for the applications and services side.
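An activity feed like that can be built on datagrepper, the fedmsg archive, which exposes bus history over HTTP. A minimal sketch of such a query; the parameters are datagrepper query options as I recall them, and the widget around this is hypothetical:

    import requests

    URL = "https://apps.fedoraproject.org/datagrepper/raw"
    # Last day of Bodhi activity, five messages per page.
    params = {"category": "bodhi", "delta": 86400, "rows_per_page": 5}

    data = requests.get(URL, params=params, timeout=30).json()
    for msg in data.get("raw_messages", []):
        print(msg["topic"])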
I'm going to talk about the release engineering infrastructure. By show of hands, who went to Dennis Gilmore's talk earlier about release engineering? I'm not going to go into a whole lot of detail about what we've been up to and where we're going en masse with all the work that's going on; I'll touch on a little of it, but I'm mostly going to talk about the motivations for what we're doing and some of the infrastructure pieces that are in place, and being worked on, to enable future work. But before I do that... oh, that got chopped off, that's cool.

I want to talk about Fedora: where we are, where we came from, where we're going. Thinking about Fedora as an operating system, and about some of the ideas that keep getting talked about, the modularization and the rings and things like that, there's this weird split when you think about operating systems, and the question on the first line I'm pretty sure I stole from Alex Larsson: where does the operating system end and the application begin? We need to answer that question as we move into this world of modularization and decide what ends up in an application tier versus the base operating system, and imagine a world where those things aren't tightly coupled. Examples of that are Docker, rkt, xdg-app, and whatever might come next, whatever the new hotness ends up being. Really good working examples of it are Android and iOS (and, well, Firefox OS, but that's dead as of yesterday I think, so bad example) and Chrome OS. The idea in these operating systems is that you have an application space: you install your application, and it's updated independently. There might need to be cache regenerations and optimizations for the runtimes as updates get applied to the layers on top, but these things are loosely coupled and can transition between different versions. For example, you can write an Android application, submit it to the build system, and people running Android 4.4, 5.1, and 6.0 can all install it; basically there's a cache regen or a reoptimization, and you can even change the back-end runtime, the way they switched from Dalvik to ART. This can be done, it has been done, and I'm just thinking about it in the context of Fedora.

So that brings us to the Fedora rings. I know the rings idea has been talked about a lot with Fedora.next, but I wanted to talk a little about how it would work. This is loosely an idea of where that's going, and this diagram has been floating around for a long time; I stole it from Matthew Miller. Borrowed, I don't know, repurposed, whatever, fair use. It rendered better at some point; it's falling off the bottom of the page, which is bugging me, my OCD is kind of triggered right now. But anyway, the idea is that if we can decouple these things, we can allow different groups within Fedora to cater to their own desires and needs, so folks in the Server working group could potentially do something different, on a different life cycle or a slower cadence, than those in Fedora Workstation. And for people who want to run third-party applications, who don't want to consume content through the official Fedora repositories but want to do it in an external context, we can try to provide a mechanism that allows that to happen. We can do a lot of different things. Ralph mentioned before this concept of namespaced dist-git that allows for RPMs and Docker containers; we want to leverage that same concept going forward, because we don't know what's going to come next. We used to exist in a world where RPMs were the only artifact, and we would effectively roll them together into different image formats that
would be distributed. But in the future that's not going to be the case: we're going to have new delivery mechanisms where sets of RPMs are packed into containers, whether that's OSTree based or Docker based or xdg-app. It's all going to be different, and release engineering needs to cater to that. We need to find a way to react more rapidly and enable all of this while still keeping sanity, because at the end of the day we have to ship something that is testable, that is reproducible, and that can be verified. We're not going to just send you out into the wild west where every time you do an update everything crashes. I'm not going to say that will never happen; our QA team is amazing, but they're chasing a bullet train constantly, and there's only so much that can be done. But we try.

So, tooling today. We have Koji, and Koji is RPM centric. Koji is amazing at what it does, and it was created in a time when RPMs were the only artifact. This was before the cloud, before live CDs; Koji predates a lot of those technologies, and it has actually done a very good job of adapting. Was anyone at the ImageFactory talk? All right, awesome. It was a great talk, and it's a great tool: we have ImageFactory integration, livecd-creator, Pungi. Pungi is kind of a weird one, because Pungi 3 and Pungi 4 share a name but share no code, so today we're using Pungi and tomorrow we're going to use Pungi, but they're really different tools. There's Lorax; there's Bodhi, Bodhi 1 versus Bodhi 2; and there's what I call the Wild West, which is Copr. Copr is very powerful and does a lot of great things, and at least for me, I use it as a test and dev space for things I'm trying to get officially into Fedora while I'm still hashing them out.

So, today: some things are in process, some things have been done, some things are in flight. Koji 2.0 is still being designed, but one of the concepts around it is being content-generator centric, and what I think is amazing about that (for those who followed along recently, content generators got added into the Koji 1.x line) is the ability to have a defined metadata format and enabled build types, such that we can have different back ends. It makes the whole system more flexible.
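Roughly, a content generator describes what it built in a metadata document and hands that to the hub. This sketch is hedged from memory of the Koji content-generator format; treat the field names as approximate:

    import json

    metadata = {
        "metadata_version": 0,
        "build": {
            "name": "example-image",
            "version": "1.0",
            "release": "1",
            "source": "git://example.org/example-image",  # placeholder
        },
        "buildroots": [],  # environments the artifacts were built in
        "output": [        # the non-RPM artifacts being imported
            {"filename": "example-image.tar.gz", "type": "docker-image"},
        ],
    }
    print(json.dumps(metadata, indent=2))
    # A generator would then import it through the hub, e.g.:
    # session.CGImport(metadata, "some/upload/dir")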
livemedia-creator, as opposed to livecd-creator: Dennis went into a lot of detail about this, but there was just a lot of legacy cruft in the old tooling that produced our live CDs, and we're moving up. Secondary architectures in mainline Koji: that's more of an infrastructure-side aspect of release engineering. Instead of having these secondary Kojis that have to live and exist in weird places, they would fall under the umbrella of the primary Koji; it's just that if the builds for secondary architectures were to fail, that wouldn't fail the build for the primary architectures, it would just send a notification, as it normally does.

So, Pungi 4, and this is kind of where I was going. For those who may have followed along, Adam Williamson wrote up a really, really great summary of what this is and what it means for us: in the past, composes of Rawhide did not match the composes that became test candidates. Ah, you wrote that to the mailing list? Yeah, all right, I'm sorry. So Adam Williamson says he's actually writing a blog post about this in detail, and it's currently in draft form, so look for it on Planet Fedora or his website, happyassassin.net. Pungi 4 is going to change that, and it's going to enable a lot of things: it makes our daily and nightly compose process and our development process match what ends up becoming the finalized Fedora product, the thing that gets shipped out to everybody, so these things look more one-to-one.

Fedora Atomic two-week releases: this is in production now. We're doing two-week releases (we've had five or six of them) and we want to continue to iterate on this. There are a lot of aspects of this process that we had to shoehorn in, and we want to clean up and remove a lot of technical debt to enable it properly. And then we're again trying to find new ways to integrate with the Wild West: right now we already have the ability to just dnf copr enable things, and that's great, and we want to figure out where that fits into the release engineering pipeline in the future, and whether we can have Coprs be that loosely coupled build environment but still have gated tests and those kinds of things on them. A lot of that is still floating in the air.

So, two-week Atomic releases. Some of these slides and diagrams I actually showed at Flock, and the reason I'm still talking about them is that some are still being worked on; others had to be rewritten, and I'll discuss that. This one I again have to credit Matt Miller for putting together. It's our two-week release cycle, and a lot of it is in place now. A lot of it exists, but we need to get better at it. The build piece is done, but we want builds to happen more rapidly: right now they happen nightly, and we want them to be more reactive. Ralph mentioned earlier the Product Definition Center. You did not mention it? Okay; Dennis talked about it in the talk the hour before this one. Product Definition Center is a system that will be the place where we store information about composes, and it will let us query them and be reactive based on things that happen with them. I think I have a slide about it later, and if I don't, I have failed you all. We want to get to a point where builds happen based on actual changes: we'll know what goes into an Atomic image, and when a compose happens we can trigger tests on it, or rebuild components of the system as necessary based on changes in the environment. Basically, we can take the manifest of what goes into an Atomic image and query the Product Definition Center; then, when we see a fedmsg message come across saying that a new component hit Koji and got built, we know we need to rebuild those images and those OSTrees so that we have the updates. There's a sketch of that idea below.
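A hedged sketch of that reactive loop: watch the bus for completed Koji builds and kick a rebuild when a manifest component changes. The topic suffix and the meaning of state 1 follow Koji's fedmsg payloads as I recall them; the manifest lookup (in reality a PDC query) and the trigger are hypothetical placeholders:

    import fedmsg

    def atomic_manifest():
        # Stand-in: you would query PDC for the image's component list.
        return {"kernel", "ostree", "docker"}

    def trigger_rebuild(component):
        print("would kick off a new Atomic compose because of", component)

    # tail_messages() yields (name, endpoint, topic, msg) from the bus.
    for name, endpoint, topic, msg in fedmsg.tail_messages():
        if topic.endswith("buildsys.build.state.change"):
            body = msg.get("msg", {})
            # In Koji's payloads, a new state of 1 means build COMPLETE.
            if body.get("new") == 1 and body.get("name") in atomic_manifest():
                trigger_rebuild(body["name"])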
The test piece right now is Autocloud, which I'll talk about in a second. The test component was bootstrapped basically from zero; it was kind of invented off to the side for the sake of time, one of those things where we wanted to get this out the door and rolling forward, so a decent amount of that back end is going to be reworked. We're going to work with the Fedora QA team to standardize on Taskotron and get this in as part of the full product pipeline, so that we're consistent with the rest of the test infrastructure and not reinventing wheels as we continue forward. The release piece of this right now is a Python script. You can go look at it. It is really bad; don't think ill of me when you look at it, if you do. One effort in this space that has already started is that we're creating a library, called Fedora Lib Releng, set up so that all of the reusable components, all of the processes and things in Fedora release engineering, go into a library, such that all the scripts in the release engineering repository should eventually just be slim wrappers around this set of functions: we pass inputs in and get the outputs we want. Taking that one step further, as we need to in the future, the back ends of those API points should be able to be disparate systems, or fire off some Ansible task out into the infrastructure, so it can all be more distributed and more parallel. So as we go on, we're continuing to iterate to make that release piece better; the wrapper shape is sketched below.
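Illustrative only: the "slim wrapper around a library" shape described above, with hypothetical function names rather than the real library's API:

    import argparse

    # from releng_lib import mark_builds_stable  # hypothetical import

    def mark_builds_stable(builds):
        print("would mark stable:", builds)  # placeholder for the library call

    def main():
        parser = argparse.ArgumentParser(description="Mark Koji builds stable.")
        parser.add_argument("builds", nargs="+", help="NVRs to mark stable")
        args = parser.parse_args()
        # The script stays thin; all real logic lives in the library.
        mark_builds_stable(args.builds)

    if __name__ == "__main__":
        main()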
Right now, when the Atomic two-week release goes out, we upload the images to the mirrors, the website is updated (which I have something on in a little bit), and the images are automatically uploaded to Amazon EC2 via fedimg, which came up in the previous presentation. Then we have the links on the web, and the email announcement. The email announcement is something we also want to improve: we need to do some introspection into the OSTrees so we can actually do diffs between them and provide a useful changelog, those kinds of things. So we're continuing to iterate, but a lot of this is in place. Bugyou, the bug filer, which I'm pretty sure I'm talking about in a second: we have an automated bug filer in the works right now, so that as the tests run and find problems, it automatically files bugs in Bugzilla.

So here's Autocloud. This is what Autocloud looks like at face value: it gives us a nice web view of tasks that have run and whether they passed or failed. Kushal Das and Sayan did a lot of really great work to enable this, and they did it in record time; I don't think they slept for about a month, and we had this in no time. It automatically tests the qcow2 cloud images and the Vagrant images, both libvirt and VirtualBox, and then Adam Williamson enabled the openQA ISO installer tests, so we also test the ISO installer of the Atomic two-week image before it goes out.

This is the view of the website. It's been updated (Ralph did the work on this) and we're now able to present the updated images as soon as they land. There's a nice little note in the middle there, generated for you, that tells you how old the images are; at the time I took the snapshot, they were only five days old. So we took something that was a six-week release cycle and shoved it down to two weeks, and we're trying to get to a point where we can iterate even faster, to where the subset of Fedora packages that go into Atomic can effectively be whitelisted in the Bodhi update system. What that would mean is that this set of packages has automated tests in Taskotron that we trust, to the point where, when those tests pass, that particular package is marked stable in Bodhi. We are not there yet, and I'm not going to lie to you and say that we have this in place today, but it's getting there. Thank you.

So, Product Definition Center: a repository and an API for storing and querying product metadata. Effectively, a "product" is one of the different editions, in Fedora nomenclature. It's a single source of truth: right now, if you want this kind of information, you have to scrape logs and do all kinds of weird queries. Adam Williamson knows this intimately, and it is probably one of his least favorite things. Ralph did the work on this; it's amazing. pdc.fedoraproject.org: it's up, it's live today, and you can go look at it and query information from it. Here's a quick diagram of what goes into it: the scripts, the humans, and the queries we can make. There's also the pdc-updater audit script, which is great. One of the concerns in the past was: if something goes wrong, or a fedmsg message gets dropped off the edge of the planet, how will we catch that? pdc-updater will do an audit run and provide us eventual consistency, and "eventual" basically means every day; every day it's ensured to be consistent. Even if something happens, it can start over, retry, and pick up where it left off.

So, Docker layered images. This is a big thing. I talked about this at Flock, and the reason I'm still talking about it is that in November, Docker registry v2 happened, so the back end of the system had to be rewritten; it was built on v1, and Docker, as of 1.10 or 1.11, is going to completely drop support for the v1 registry. We figured: if we're going to do this net-new in Fedora space, why even pay attention to the old stuff? So it had to be rewritten, but it is powered on top of OpenShift. It is a build system, and we're using their source-to-image build pipeline, their image streams, those kinds of things. In the future this will do automatic rebuilds of images: if, say, Cockpit is maintaining a Docker layered image in this Fedora space and there's a CVE in the base image, the system will automatically rebuild that layered image for us, so users out in Fedora space constantly get new software updates without packagers having to go in and do layered rebuilds by hand. Anybody who has maintained Docker images will be familiar with that amount of pain. The OSBS and container-build upstreams are fantastic, if anybody is interested in participating in any of that; also part of the OSBS ecosystem is a tool called Atomic Reactor, a Docker build environment that lets you do introspection into container builds. It's very, very cool. We're also going to be working on Pulp integration to do CDN-style scale-out of our registry. Fedora will have a registry for two reasons: one, to be our official distribution point for these artifacts, and two, to allow base image uploads rapidly. For Docker official base images you actually have to load the tarball into Git and do a pull request against it, and a human has to go in and sign off; it's a whole thing, and they don't let us do it more than once a week. For Rawhide, we want a new base image to land every time a Rawhide build happens.
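For a concrete sense of the registry side, here's a minimal sketch against the standard Docker Registry HTTP API v2; the /v2/_catalog and tags/list calls are part of that spec, but the registry URL is a placeholder:

    import requests

    REGISTRY = "https://registry.example.org"  # placeholder URL

    repos = requests.get(REGISTRY + "/v2/_catalog", timeout=30).json()
    for repo in repos.get("repositories", []):
        tags = requests.get(
            REGISTRY + "/v2/" + repo + "/tags/list", timeout=30,
        ).json()
        print(repo, tags.get("tags"))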
Here's a quick overview of the service. The entry point is the Fedora layered image maintainers: they have their Dockerfile and their app's init scripts and docs and whatever in dist-git. They'll do a fedpkg container-build, which fires off a build in Koji. Koji will send it out to... oh, my line went the wrong way on that one, sorry. It actually goes out to OpenShift v3 and Atomic Reactor in OSBS, which then uploads directly into the registry, which is where users and contributors will consume from. In the future, the magical future, we want PDC integration; we want Taskotron tasks tied to this so we can gate these images and make sure they're functional and passing before we actually ship them; fedmsg handlers so we can be more reactive; and other things. There's a client-side sketch of following those builds at the end of this section.

So, what's next? Containers: we're going to support alternate container formats. As was mentioned in the ImageFactory talk, somebody from CoreOS is going to contribute the ability to do rkt containers in Koji. runC; freight agent; the workstation based on Project Atomic tech, something OSTree based, which was also talked about in Dennis Gilmore's talk at a little more length. Nulecule apps: we want to find a way to cater to multi-container applications being built together in the build system, so you don't have to submit each piece one by one. And "new hotness" I put in just as a vague thing. The automated build pipeline is something we're working on, and I also have a dream of more release engineering tasks being self-service. There are a lot of things requested of release engineering in the Trac instances, and I want those to be self-service. My idea for that right now is Ansible Tower: a web portal front end where, if you're in the proper FAS group, tasks are presented to you such that you can just click and run them on your own behalf, and it gives us good logging and the ability to do fedmsg handling, those kinds of things. With Ansible on the back end of that, we're actually working with internal Red Hat P&T DevOps RCM to find places where automated tooling can be shared, so we're not reinventing these kinds of things and more people are involved in developing the entire process. I haven't been in Fedora release engineering that long, about nine or ten months, and I don't know if that's ever been done before, but I'm looking forward to it moving forward. For the most part, that's what release engineering is up to on the infrastructure side, and those are the plans for making all of this better, more approachable, and more adaptable to new technologies in the future.
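As promised, a client-side sketch of following those container builds with the koji Python API; the hub URL is Fedora's, but the task method name for container builds is an assumption here:

    import koji

    session = koji.ClientSession("https://koji.fedoraproject.org/kojihub")
    # listTasks filters on task attributes; "buildContainer" is assumed to
    # be the method name OSBS-backed container builds use.
    tasks = session.listTasks(opts={"method": "buildContainer", "decode": True})
    for task in tasks[:5]:
        print(task["id"], task["state"])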
Any questions? Yes. We do not, but we have our own Bootstrap theme that we're going to be standardizing on, called bootstrap-fedora, based on Bootstrap; it's like PatternFly, but it's not the same tech. Our designer, Ryan Lerch, just thought PatternFly didn't match the Fedora theme. Yeah.

So, do we tend to self-assign? The question is how the apps and services team is organized: if you file a bug on a particular system in our infrastructure and you don't get a response, how should you escalate it? We have a very loose and informal organization of assignment, and it's largely voluntary; people take up the different systems they find themselves primarily responsible for, but all of us, at least nominally, can fix any bug in any system. That's the goal, right? So you could ask anybody to look into it. And I would say it changes, too: when Bodhi 2 was released, a number of us worked exclusively on Bodhi for a month, but afterwards some of us moved in other directions, because we have a lot of different apps.

The question was what kind of monitoring we have in place: if something were to die, like a hard disk, how long does it take for someone to react? Others probably have a better perspective on the daily ops of that; I never have to deal directly with anything related to disks and hardware, but we have other members of the team who do. We use Nagios for a lot of things, and we have centralized logging infrastructure where people can detect some things early and actively watch those logs. For what it's worth, we also use collectd to monitor system performance, and that's all public; everyone can go and look at it.

The question was whether Fedora is working on reproducible builds, and if so, whether we're working with the Debian folks on reproducible builds. I think that's a vocabulary overload. What we mean by reproducible is that if you have a set of inputs to a compose, you get the same set of outputs in terms of build artifacts and images, those kinds of things, making sure the process is reproducible. Binary or ABI reproducibility, those kinds of things, is not something we're currently tracking or chasing. Well, I shouldn't say we're not tracking it: there are a number of people in the Fedora world interested in that, but nothing planned in Koji to the best of my knowledge. Mike, do you have a... okay, all right. So Mike here is half of the Koji development team; I mainly asked him because if there were something happening in the build system to support that, I assume he would know. Thank you.

Okay, we're out of time. People who asked questions: if you don't currently have a scarf and would like one, please come get one; we have three of them to give out to question askers, and I know a few of you already have one because you've previously asked questions. Thank you for your time.