So I think it's time to start. My name is Adam Miller, I work in Fedora Release Engineering; I try to do build pipeline tooling and automation. One of the things I work on as much as I possibly can, which is admittedly not nearly as much as I'd hoped to have time for, is Fedora Two Week Atomic Host. So, today's topics. We're going to start with the background: I want to first define what release engineering is, just to level set on why we do some of the things the way we do, because it doesn't always seem to be the fastest way to accomplish things, but there's a method to the madness. Then, how Fedora releases have traditionally worked on the back end, how the six month cycle traditionally happens, and why we wanted to break Atomic Host out from that.
We're going to talk about the Fedora Atomic two week release goals as they were initially set out, then discuss how much we've accomplished and where we want to go; along the way I'll give a little context on where we have not lived up to what we had hoped, and we'll always try to do better. So, background. Really quick: release engineering is reproducible, auditable, definable, and deliverable software. That's it in a nutshell; you can read the full definition if you want. I also want to quickly define what a compose is for the sake of context, because I'm going to talk about it later. A compose, in Fedora release engineering land, is a collection of primitive builds, which for Fedora is effectively RPMs, plus the creation of deliverables, which are our ISOs, our various virtualization and cloud images, OCI based images, and a combination of these things, such that they can be released as a unit. When the release happens we then have the images users will use, installable media, those kinds of things, and the sources that created them, so that they can be correlated. Fedora releases in the past have been time based: a six month process. There's a process by which community members submit new features, there's a review process, there's project planning for how that all goes through. The proposed features then go to the Fedora Engineering Steering Committee for review; they're accepted, or sometimes sent back for more discussion, those kinds of things, to try to keep the release-to-release change rapid enough to continue to be innovative and adopt new technologies, but without breaking the world for users. Then there are the nightly Rawhide composes. Rawhide is the development branch; it's basically the summation of everything that has built in the last 24 hours in the Koji build system. If it built, it ships in Rawhide.
There are discussions around ways to gate some of that, do some initial testing, and a lot of groundwork has been done there; there are just things that need to be enabled to allow that to move forward. The goal eventually is to make Rawhide usable, but for the sake of understanding what Rawhide is at its nature: it's whatever has built. Then there's a branching scenario we go through. The reason we call it branching is that all the Fedora source content is versioned; by source content I mean the spec files for the source RPMs, plus a reference to the checksum of the actual tarball for the source that goes into them. Once upon a time that was CVS. There was nothing better. Times were tough. Now it's dist-git. The branch happens because we actually create a new branch in dist-git: every package that goes into the distribution has a branch, and each branch correlates to a release. They are numbered just like the releases are, except for Rawhide, which, being the development branch, is the master branch in dist-git. So branching happens: we branch for the next release of Fedora. Composes then begin for the branch; we start doing composes so that we have something similar to what would eventually be released, and that can be tested as a release unit. We have milestones, which are our alpha, beta, and final. There are freeze periods around those, so that changes coming in during a freeze are only changes that fix bugs which must be resolved to comply with the criteria for that milestone. What is acceptable going into each milestone is defined by criteria from the QA team. Sorry, I don't know how synonymous or interchangeable QE and QA are for most people; I use them interchangeably in my head a lot.
So the Fedora QA team defines these, and we make sure, again, to try to allow the influx of rapid innovation in technology, but in a way that doesn't break everybody. So: an updates policy, and criteria for each phase of a Fedora release. There is a specific defined set of these for each release, whether it be Fedora 26 or 27; we'll just say 26 for the sake of argument, because 25 is the current stable. There's a set of criteria for the branched, pre-Bodhi phase, because once we branch, builds and composes happen, but the branch isn't necessarily set up in Bodhi yet. I probably should have put something in here about Bodhi: Bodhi is the update system and gating mechanism we use to distribute content to users. It's where CVE data, changelogs, those kinds of things go. It's got an RSS feed. It's where the update repositories are actually created and then pushed out. So: branched pre-Bodhi, then the beta freeze; once we get toward the tail end, that's where Bodhi is introduced and changes have to be sent through it; then beta, pre-release, and then stable. So there's a set of criteria for each of these, and a change process that must be gone through. Updates happen for non-end-of-life Fedora releases; we don't update things that are end of life. They're dead. General availability: once we have a release candidate that passes the QE slash QA release criteria for a pre-release, it is promoted to a final release and we get it out the door. So, goals for Two Week Atomic, rounding back to the title of the talk. We wanted to move Fedora Atomic Host outside of the six-month release cycle. We wanted it to be able to iterate faster as a deliverable than the rest of the distribution. We wanted to find a way that updated components brought into the distribution could be included in something that is deliverable, and tested before it is delivered. We wanted to create a fully automated pipeline for the entire process, which we really thought we'd have by now.
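As a rough mental model of how Bodhi gates an update on tester feedback, here's a toy sketch in Python. This is not Bodhi's actual code: the function name is made up, and the karma thresholds shown are illustrative defaults (real updates configure their own).

```python
# Toy model of Bodhi-style karma gating; thresholds are illustrative,
# not Bodhi's real defaults.
def bodhi_gate(karma, stable_threshold=3, unstable_threshold=-3):
    """Map accumulated tester karma to an update's fate."""
    if karma >= stable_threshold:
        return "stable"      # enough positive feedback: push to stable
    if karma <= unstable_threshold:
        return "unpushed"    # enough negative feedback: pull the update
    return "testing"         # not enough signal yet: stays in testing
```

The point is only that content sits in the testing repository until feedback (or time) satisfies the release criteria, rather than flowing straight to users.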
But real life happens. So the goal is that OSTree builds would be based on changes to RPMs: there would be something that keeps track of all the RPMs that are supposed to be inside of an OSTree, and in the event that one of those RPMs gets updated, it should initiate a rebuild of the OSTree. We have kind of a compromise for this at the moment: there is a Bodhi component that will kick off an OSTree rebuild, and the OSTree tooling is intelligent enough to know whether or not it needs to be rebuilt, so if it doesn't need it, it just no-ops. But we were hoping to get this built into the pipeline, and this goes on the tail end of some other initiatives to allow basically everything to follow a similar pattern, at least in the Rawhide space, so that we're not having to do things on a time basis; we're able to do them on a change basis. Effectively: have the influx of RPM changes result in an OSTree rebuild, and a compose be triggered in the event of that change, so that we get a new set of deliverables, a new set of images, those kinds of things, that can then be tested from there. Once the compose is finished, there's the automated QA process, which is currently Autocloud, which I'll mention in a minute; the goal would be to move this into Taskotron so it's integrated with all the other tooling, ResultsDB, those kinds of things, so that we can produce and publish results as part of our process, reproduce them, and deliver the various information that is essential or potentially interesting to the community. And I want to preface that: when I say the community, I generally mean the user community. The entire community as a whole comprises, you know, the release engineering group, the QA group, the Atomic Working Group; there's not some weird cabal that sits in a room.
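The no-op behavior described above can be sketched like this; the function names and data shapes are illustrative, not the real rpm-ostree or compose-tooling API:

```python
def ostree_needs_rebuild(tree_pkgs, repo_pkgs):
    """True when the resolved package set differs from the last commit.

    tree_pkgs: set of name-version-release (NVR) strings in the most
               recent OSTree commit.
    repo_pkgs: set of NVRs the RPM repos currently resolve to.
    """
    return tree_pkgs != repo_pkgs

def maybe_compose(tree_pkgs, repo_pkgs, compose):
    """Kick off a compose only when an RPM change actually altered the
    input set; otherwise no-op, like the OSTree tooling does."""
    if ostree_needs_rebuild(tree_pkgs, repo_pkgs):
        return compose()
    return None  # nothing changed, nothing to do
```

So a rebuild request that resolves to an identical package set costs almost nothing, which is why the Bodhi-triggered compromise is workable even before full change-based triggering exists.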
So we wanted to have a testing dashboard web UI that's publicly available, and that rolls into having the results in the Taskotron stuff. We had, which I'll show in a minute, a kind of improvised intermediate Autocloud thing that does satisfy that. We wanted to consume test results to determine a release. We wanted automated content signing and release to the mirrors, so that effectively nobody pushes buttons unless something goes wrong; don't put a person at a keyboard. And then, presentation to users: we wanted a website update such that the website would automatically consume messages about the change in status for a new release, and that information could then be automatically presented to users, so that, again, nobody has to manually go put hands on keyboards, which actually does happen with the traditional six month release cycle. That said, the six month release also normally comes with a new website design, new logos, those kinds of things, so for that component there is always going to be some churn; but we wanted at least some way to have code in the website so that the textual information, the links, some amount of information around the release, could be programmatically updated. And then a changelog email sent to the various mailing lists that care: the Fedora cloud list, the Project Atomic upstream, project-atomic-announce, at least those kinds of things. This was the diagram of the goal, the original desire. Basically we have the trigger on OSTree change, which causes the build; the build creates all of our different artifacts; those things then go over to testing, image tests, those kinds of things; things go down here, and then... oh, I forgot to put it as a bullet point before, I'm sorry: auto bug filing.
So in the event of a failure here, we wanted the ability to have the system automatically file a bug against the thing that failed. From there, tests are marked as pass or fail for the compose, and then we would have an automated release. Right now it would be time-based, two weeks; we may or may not get to a point where we do that more or less frequently. Then from there, have the test results on the web-based dashboard and present the release to the user. So this is just basically the workflow, and this was the design. This graphic is from Matt Miller, the Fedora Project Leader; he has way better Inkscape skills than I do, so he mocked this up. It's in our wiki document for the initial desires and goals, along with some of the documentation for what we've done and where we've gone, those sorts of things. The final page of my slides is a bunch of references; you can go check all this stuff out. So, progress so far. We have a new dedicated Atomic Host nightly compose: there is something doing a dedicated compose just for the Atomic piece and the cloud images that come out of it, separate from the rest of the distribution. We have automatic generation of OSTrees for Bodhi updates, which I mentioned. So for example, if you have an installed Atomic Host and you do an atomic host upgrade, it's going to pull from something that was recently built. That is planned to change in the near future, and it's something that was actually desired quite some time ago; it was a request, and from the release engineering standpoint, we just failed to deliver. But there is somebody working on it now, and that change is coming: we're going to move to where the update tree actually mirrors, in step, the two week release, and the constantly updated stream will become an opt-in.
So if people want to stay with what is the released deliverable, their OSTree will always match what the latest OSTree for the download image is; that's the cadence they'll follow. Again, if there is some kind of critical CVE, we will obviously do a new build and release for that, those kinds of things. Then Autocloud, for fully automated testing. Autocloud was kind of a band-aid, for lack of a better term, and I don't mean that in the sense that it was duct tape and bubble gum without a lot of engineering effort put into it. But the desire was to have this fully automated system that could integrate with OpenStack, could do libvirt, and Vagrant boxes for both libvirt and VirtualBox, and we needed it up in something like two weeks. As many people who do software development know, trying to integrate something like that into an existing system and an existing workflow is often more work than bootstrapping it off to the side. So the plan was to bootstrap it off to the side for a quick win, and then migrate later. We had the quick win; it exists, it's out there, it's publicly available, and I'll show some screenshots of it and all the magic that goes along with it. In the future, we're going to integrate it with Taskotron, so again we have all of it unified with the initial goal. The two week compose and release: we do have this, it does exist; we accomplished that piece of it. It's just not as fully automated, end to end, as we had hoped. So it's mostly automated: we need to finish the auto-signing work, and once auto-signing is done, we will actually have a fully automated release process. Not quite the original design, but it will be a fully automated release, and then we'll continue iterating on it and continue to get better. So we did actually accomplish a decent amount of the deliverable; I don't mean to sound down on it.
The Atomic community, the Atomic Working Group, has done a lot of really good stuff here around the testing, the validation, those kinds of things. On the auto bug reporting, again, I forgot to mention this: there is an auto bug reporting tool that will file bugs based on test failures, and by nature of what it is, it is constantly being iterated on to improve its ability to detect failures, correlate those failures to their source, and then upload relevant information to the bug report. So, Autocloud. Autocloud is compose based. If you go to the web page, apps.fedoraproject.org/autocloud, it will show an overall status, whether a compose is complete or running, those kinds of things, and then an overview of how many artifacts from that compose pass or fail. When I say artifacts, I mean deliverable content: images, those sorts of things. Oh, I just realized I need a screenshot of OpenQA, because that's cool as crap. Anyway, maybe I'll switch these around. (Which one? The Autocloud ISO test; sorry, I mean OpenQA. Naming things: one of the hard problems in computer science.) Anyways, these are each of the images that come out of that compose: we have our Vagrant boxes, libvirt Vagrant and VirtualBox, and the qcow2; the qcow2 itself is tested directly with libvirt. Then we have the output from those, which can be viewed over here: that's just a raw textual log of all the tests that were run and their output, and then a quick brief summary. Yep, that's a bug. Yeah. I probably should have pointed that out, sorry: the family "Base" is supposed to be the base cloud image, versus the Atomic cloud image.
Because not that long ago, the Fedora Cloud Working Group transitioned to become the Fedora Atomic Working Group, because we as a collective whole are aiming to target Atomic Host and Project Atomic as a container ecosystem of technologies moving forward, because we believe that is where cloud in general is headed as a general technology solution to the problem of what people think of as cloud: running services at, my favorite marketing term, web scale. So we're aiming for that, and as a side effect, various things have changed and not everything is updated yet. So yeah, sorry, we'll get there. The compose name is now Fedora-Atomic where it used to be Fedora-Cloud, and the family was Base for the base cloud image and Atomic for the atomic one. So, this is the message I pulled this from. We have a service called datagrepper; it stores all the fedmsg messages. For those who don't know, fedmsg is the message bus that the entire Fedora infrastructure integrates into and sends messages about all sorts of fancy things over, and you can query them however you want in datagrepper. It has a very pretty UI. So this is the JSON data of the message that gets sent by Autocloud... oh no, I'm sorry, I pulled the wrong screenshot. Okay, well: Autocloud will send a message to the bus and datagrepper will keep it. Every two weeks, when the release engineering script gets run, and it has to be run by hand right now because somebody has to sign things and type in the passphrases for the signing keys, this is the resulting output. What the resulting output says is what our compose is and the various images: each one's name, its release image URL, and that is consumed by the website. So when we've just released, these different data points for each deliverable image or ISO are sent out and the website auto-updates.
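To make the "website consumes the message" idea concrete, here's a sketch with a hypothetical payload. The field names, compose ID, and URLs below are invented for illustration; the real schema is whatever the actual releng message in datagrepper contains.

```python
import json

# Hypothetical releng message body; the real field names and schema are
# whatever shows up in datagrepper, not this exact shape.
raw = """{
  "compose_id": "Fedora-Atomic-25-20170209.0",
  "images": [
    {"name": "Fedora-Atomic-qcow2",
     "url": "https://example.org/Fedora-Atomic-25.qcow2"},
    {"name": "Fedora-Atomic-Vagrant-libvirt",
     "url": "https://example.org/Fedora-Atomic-25-libvirt.box"}
  ]
}"""

def website_rows(message):
    """Flatten the per-image data points a website template would consume."""
    return [(img["name"], img["url"]) for img in message["images"]]

rows = website_rows(json.loads(raw))
```

The website side then just renders those (name, URL) pairs, which is why nobody has to hand-edit links for each two-week release.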
So you'll notice right here, this message: "Atomic Host is built every two weeks or so. These are the latest official Atomic Host images, produced two days ago." Down here in the page there's a considerable amount of information; I tried to include more in the slide but the font got tiny. Down here is a little more info, and the URLs and things update automatically, and then there's a little graphic over here. But this is the big thing: you can very quickly get an idea of how recent the images you're about to download are. We recently added the thing... I think Dusty might have done it. Did you do the thing where the website waits 24 hours? Somebody did. I bet Patrick did it. Anyways, the moral of the story is we had this weird issue because of the nature of ISOs and cloud images: we would push to the master mirrors, and because of the way MirrorManager works, there was no real intelligence in the metalink to know for a fact that the mirror it redirected you to had the updated images. So the website now waits 24 hours, because 24 hours is the longest check-in time we allow for public mirrors, before presenting the new images. Something we still need to improve is the release email, because the email goes out with the master mirror push; but we'll get there. So yeah, we advertise this to the users every two weeks, that we have this new thing and all the deliverables and everything. So, future plans. (What is this doing? Stop it. Now. Okay. Yeah, sure, ask a question; I don't know how to interact with this thing, but feel free. I've never done this before.) So, future plans: automated signing. Most of the groundwork for this is done. Patrick, and I won't even attempt to butcher his last name...
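The 24-hour wait can be expressed as a tiny policy check; this is a sketch of the rule described above, not the website's actual code:

```python
from datetime import datetime, timedelta

# 24 hours is the longest mirror check-in interval allowed for public
# mirrors, per the talk, so that's the propagation window to wait out.
MIRROR_SYNC_WINDOW = timedelta(hours=24)

def safe_to_advertise(pushed_at, now):
    """Only surface new images on the website once every public mirror
    has had a chance to sync after the master mirror push."""
    return now - pushed_at >= MIRROR_SYNC_WINDOW
```

The same check would also fix the release email problem mentioned above, since the email currently goes out at master-mirror push time instead of after the window.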
Patrick, fantastic human being, brilliant developer, member of the Fedora Engineering group, created an auto-signing mechanism so we can actually send signing requests. I think it's based on Kerberos authentication and, what's the hardware thing, TPM? I think it's TPM that locks it to a specific piece of hardware. Anyways, it's really cool. He has a thing called Robosignatory, and Robosignatory lets us send in a signing request and then signs on our behalf. So: automated signing, so that we can have the fully automated release, which is bullet point two, because we want this to only need eyes or hands on it if something fails. We want Fedora Infra Automator, which is something that originally spawned from releng automation; after talking with the infrastructure group, this may or may not make its way into more general use. But for the sake of releng automation, we want the ability to trigger actions, anything you can express in an Ansible playbook, based on a fedmsg message. So if a fedmsg comes in with a payload of information, that payload is injected into an Ansible playbook as variables, and that playbook triggers some action within the infrastructure. That's a whole other change plan, but the upstream project it's based on is called loopabull. We also want to pull Kubernetes out of the OSTree. Right now Kubernetes is baked in; we want to pull it out and have it run as a system container. Number one, this will decouple even more of the base system from the application runtime, and allow those two things to iterate independently. Number two, it will allow users who are interested in alternative container orchestration frameworks to pursue those. Yes? (Audience question about interest in alternative container orchestrators.)
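The payload-into-playbook idea can be sketched roughly like this; the argument layout is an assumption for illustration, not loopabull's real implementation:

```python
import json

def playbook_command(playbook, payload):
    """Build an ansible-playbook invocation loopabull-style: the fedmsg
    payload is handed to the playbook as --extra-vars, so each field in
    the message becomes an Ansible variable the playbook can act on.
    (This argument layout is a sketch, not loopabull's actual code.)"""
    return ["ansible-playbook", playbook, "--extra-vars", json.dumps(payload)]
```

So a message-bus event with, say, a compose ID in its payload could drive a playbook that acts on exactly that compose, with no human in the loop.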
People are using it; people ask me for instructions on how to set up Swarm, which I posted. So apparently yes. Mostly, actually, the initial interest was to be able to easily alternate, to pick and choose between Kubernetes and OpenShift, for people who want the full-fledged dev experience versus those who have familiarity with just upstream Kube and want to do that. That was the initial motivation; I wasn't aware of interest in Swarm. That's cool though; I mean, whatever, bring it. Actually, I take that back: somebody was talking about Deis for a little while, but that's also Kubernetes based. Still, it's another distribution of it. And that's actually really interesting for system-containerized cloud provider agents and those kinds of things, VM agents; insert your virtualization solution of choice there. So yeah, that's good. As a side note, not initially part of the original plan: we have discussed wanting to put more things in system containers and trying to utilize system containers more; it's just that the Kubernetes stack was the initial let's-get-this-thing-working plan. And that's the decoupling. Another thing is, phase one is getting this done; we would then like to pull the container runtime itself out, because what if, in the future, somebody wanted to run rkt underneath their Kubernetes, or wanted to run CRI-O as their backend? As Fedora, we try to be as unopinionated as possible. Why not? And there was somebody, Project Kolla I think it was: they're running OpenStack on top of Kubernetes on Atomic Host, and they needed something on the backend, either more ability to mess with Docker, or they were building their own OSTrees for something, and they were fiddling with both Kubernetes and Docker.
So yeah, those kinds of things, absolutely: we would love for Atomic Host to be as flexible and capable as anybody in the community wants it, in whatever wild configuration they want. The main thing is, if we get to a certain point of demand for something we're not currently working on, we will, as politely as possible, request that they come and help. There are only so many of us, and only so much throughput. Actually, some of this talk has had kind of a down tone, and I apologize for that, I didn't mean it to; but we as a working group collectively have a problem where everybody wants to accomplish really cool things, and then we get busy with other stuff and don't get it done as fast as we'd like. And some of it is stuff we have to do. I'm as guilty of this as any of us: I just gave a talk about the layered image build system for Fedora, and if anybody knows the history of that, I like to tell the story because I jokingly throw Matt Miller under the bus about it. I think it's cool, but a lot of other people don't actually think the build system is cool; they just want it done. So it's largely a victim of circumstance, really. But I will qualify that with the fact that we have gotten better in the last couple of months, we have plans to continue to get better, and a handful of us who have been really locked up in other work are wrapping that work up and are going to be focused. So I really hope we'll be able to start knocking out the future plans in more rapid succession than we used to. Next: OverlayFS as the container storage default. For those familiar with the different storage drivers in Docker, we want to use overlay2 by default; there is now SELinux support for the overlay driver, which is fantastic. So when I'm not wearing my Fedora shirt, I wear my setenforce 1 shirt. (Do what?)
No, when I'm at conferences I like to wear it: number one, I'm supporting Fedora; number two, I scold people who setenforce 0. Okay. So, Kubernetes versions, sorry; Dusty would know more. That's actually something we're toying with the idea of: creating an extra tag in Koji and literally letting all of the components that go into Atomic Host move at a different cadence than the rest of the distro. Then we can do things mid-release, and Atomic Host would stop getting a version number tied to a Fedora release and instead be date-time stamped. So we would version it something like 2017.01.something, I don't know, pick one; that would just be the January 2017 release. And we would get to a point where we can ignore what current stable Fedora is doing: we can pull things from Rawhide that make sense, that are actually stable, and that we can run through the validation testing, those kinds of things. The desire to do that comes from the fact that Atomic Host itself is a very limited package set, so we can, in theory, make very informed decisions about what we're moving that fast on. Hopefully we'll be able to do that within the next year's time; there's a bit of work that needs to be accomplished before we could enable it, but that is an idea we're hoping to achieve at some point. Beyond that, we want Kubernetes and OpenShift single- and multi-node automated testing. That is: first spin up Atomic Host and validate it; once that passes, deploy Kubernetes on it, deploy OpenShift on it, validate that the clusters are working; then deploy applications on top of each and validate that the applications we deployed are functioning as expected.
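The proposed date-stamped versioning might look like this; the exact format was only floated as a proposal in the talk, so the function and serial-number convention here are illustrative:

```python
def atomic_version(year, month, serial=0):
    """Date-stamped Atomic Host version like '2017.01.0', decoupled from
    the Fedora release number so the host can ship mid-release.
    (The format is the talk's proposal, not a finalized scheme.)"""
    return "{:04d}.{:02d}.{}".format(year, month, serial)
```

The serial component would let multiple releases land in the same month (for example a critical CVE respin) without colliding.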
So that at the end, the goal would be that we can say we have a full end-to-end environment for anybody who wants to spin up a container orchestration cluster, at a rapid cadence, that is continuously being released and iterated upon. That's the grand scheme goal. There's an objective we came up with during a Fedora Cloud Working Group Fedora Activity Day a while back, called Project FAO, which is kind of an enhancement of some of this: the idea of making that something that we as a working group advertise and deliver to the user community in a way that has good documentation, a good track record of the automated testing, those kinds of things. I know we've been taking questions as we go, which I appreciate; I prefer it that way. Yes? (Audience: This is a little bit of a meta question. I know Atomic is considered extremely important by certain groups, people, community leadership and staff, right? And I look around and this room is only about half full. What I know from my perspective is that Atomic hasn't been promoted that heavily as part of the traditional Fedora way of doing things.) Correct. (Are we going to be pushing it more, making it more prominent?) I think a lot of that is: you need to build it first. And I don't mean that as a slight; I mean I think we as a working group have not gotten to the point we wanted to reach before we blitz everybody with "this is the magical future." We have been advertising, we have been talking to people; we go to conferences, we do meetups and things, we try to let people know what's going on and show off the tech, because it's really, really cool.
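Stepping back to the end-to-end testing goal described a moment ago (validate the host, then the clusters, then applications on top), the short-circuiting staged flow could be sketched like this; the stage names and stub callables are illustrative, not an existing test harness:

```python
def run_pipeline(stages):
    """Run ordered validation stages, stopping at the first failure so a
    broken host image never gets a cluster or apps deployed on top of it.
    Each stage is a (name, zero-arg callable returning bool) pair."""
    results = []
    for name, check in stages:
        ok = check()
        results.append((name, ok))
        if not ok:
            break  # later stages depend on earlier ones passing
    return results

# Illustrative stage order from the talk; the callables are stubs.
demo = run_pipeline([
    ("boot-atomic-host", lambda: True),
    ("validate-host", lambda: True),
    ("deploy-kubernetes", lambda: False),  # pretend cluster bring-up failed
    ("deploy-app", lambda: True),          # never reached
])
```

Failing fast matters here because each stage is expensive (VM boot, cluster bring-up), and a failure early on makes every later result meaningless anyway.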
But as a group we have this idea of where we want to go; we want to at least get to milestone one before we do that kind of push. But yes: in future releases of Fedora, and also on the website, we have ideas around how to properly advertise it, and it will be more front and center. For the Fedora 26 website, and actually it may already be true for F25 on getfedora.org, it's more front and center now. And, really quick, removing the Atomic Working Group hat and putting on the release engineering hat: we have spoken with the Workstation Working Group, and one of the things they want to do is deliver Flatpak desktop applications as OCI images, so that we can have a similar build pipeline and transport mechanism, and they can maybe sit in the registry; that way we only have to solve distribution of images once and building of images once. We met with them three days ago, a handful of release engineering folks and a handful of the Workstation group, to scope out at a high level what needs to be done. There are about three things they need to go off and investigate around how they really want to build some things and put them into OCI. But the goal is to share as much of the build pipeline as possible from a release engineering standpoint, while, as the Atomic Host working group, we target more container-orchestrated cloud deployments. So almost server side, but cloudy server side; it's raining there. (Also, I've been told not to call it Atomic Workstation. Challenge.) There's the whole question of us versus upstream, and where that infrastructure lives; it's this whole other question. So yeah, it's spread out. But as far as what we should promote, I would like to get to the point where we can promote it.
Because there are all these new features in the container frameworks that this relates to. So we need both; it's just hard to do both. Yeah. And I think once we get to the point we originally wanted to reach — we just didn't deliver the way we had planned — it will be easier to do those kinds of things. And admittedly, CentOS has been able to iterate on certain things faster, because they don't make a distro from scratch; they build everything on top of the distro. Well, faster and slower, really: slower for that part, but faster in that maybe once a day they build a new tree every time new updates land in a release. There was an auto-building update stream like that in CentOS earlier, basically because we had a gap: we have a slow-moving stream, and outside of a deliberate change, updates should just keep working. This is why I asked the question about prioritization earlier — because, as you say, we have a few people working across projects. My agenda behind asking is that, as Fedora QA, I have followed these projects, but it's still not officially prioritized that much. Well — so, Colin, the work Patrick's doing is going to satisfy both the faster and the slower streams, correct? Okay. Fair. As for why, when we focus on doing that — I know what you mean: CentOS is further along in providing services to external projects. Okay, let's put names on it. Taskotron right now was built with the focus of integrating with Fedora's previously existing release process, right?
It has things that were designed around the six-month process, in a lot of detail. It was never really built with the intention of providing services to external projects; that was never the focus. Whereas — to bring a couple of points together — CentOS CI doesn't really have to do much testing on the distribution itself, so they were able to build something that's a service an external project can use to test their stuff on the same footing, and that's not something you have to ask the community for. And, to properly state my previous point: it's not that CentOS doesn't make a distribution, it's that they don't make a distribution from scratch. That provides liberties, because you don't have to have a testing infrastructure for 20,000 packages; there's a realistic expectation that they're in pretty good shape when the source lands. If we want to really get down to brass tacks, there's a twelve-billion-dollar market cap that says there's a reasonable expectation it's going to work when the source lands. We, on the other hand, have to create the entire operating system, and that's what our system was built for. So yes, creating something that is more easily iterated upon from an upstream point of view — we need to get there, I don't disagree — but we can't just abandon what's there right now. We are out of time; do we have a last question before we wrap up? Yes. Sorry, repeating the question: for the automated OSTree builds upon changes to RPMs, if there are ten RPMs that land in the stable repository, does it build ten OSTrees? It does not, because it doesn't actually run on a per-message fedmsg input flow.
It runs on the Bodhi update flow, and groups of stable RPMs are batched into daily drops out to the mirrors. If we did that kind of rapidly successive churn, the mirrors would stop mirroring our stuff. Also, it wouldn't be very useful. So — I got the little flyer that says that was our last question, and we've got to roll. If you want, we can chat in the halls. Thank you.
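The batching behavior described above can be sketched roughly like this — a hypothetical grouping function, not the actual Bodhi or releng code: events for RPMs reaching the stable repository are grouped by the UTC day they land, and one OSTree compose is produced per batch rather than one per RPM.

```python
from collections import defaultdict
from datetime import datetime, timezone

def batch_for_compose(update_events):
    """Group (unix_timestamp, rpm_name) stable-update events into one
    compose batch per UTC day. Ten RPMs landing on the same day yield
    one OSTree compose, not ten, mirroring the daily-drop behavior."""
    batches = defaultdict(list)
    for ts, rpm in update_events:
        day = datetime.fromtimestamp(ts, tz=timezone.utc).date()
        batches[day].append(rpm)
    # One compose per day, covering every RPM that went stable that day.
    return {day: sorted(set(rpms)) for day, rpms in batches.items()}
```

With three updates where two land on day one and one lands the next day, the function returns two batches, i.e. two composes instead of three; mirrors then only have to pick up one new tree per day.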