All right. Hi everybody, my name is Mike Bonnet, and I'm going to be talking about Factory 2.0, Fedora, and the future. I'm excited to be here, and I'm excited to be giving a presentation in shorts — it felt pretty appropriate for the venue and the time of year. I'd like to keep this interactive. I know you're probably a little woozy and tired from lunch, so if you have questions, just shout them out; I'll repeat them and answer them as we go, and there will also be time for questions at the end. I'm an enthusiastic interrupter myself, so it's only fair that I get the same in return.

A little bit about me. I've been at Red Hat for 15 years. That seems like an incredibly long time, but a lot has changed in that time, so it's stayed a pretty interesting place to be. I started in professional services and I'm now in release engineering. I'm one of the authors of Brew and Koji, and the tech lead for the DevOps development pillar. And I'm a hiker — that's me harassing an octopus, but I assure you no animals were harmed in the making of this presentation.

So I work for Red Hat — what am I doing talking at a Fedora conference? Obviously there are a lot of Red Hatters here, but the reality is that all the innovation in the infrastructure space is happening in Fedora, or should be happening in Fedora. So we're going to talk about how we're collaborating with Fedora infrastructure, and how we're trying to unify what's going on in the infrastructure space.

How many of you here have heard the term Factory 2.0? A lot of Red Hat people — OK.
That's cool, so I don't have to go too in-depth about it. I'll say really briefly, for the people watching online or for posterity: it's an initiative to re-architect the Fedora and Red Hat software pipeline, to add new capabilities, to increase automation, and really to bring in modern technologies. A lot of the tools we use have grown organically over the past two decades. That's enabled us to do a lot in that time, but it's also caused us to accumulate a lot of technical debt, and it's made it harder to innovate. People started noticing this — there were problems people ran into when trying to release new products or do new things — and Factory 2.0 really grew out of the recognition that there were some fundamental issues, that we needed a more holistic approach to how we build and release software, and that we needed to look hard at the tools we use to do it.

Out of that came an initiative to unify the tools and processes where it makes sense. I say "where it makes sense" because we have to recognize that Fedora and Red Hat do not always have the same constraints or the same objectives in how we release software. There's a lot of overlap, so let's share the tools and processes where it makes sense, but keep the flexibility to diverge where it's beneficial for both sides. A central part of that is doing things in Fedora first: we want to enable innovation in Fedora, as Matt talked about this morning.
That's really where we're seeing the innovation — that's where Fedora lives — and it gives us the opportunity to try out a lot of new things and make sure we're moving in the right direction.

Some guiding principles of Factory 2.0. Microservices — everybody loves microservices. They enable us to iterate faster on individual components and to replace things as necessary; they make it simpler to make change, and that's really what this is about. An event-driven architecture: everything is hooked up to a message bus, both in Fedora and internally. There are some differences in the implementations, but fundamentally it enables tools to be more responsive, to react to things more quickly, to get out of a polling model and do things in real time — and that's been a real benefit. And then automation, and more automation, everywhere. Let's get humans out of the loop as much as possible, get them out of the critical path, and replace them with automation whenever we can. This is a little graphic we like to put up: the utopian future of our robotic overlords taking control. No, I would not say those are in priority order — they're just things we keep in mind when we're thinking and talking about Factory.

So Factory 2.0 is not one thing — Ralph actually has a whole slide about what Factory 2.0 isn't, which some of you may have seen. It's not one system.
It's not going to be delivered all at once. Right now it's made up of a number of different projects: some brand new, some we've contributed to, and some we're really just reusing in new ways. This is the list of them. We'll be talking about each of them today, and feel free to ask questions as we go.

The module build service — creatively named — is a service for building modules. You may have heard a little bit about modules: Fedora 26 was the first release with a modular component, the Fedora 26 Modular Server. That was enabled by the module build service running in Fedora; it went into production for the Fedora 26 release. There's a lot of interesting stuff happening here that we don't have time to go through, but fortunately there's a talk tomorrow — no, Thursday — that covers the module build service in detail, and I encourage you to go see it; Ralph is excited to tell you all about it. So that's been deployed in Fedora and it's working. Obviously there are some changes that will be necessary for F27 and beyond, but it's primarily in operation now.

Arbitrary branching, or: what the *beep* happened to PkgDB? Some of you may have noticed there were some changes in the way dist-git is handled in Fedora. There used to be this thing called PkgDB, and it no longer exists. So why did that happen? Arbitrary branching is about moving from branches that are associated specifically with a Fedora release to branches that more closely follow upstream versioning. So why is that important?
Arbitrary branching and modularity are closely related. Modularity allows you to create separate streams of components, and arbitrary branching lets you associate those modules with a branch that is named after the upstream version, so you can share the same sources across multiple modules. It also enables multiple versions to exist in the same release. By breaking that tight dependency between a branch of source and a specific release, you can mix and match them in new and interesting ways — and that's really what modularity is building on top of.

This was deployed to Fedora in early August 2017. There are still some issues to be worked out — it was a little bit of a rocky rollout — but it's mostly rolled out at this point and it's working. Matt Prahl has a talk tomorrow that will cover all the gory details. We're very appreciative of people's understanding and patience while we get everything squared away in that area.

[Audience question: does that mean that when you build for Fedora 27, httpd will come from a branch named 2.4 rather than an f27 branch in dist-git?]

So the question was: does that mean that rather than having an f27 branch of httpd, you would have a 2.4 branch of httpd? And that's exactly right. It's up to the prerogative of the maintainer, but essentially, yes, you would create a branch that follows upstream — which for httpd would sensibly be 2.4 — and that branch of source could then be rebuilt in multiple modules. So you could reuse the same sources for the httpd module in F27, F28, F29, until you create a new httpd branch. [Audience follow-ups about how the maintainer and the compose respond, and which branch Fedora 27 Workstation would use.] There's a lot of conversation in the room.
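As a rough illustration of why that decoupling matters, here's a toy model — not the actual MBS data model, all names are made up — of one upstream-version branch of a package backing module builds across several releases:

```python
# Sketch (not real tooling): how one upstream-version branch of a package
# can back module builds for several Fedora releases at once.

def module_builds_using(module_builds, package, branch):
    """Return the names of module builds that reference a given dist-git branch."""
    return [
        m["name"] for m in module_builds
        if m["components"].get(package) == branch
    ]

# One "2.4" branch of httpd, shared by the httpd module in three releases;
# a new "2.6" branch only gets picked up where the maintainer chooses.
builds = [
    {"name": "httpd:2.4:f27", "components": {"httpd": "2.4"}},
    {"name": "httpd:2.4:f28", "components": {"httpd": "2.4"}},
    {"name": "httpd:2.4:f29", "components": {"httpd": "2.4"}},
    {"name": "httpd:2.6:f29", "components": {"httpd": "2.6"}},
]

print(module_builds_using(builds, "httpd", "2.4"))
```

The point of the sketch: the branch name carries the upstream version, not the release, so the same sources feed many module builds.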
I'll just repeat for the people online: there was a question about what F27 will look like. For F27 there will be both a traditional and a modularized Server, and a traditional Workstation. The modularized Server can be built from arbitrary branches — branches that follow upstream versions — while the Workstation, I'm assuming, Dennis, will be built from traditional f27 branches. So there's a transition period where you can do both: f27 branches will be created for everything, and then packages can choose to get on the modularity train, use arbitrary branches, and start modularizing their packages into larger components. Any other questions? Just for clarification, that's Matt over there, whipping all those arbitrary branches into shape.

ResultsDB. This is really the workhorse of a lot of what's happening in Factory 2.0 and Fedora. It's been around a long time — it's actually been deployed in Fedora for a few years — so you may be familiar with it. It abstracts the reporting of test results away from the systems where those tests are executed, and it provides a queryable interface. There are a lot of systems out there that are great at executing tests — creating different environments, running the tests in them — but most of them are not great at presenting test results to anything outside the system where they were executed. ResultsDB is the single queryable interface for test results, and that's a significant enabler for a lot of the work we're doing in the rest of the tool chain. The centralized nature of the test results is also key, as opposed to having to search different testing systems to find the results you care about.

It's currently being populated by a number of different services, including Taskotron, OpenQA, and Autocloud, and Factory is relying on the results being in ResultsDB to drive a lot of the automation that's coming later. We haven't had to do a lot of work on it ourselves, though we consulted on the effort to extend it to support Atomic CI. So it's the workhorse for a lot of what's happening behind the scenes. That's a tiny workhorse — ResultsDB is a much bigger workhorse.

WaiverDB. This is a new service that came out of some ideas around Factory. WaiverDB is what we use when your tests are unreliable. In a perfect world, every test would run exactly as designed: there would be no infrastructure problems, no race conditions; if a test failed, you would know you had a bug in your code, you would fix it, and you would resubmit. In practice that's not always how it works, right? We know there are infrastructure issues that can cause tests to fail unexpectedly, and there are poorly written tests that either rely on external infrastructure that can't be guaranteed to be there, or that have race conditions — all kinds of reasons why a test may fail incorrectly. We've actually seen, I think both in Fedora and internally, that a lot of the time people deal with this problem by just resubmitting their build. They have a unit test that runs during the build, it gets allocated to the wrong node, the test fails, and they just retry and retry until, by luck or magic, the test succeeds.
So that's a pretty crappy way to work. WaiverDB allows you to say: I know this test is failing incorrectly, and it's fine for now; I don't want to block on it, because I know it's an infrastructure problem or a bad test. So let's treat it as a success, not block the pipeline, and move on to the next step in the process.

That's currently deployed on Fedora's OpenShift instance, and it's in the process of being integrated. There are a couple of things we're working on in that area. One is UI integration with Bodhi, so that you can easily turn a test failure shown in the Bodhi UI into a waiver. There's also been talk of an auto-waiving service. This is something that actually exists in another tool we use internally: it's a way to say, with more detail, that if you see a failure with these parameters and this text in the log, then I know it's a bad test, so disregard it. Ideally you would not be blocking on that test to begin with — I'll talk in a minute about how we'd prefer to deal with test failures like that — but there's also been thought around having a service that would look for those kinds of conditions and automatically file a waiver. We were talking with Stef about doing a similar thing with WaiverDB, where it watches waivers over time and tries to learn the pattern, so it could waive based on the kinds of decisions we've made about what we believe.
Yeah, I like that idea — if anybody wants a master's-level research project in machine learning, I think we might have something for you.

[Audience question about the scope of a waiver.] So WaiverDB itself is a one-time waiver: your specific build, with a given NVR, is tested and has a failure that you know is in error because infrastructure was down, so you waive it. Auto-waiving would be the more continuous case: this test is always failing, so rather than having a human waive it every time, just waive it every time you see it. There's actually a lot of thinking around that. Leaving a test running that you know fails every time is probably just bad practice, so we would encourage people to either disable that test or do something else to fix it more permanently, rather than building an ever-growing, unbounded list of waivers. And even if we were going to enable something like that, I think we'd want to time-bound it, so you have to come back and confirm it's still failing for the same reason. So there's policy and thinking to be fleshed out there.

[Audience question about the UI.] I was about to get to that. For the UI interface, it's not just going to be Bodhi — it could be anything. Any system that allows you to review test results could potentially have a button that says "waive this". Initially it will be Bodhi, but there could be other services, and we'll actually talk a little bit later about some visualization stuff.
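To make the ResultsDB-plus-WaiverDB combination concrete, here's a minimal sketch; the field names are illustrative, not the services' real schemas:

```python
# Minimal sketch of the ResultsDB + WaiverDB interaction described above.
# A waiver marks one specific failed result as "treat this as passing".

def effective_outcome(result, waivers):
    """A FAILED result counts as PASSED if a matching waiver exists."""
    if result["outcome"] == "PASSED":
        return "PASSED"
    for w in waivers:
        if (w["subject"] == result["subject"]
                and w["testcase"] == result["testcase"]
                and w["waived"]):
            return "PASSED"
    return result["outcome"]

# A failure for one specific NVR, waived once because of an outage.
result = {"subject": "httpd-2.4.27-1.fc27", "testcase": "dist.rpmlint",
          "outcome": "FAILED"}
waivers = [{"subject": "httpd-2.4.27-1.fc27", "testcase": "dist.rpmlint",
            "waived": True,
            "comment": "infrastructure outage, not a real failure"}]

print(effective_outcome(result, waivers))
```

Note that the waiver only matches this one subject: a new build with a new NVR would have to fail — and be waived — again, which is exactly the one-time behavior described above.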
And that's another place where WaiverDB integration could be possible as well.

Greenwave — this is sort of what you were just asking about. Greenwave is another new Factory service. It's a service that allows you to define a policy saying which tests are required to exist, and to be successful, for a piece of content to move on to a new state — to move through a workflow. It correlates data from ResultsDB and WaiverDB, and that's actually what you're waiving: you see a failure in ResultsDB, you create a waiver in WaiverDB, and then Greenwave puts those together for a given piece of content and says, based on all the data I have from these two services, is the policy that's been defined by a human — by release engineering, or product management, or whoever — satisfied? If so, it returns a success result, and that indicates the content can move from one state to another. Every policy is associated with what's known as a decision context, which basically defines the state change: something can go from built to staging, staging to production, or pre-release to ready-for-release. It's really flexible in that you can define the decision contexts and build a custom workflow around them, but it really just deals with that one state change: can this go from state A to state B?

[Audience question: is it related only to Fedora 27 Server, or to all packages?] It is going to be gating for Fedora 27. I don't think it's going to be gating for all packages — I thought it was just a subset. [Audience discussion.] I'll just repeat: what was just said is that it's possible to define different policies for different releases.
So Fedora 27 would have one set of policies, and Fedora Atomic would have its own set of policies related to tests that are specific to Atomic. I'm not sure if it's going to be gating for all of F27 — I don't think we have policy defined for all packages in F27 — but it's possible to do it for any set of packages. That's something we're looking at: as we have more automated testing, we can start using it to do more automated gating and gating enforcement.

It's currently deployed in Fedora's OpenShift instance as well, and there are a few things on the roadmap. Per-package policy: right now policy is scoped per release — you could say per product — and there are use cases people have indicated for having a different policy per package within a release, so we'd like to investigate that. Messaging integration: right now it has a REST interface that services can hit to request a decision on a decision context for a given piece of content. We would prefer a published, event-based model: when new results or waivers come in, Greenwave would evaluate them against all the relevant decision contexts, and if any of them changed state, it would publish a message to the bus. That work is actually in progress, but we'll then need to integrate it with services that would consume those messages. And maybe time-based policy: there's a potential requirement for different policies at different points in the product life cycle. As you approach a release, you may want a more stringent policy to reduce the amount of change and stabilize the product. There's no concept of time-based policy yet, but it's a potentially interesting feature.
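To make the decision-context idea concrete, here's a minimal sketch of a Greenwave-style policy check. The data structures are toys: real Greenwave policies are YAML files, and the service pulls results and waivers from ResultsDB and WaiverDB over HTTP rather than taking them as arguments.

```python
# Sketch of a Greenwave-style decision: a policy (the required testcases
# for one decision context) is checked against results and waivers.

POLICY = {
    "decision_context": "bodhi_update_push_stable",   # the state change
    "required_tests": ["dist.rpmlint", "dist.depcheck"],
}

def decide(policy, subject, results, waivers):
    """Is every required test PASSED or waived for this subject?"""
    unsatisfied = []
    for test in policy["required_tests"]:
        outcome = results.get((subject, test), "MISSING")
        waived = (subject, test) in waivers
        if outcome != "PASSED" and not waived:
            unsatisfied.append(test)
    return {"policies_satisfied": not unsatisfied,
            "unsatisfied_requirements": unsatisfied}

results = {("httpd-2.4.27-1.fc27", "dist.rpmlint"): "PASSED",
           ("httpd-2.4.27-1.fc27", "dist.depcheck"): "FAILED"}
waivers = {("httpd-2.4.27-1.fc27", "dist.depcheck")}   # the failure is waived

print(decide(POLICY, "httpd-2.4.27-1.fc27", results, waivers))
```

With the waiver in place the policy is satisfied; remove it, and `dist.depcheck` shows up as an unsatisfied requirement and the content stays put.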
OK, break time — everybody still with me? All right, a little pop quiz; we'll see if people know their infrastructure. How many tasks has Fedora Koji executed since it was brought online? Anybody? [Audience: sixty billion!] No, that's... optimistic. [Audience: twenty million?] Twenty million is very close. Well, clearly people are cheating. I could have been a little more specific: as of 9:49 this morning, there were 21,528,646 — for primary Koji. Now people are going to get me for not being specific enough. [Audience: your question was wrong!] As is usually the case.

So that's a lot of tasks. A lot of tasks happen automatically in Koji — repo regenerations and things like that — but a lot of those tasks were performed by people; a lot of them are builds, and a lot of the things the tasks interact with, like tags and targets, were created by people. We want to reduce that number. At some point, maybe every task in Koji is kicked off by automation, so we can reduce the amount that people are interacting with, and blocking on, the infrastructure directly — make things happen automatically, and people get notified when the results are ready.

All right, next question: when was Fedora Core 1 released? [Audience: November 2003.] I expect you to know this right off the top of your head — November, that's really good. If I had any swag to give you I would, but I don't, and it would probably be unfair anyway. So yeah, that's a long time ago — almost 14 years. A lot has changed in that time, but a lot is still the same in the way we build, produce, and distribute software. So you could say it's maybe time for us to get a little more creative, a little more innovative with how the distribution is put together — and I think we're seeing that with modularity — and also with how we build, test, and distribute it. That's where some of these ideas around Factory come from. Or, to take a phrase from Greg DeKoenigsberg's playbook: this is the time to blow shit up.

And how many RPMs are in the Fedora 26 GA Everything x86_64 repo? [Audience guesses: 80,000?] A lot of good answers across the spectrum: 53,912. Based on the data from Matt's talk this morning, that's about 215 per active Fedora contributor. It's a lot of content. [Audience: about 5,000 of those are fonts, though.] It would be nice if I had a graph showing how that's trended over time, but the reality is that's a lot of content — if it's not already, it's too much content for humans to be managing by hand. We really need automation to enable us to continue to grow the amount of software we're producing, to increase its quality, and to keep all of us from having no life and going crazy. This is part of the drive for increased automation: taking humans out of the loop as much as possible, letting things happen, and only involving people when it's absolutely necessary for a human to look at something.

All right, that concludes the pop quiz — everybody still with me? All right, let's talk about Bodhi.

Bodhi — everybody knows Bodhi — is the Fedora update mechanism. It's been around a long time, and it's grown and developed quite a bit in that time. Most recently it's been enhanced to display the result of a Greenwave decision, and gating on Greenwave decisions was deployed as of last Friday. Congratulations to everybody who was involved in that work — it was a big push right at the end there. Now, Bodhi has been able to display data from ResultsDB for a while, but that was just a UI-level thing.
It wasn't really integrated into any of the workflow that Bodhi was driving. Gating is Bodhi actually making decisions based on feedback from Greenwave. That's really big, and it enables a whole bunch of things: now we can run tests automatically, and eventually we get to the point where we're pushing updates automatically. We no longer have to wait two weeks for karma and have humans clicking the button to say "OK, please push this now" — we have the ability to let the automation take control. That makes a lot of robots very happy.

Bodhi is no longer just about RPMs, either: it now supports module updates as well, and it's extensible to other content types — Randy Barlow mentioned he was working on container integration. So there's a lot of work happening in Bodhi; I encourage you to attend the Bodhi hack session later this afternoon and close some of those 300 tickets. Bodhi is pretty central to a lot of the work around automation in Fedora, because it is the Fedora release gatekeeper.

[Audience question.] So the question was: since internally we have a different service that plays the role Bodhi plays in Fedora as the update mechanism, is there any plan to merge them? The most direct answer is no, there's no plan to directly use one or the other. The way we're approaching this in Factory is to extract and centralize the pieces where it's reasonable — that's really where WaiverDB and Greenwave come in, and maybe even ResultsDB. Internally we have a service that really acted as the orchestrator for the whole pipeline: it had a notion of waiving certain test results, and it stored its own test results. So we're in the process of decoupling that and moving those pieces into separate services.
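The gating step that decoupling produces can be sketched like this. Everything here is illustrative: `fake_greenwave` stands in for the HTTP call to Greenwave's decision endpoint, and the payload and field names are not Bodhi's actual API.

```python
# Sketch: before pushing an update, a Bodhi-like service asks a
# Greenwave-like service for a decision instead of embedding the
# policy logic itself.

def push_update(update, greenwave_decision):
    """Gate the push on an externally computed policy decision."""
    decision = greenwave_decision({
        "decision_context": "bodhi_update_push_stable",
        "subject": update["nvr"],
    })
    if not decision["policies_satisfied"]:
        return ("blocked", decision["unsatisfied_requirements"])
    return ("pushed", [])

# A fake Greenwave: one update passes policy, the other does not.
def fake_greenwave(request):
    passing = request["subject"] == "httpd-2.4.27-1.fc27"
    return {"policies_satisfied": passing,
            "unsatisfied_requirements": [] if passing else ["dist.depcheck"]}

print(push_update({"nvr": "httpd-2.4.27-1.fc27"}, fake_greenwave))
print(push_update({"nvr": "php-7.1.8-1.fc27"}, fake_greenwave))
```

The design point is that the pushing service only asks "satisfied or not?" — the policy itself lives in one shared place, so the internal orchestrator and Bodhi don't each reimplement it.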
So that's why WaiverDB exists. And then, naively, if we just had ResultsDB and WaiverDB, every service that wanted to make decisions based on automated test results would need its own logic for "is this policy satisfied?" or "do I have the right set of tests to move on to the next state?". That's where Greenwave came in: rather than letting both our internal service and Bodhi define that logic for themselves, we've extracted it into another service. That's a model we'd like to follow in other places where we have different services but enough overlap that it makes sense to consolidate the logic.

So we've got another new Factory service, called Freshmaker. Freshmaker triggers rebuilds of content based on events from other services in the pipeline. It's another piece of automation: rather than requiring humans to see that an RPM was built and rebuild a module, or see a module rebuild and rebuild a container, we define triggers in Freshmaker to listen for those events and take the appropriate action. Short-term goals: we want to rebuild a module when either the modulemd or the spec file is updated — that is, when a commit is pushed to dist-git. Obviously there's no point in committing a change to a spec file or a modulemd unless the thing then gets rebuilt, so we always want that to happen. We also want to rebuild containers when RPMs ship to stable: we want to keep containers fresh, there's an expectation that containers have relatively up-to-date content, and we want that to happen automatically. Jan Kaluža has a talk about this tomorrow, and I encourage you to go see it. There's actually a lot more complexity to what Freshmaker is doing than just triggering rebuilds on every event.
There's policy decisions to be made about how often we're shipping content When what events trigger what rebuilds There are different, you know for Pug fixes versus security issues. There are different things that would need to be rebuilt There also there's also policy. I'll talk about this in a little bit around how often that built content is pushed out to to bearers and to users This is also how we save Adam Miller sanity. He won't have to be building rebuilding keeps of dependent containers manually So odcs odcs is a service that came out of a known problem With or something that that we realized was a problem when implementing fresh maker, which was How do you? Well, let me step back a minute The desired Experience is that when you ship content new content, let's say for for a bug fix or security fix Every delivery channel gets the same content at the same time. So You know, if you're consuming our PMs, you have them at the same time that you have containers at the same time You have OS trees same time you have Q-cows right doesn't matter what the delivery mechanism is the content is always up to date That leads to a question of how do you get pre-release content into your Into these aggregate aggregate content desire content You know in a manageable way in a way that's that's auditable that doesn't involve manual steps That doesn't involve humans Doesn't involve typing random things into text files or into command lines So out of to solve that problem odcs was born odcs is the on-demand compose service And it does exactly that it creates composes which in the general case or at the moment is basically repose young repose of signed pre-release signed RPMs of Pre-release content and that is then used to Feed into the build process for all those pieces of aggregate content. 
So: containers, OSTrees, qcow images. ODCS can also generate repos of module content for module testing. Right now, I think Taskotron tests basically have to download the entire set of module content, build a repo themselves, and then run the test against it. That's fine, it's just not particularly efficient, and again it's error-prone — a lot of boilerplate that has to be reproduced for every test. So we can centralize that logic, make a service that handles it, and reuse it. Essentially, ODCS is the thing that takes a bunch of ingredients and turns them into something you want to consume and are capable of consuming.

[Audience question: when you have two modules where one depends on the other — say httpd and a LAMP module — and you change the first one for a CVE or whatever, will it automatically regenerate the second one?] So the question was: if module B depends on module A and you have to rebuild more than one module for a security fix — a chain rebuild — will it build repos for all of those modules? I think the answer is yes. The dependencies will be tracked, and for whatever modules have been rebuilt to satisfy that content update, the repos will be built and made available to the rebuild process. So yes, that's a known use case. To be clear, ODCS isn't rebuilding anything; it's just building the yum repos of those modules, and then another system consumes the content from those repos.

The idea around this has been to potentially promote these continuous composes to be releasable, which is something that could work in a CI model. We don't have strong plans around it yet, but it's potentially worth investigating — we can talk about it.

And then visualization. This is something I would love to have some help from the Fedora community on. Lots of things are happening automatically now; we have lots of different services listening to events and triggering other services. So how do we know what's going on, and why things are happening? We need some way to keep track of where in the process any particular piece of content is. The expectation is that you make a change to a spec file or a modulemd, and at the end you'd see a whole list of artifacts and how far along the process each one is: how many modules have been rebuilt, how many containers have been rebuilt, how close am I to being able to release this change I just made? We have pieces of that today in separate systems — Freshmaker can tell you why it triggered a particular rebuild, and MBS has reporting capability in its UI — but we don't have a holistic way of saying "what are all the changes that are going to happen, and how many of them have happened already?". It would be great if we had that. Anybody have any ideas?

[Audience: I have a question — if I want to contribute, where should I go?]
We're going to get to that at the end. But the reality is that this would likely be a new project that we don't actually have scoped at the moment, and it's probably not something we'll be able to tackle before F28 — but it's a known gap in the system. We want to be able to track the progress of a single component, track the progress of a module build, and track the progress of a chain of container rebuilds. I think it would also be nice to be able to track how close we are to shipping F28 or F29, but that's maybe a separate problem. We think all the data is there — and if not, it can be added — but aggregating it and making it consumable by people would be an interesting and significant project to take on. [Audience comment.] Yep — if you pick it up, just come talk to us as you're working on it, because we'd love to use it internally too.

All right, plans for the next six months. We're still playing whack-a-mole with arbitrary branching issues, but I think we're getting close to having that squared away and one hundred percent functional. We're completing the deployment of services into Fedora: the Freshmaker and ODCS deployments are pending, just waiting on some VM provisioning, and those are in progress. Policies — we touched on this a little earlier, but there are questions around when to rebuild modules and containers; how much to rebuild, meaning how far down the dependency chain you need to go to be sure your content is secure and fresh; what level of testing is required to release something; and how often to push releases to the mirrors. We have a lot of new tools, which means there are a lot of questions around how to use them, and a lot of different usage models. We're making an intentionally flexible and extensible architecture here, so we have a lot of ways to improve things — but if we don't get the policies and the approach right, we also have ways to shoot ourselves in the foot.
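The "how far down the chain do we rebuild?" question is essentially a transitive closure over reverse dependencies. Here's a minimal sketch, with an invented dependency graph — the real decision would also be weighted by the policies just mentioned (bug fix versus security fix, and so on):

```python
# Sketch: given a reverse-dependency graph (who consumes what), compute
# everything transitively affected by an updated component.
from collections import deque

def rebuild_closure(reverse_deps, updated):
    """Breadth-first walk of reverse dependencies from the updated item."""
    affected, queue = set(), deque([updated])
    while queue:
        item = queue.popleft()
        for consumer in reverse_deps.get(item, []):
            if consumer not in affected:
                affected.add(consumer)
                queue.append(consumer)
    return sorted(affected)

# openssl feeds two modules; the httpd module feeds two containers.
graph = {
    "openssl": ["httpd-module", "nginx-module"],
    "httpd-module": ["httpd-container", "wordpress-container"],
}

print(rebuild_closure(graph, "openssl"))
```

A security fix in `openssl` pulls in both modules and, one hop further, both containers — which is why the "how deep do we go" policy question matters: the closure can get large fast.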
So we expect there to be a lot of discussion around those things, and we'd be happy for people to participate.
There's going to be continued modularization of the distribution; the goal is to modularize everything. That's in progress, and there are some modularity priorities that are queued up for F27. Supporting the compose process: that would be any changes required to Pungi to support a modular compose. That is working at the moment for Server, but as we're dealing with more modules and more content, we need to keep an eye out for any issues there. Deploying the modularized masher and Bodhi: that's actually already in progress, and a pull request has been submitted and is being reviewed.
Efficiency improvements in the MBS: just making sure the MBS is building content where it's necessary and actually required, but not overbuilding. We don't want to be shipping content to customers, or to internet users, that's been rebuilt for no reason other than the way our tool chain happens to work. So we need to figure out policy around that: how to verify when content needs to be rebuilt, make sure it is, and also know when it doesn't.
What does this all mean for Fedora?
So, increased automation and new capabilities around building. Soon you may no longer have to kick off a package build by hand: the MBS and Freshmaker are going to be working together to make sure any changes in content are rebuilt in a timely manner. Maintainers are going to be free to think about the actual changes to the content that are required, and the infrastructure can go off and make sure it all ends up in the right place. I think that's something that has the potential to really change the maintainer experience: get rid of the repetitive work and just let people innovate in their packages.
Testing: it will be easier to add a test, tests will be run automatically, and the results will be visible and actionable by humans where necessary. And then, as we talked about before, automation: machines will be able to make decisions based on the results of those tests.
And then releasing: more automation, fewer manual steps and roadblocks. One of the goals is to make release engineering a lot happier with their experience, so remove a lot of their manual steps, automate where possible, and of course work toward that utopian vision of continuous delivery.
Maintainers can start thinking about content in terms of modules and module streams rather than individual packages. The idea of a module is less about specifically which packages it's built with and more about use cases and features. You can tailor your modules to those specific use cases and then worry less about the specific implementation of every piece. And of course, check out the talks on Thursday about how to create your own modules.
Get involved. We can't do this all alone.
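The Freshmaker-style automatic rebuilds described above boil down to walking a dependency graph: when a component changes, find everything that embeds it and rebuild parents before children. This is a minimal sketch of that idea, not Freshmaker's actual implementation; the image names and the graph are invented for illustration.

```python
# Minimal sketch of "rebuild everything downstream of a change",
# parent-first. Names and graph are hypothetical.

from collections import deque

# component/image -> things built directly on top of it
DEPENDENTS = {
    "openssl": ["base-image"],
    "base-image": ["httpd-image", "postgres-image"],
    "httpd-image": ["wiki-image"],
    "postgres-image": [],
    "wiki-image": [],
}

def rebuild_order(changed):
    """Return everything downstream of `changed`, breadth-first,
    so each layer is rebuilt before the layers that depend on it."""
    order, seen = [], {changed}
    queue = deque([changed])
    while queue:
        node = queue.popleft()
        for child in DEPENDENTS.get(node, []):
            if child not in seen:
                seen.add(child)
                order.append(child)
                queue.append(child)
    return order

print(rebuild_order("openssl"))
# ['base-image', 'httpd-image', 'postgres-image', 'wiki-image']
```

The policy questions from earlier (how far down the chain to go, what testing gates each step) would sit on top of exactly this kind of traversal.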
So, there are some links there to the projects I talked about today. People also hang out in #fedora-releng on Freenode; come talk to me and Ralph if you have questions about anything we've talked about today, or if you want to get involved and figure out how to participate.
Any questions?
Yeah, the question was whether there is any plan to integrate the testing that happens in the %check section of RPMs with any of this infrastructure, so maybe sending the results to ResultsDB, or being able to waive failures in the %check section of an RPM. So, as was rightly pointed out, if your %check section fails, your build fails. But it would be possible to change some things at the RPM level, or do some clever spec file hackery, to maybe not fail the build but promote the results out to another system. We haven't talked about that in detail, but that's something that could be investigated. It's an interesting idea, because there's a lot of testing that happens in %check sections.
Yeah, the comment was that there's a lot of testing that happens in the %check section at build time. It would be nice not to lose those tests, and to have the ability to control them at a more granular level than just build succeeded or build failed. And I agree, that would be great, but we would need to do some work for that.
Anybody else? Well, that's all I got. So now you have the candy swap. Thank you.
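For reference, the "clever spec file hackery" floated in the Q&A could look something like this hypothetical spec fragment: let the test suite fail soft and save its log as a build artifact, so some other (also hypothetical) system could harvest and gate on the results. This is a sketch of the idea only, not an established Fedora packaging convention.

```spec
%check
# Hypothetical: "|| :" turns a test-suite failure into a soft failure,
# so the build still succeeds; the saved log could then be promoted to
# an external results store by other tooling instead of failing the build.
make test > test-results.log 2>&1 || :
```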