Can you hear me in the microphone? If I speak into it — oh, there it is. Testing, testing. Alright everybody, let's get started. I have to speak directly into it, so maybe I'll just hold it. That works.

Hi everybody. Hi. I'm Ralph, with no beard. My name is Ralph Bean. I work in PnT DevOps now, on the productization pipeline used for all Red Hat products. Before that I spent the last four years on the Fedora Engineering team, working on the toolchain there — all the tools that go into making Fedora something that can actually be shipped and used by users.

The talk today is about Factory 2.0, which is... well, we'll get into exactly what it is. I want to start with some backstory, some context. People in the room: raise your hands if you've ever heard of the eternal September, or the September that never ended. A minority of hands, I think. So this is educational backstory — hacker lore you get to learn today. Back in the day, when the internet was first becoming a thing, there was a tradition every September. There was a very small number of users on the internet with their own culture, their own way of interacting with each other, and every September a new influx of university students would arrive at university and get access to the internet for the first time. There was a process of about a month, in September, where the existing internet users would teach the new students how to behave on Usenet, and after that they were acculturated and things were fine. Well, in 1993, AOL, the internet service provider, first gave its users access to Usenet, and the influx of new people coming to Usenet was so great that it was never again possible for the old guard to keep up with educating newcomers on how to behave on the internet. So it's called, retroactively, the September that never ended, or the eternal September. That's a thing that happened, and now you know.

What that means for us: I think it's useful to draw an analogy between that point in time and what's happening now in open source ecosystems, which have changed dramatically from when Red Hat first started as a company — from when Linux distributions were the primary place to go to get your software in front of users, and the primary platform people integrated on top of. This is a graph showing the number of packages in different language ecosystems: RubyGems, PyPI, and npm (Node.js). If you can't see the absolute value, the number of npm packages here, about halfway through 2016, is crossing the 300,000 mark. We have 17,000 packages in Fedora right now, which is more than we actually want — more than we can handle or maintain. If you think we're somehow going to keep up with 300,000 packages just for npm, that's a losing proposition. The point of the analogy with the eternal September is that things like the rise of GitHub and other practices in open source communities mean the times have changed, and the whole model of how we've built Linux distributions is arranged around a time that no longer corresponds with how things are now. It's a different era.
That graph, and the things behind it, have had consequent changes in the way application developers build their apps. They used to look to the Linux distribution as a common platform, something they could build their application on and pin their dependencies against. Today, people do things like this: pip freeze is a command for Python that takes the specific versions of all the dependencies in your Python virtual environment, freezes them, and writes them out. Anyone who wants to run your app then has to use exactly what you had installed at that particular time. That works great for apps — it isolates them from all the changing things in the ecosystem around them — but it works terribly for a Linux distribution, where we provide a certain set of packages that almost certainly doesn't correspond with whatever the developer had installed at that particular time. In my opinion, the whole modularity effort is our attempt to address that situation and make the Linux distribution compatible with the new way that apps are developed and cooperated around. Our entire toolchain is built around a way of building a Linux distribution that isn't compatible with that, so we have to think about the tools too — which is where we get to Factory 2.0.

Factory 2.0, if you've seen other talks, is not a lot of things. It's not what you might think it is. There's no single Git repo where you can go and check out the Factory 2.0 code; it's not a project in that sense. It's more of an initiative: looking at our toolchain as a whole and trying to make changes to it across the board. You can read our design documents for more on that. Today I'm not going to try to cover all of it like I have in the past, because that stays too high-level. Instead I'm going to focus on just three things: measurements, continuous integration, and the modularity effort specifically.

My diagrams are going to be weird at this aspect ratio — we'll roll with it. This is a depiction of the existing pipeline. There really is nothing to the right of the diagram; it just ends with the release checklist. So this looks nice and simple. If you've ever developed for Fedora, RHEL, or CentOS, you know that things get built in Koji and Brew, go through a compose process, and then a release checklist to get out the door. Very, very simple, right? If you've ever actually used it, you know there's a lot more going on around that, and a lot of it is more painful than just moving right through. For now, let's pretend things are like this, because we're going to elaborate on this basic framework, show where the problems are, and talk about changes to it.

To start with measurements: this is a screenshot of a dashboard we've set up internally. We're not doing the same work for Fedora, just for the internal pipeline, because it's not easy to duplicate across the two. One of our mandates is to measure the problems with the current pipeline, to determine how bad things are, so that when we make a change we can be sure we're either improving the problem — or find out that our change had no effect. I'll explain these two graphs. The first is titled "RPM diff lag." There's a tool called rpmdiff that we want to run in response to every new RPM built in the internal pipeline.
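To make the measurement idea concrete, here is a minimal sketch of how a lag metric like this can be computed once events from two systems have been correlated — the event data below is made up, standing in for the build and rpmdiff messages pulled off the message bus:

```python
from datetime import datetime
from statistics import mean

# Hypothetical, simplified events; in practice these come from the message
# bus (one topic for completed builds, one for rpmdiff runs) and are
# correlated by the build NVR.
builds = {
    "foo-1.0-1.el7": datetime(2017, 1, 20, 9, 0),
    "bar-2.3-4.el7": datetime(2017, 1, 20, 11, 30),
}
rpmdiff_runs = {
    "foo-1.0-1.el7": datetime(2017, 1, 24, 13, 0),
    "bar-2.3-4.el7": datetime(2017, 1, 25, 2, 30),
}

def average_lag_hours(starts, ends):
    """Average hours between a build finishing and rpmdiff running on it."""
    lags = [
        (ends[nvr] - started).total_seconds() / 3600.0
        for nvr, started in starts.items()
        if nvr in ends  # ignore builds rpmdiff hasn't reached yet
    ]
    return mean(lags) if lags else None

print(average_lag_hours(builds, rpmdiff_runs))  # 105.5 hours for this sample
```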
On average — if you pick this day — there were 100 hours of lag between when something was built and when rpmdiff was run on it. Having rpmdiff results is actually a really useful thing, so that should be a small number: zero hours, or one hour at most, on average. But we have 100 hours today. The second graph is titled "RPM signing lag." When things enter a certain state in our release process, we want them signed with a GPG key signifying that they're gold — good, and coming from Red Hat. That's currently a manual process, and in this graph things are much, much worse: on this date, the average was 800 hours between when things were allowed to be signed and when they actually got signed. That's a huge amount of wasted time just waiting for something to happen, and that's unacceptable. The point is that both of these are things we intend to solve, either by automation or by shuffling things around in the pipeline so that things can just happen when they need to happen. We'd like both of these graphs to go down to zero, but the point is that we're gathering metrics ahead of time so we can see what the problem is.

That whole measurements project is much more complicated than we originally thought it would be. Just gathering metrics seems simple, but anything that's a useful metric, that gives us real insight, involves collecting information from multiple systems in the pipeline and correlating it. As an example of one of our more complicated ones: we have a mandate to make building content into containers as easy, as painless, and as quick as possible. A metric for that: any time you make a change — say, in Fedora, to a spec file — it has to get built as an RPM, and once it's built as an RPM it can be included in a container. If you want to figure out how fast that is, measure the difference in time between when a container gets built and the last commit to any spec file included in it. It's hard to get that information in the first place. It's possible — we have code to do it now — but it gets really tricky. What about an RPM that hasn't been updated in three years because it's just super stable and not going anywhere? Then the time between that commit and the latest container build is three years, and it throws our average off. Thinking about what a metric means, and how to produce it in a way that's actually useful, is just harder than we thought. I'll leave it at that: many more metrics to come on that front.

To change topics: continuous integration. This is a depiction of how continuous integration looks today, both in Fedora and in the internal pipelines. At the bottom is our friend from before, the existing build-to-release pipeline. Basically, all we have is that each of those systems announces activity — new content that's been changed, built, or composed — to a message bus, and then I have this cloud up here for "CI" infrastructure. CI is in quotes for a reason. Can anyone tell me what's wrong with this diagram? Pingou. "The CI is after the change." That's one way in which it's wrong. Any others? That's also true. Another way to say it is that there's no gating: the CI doesn't actually impose any restrictions back on the pipeline.
Yeah — the arrow from the CI doesn't go down far enough. Yes, yes. That also... sure, that is a problem, but I don't know if the diagram metaphor... yes, yes, exactly. Moving on. It sure does — it's a beefy miracle.

So the first step on our road to solving this problem is to introduce this service, which we're actually co-opting from Fedora infrastructure: ResultsDB. It's how you start to get — I don't think it was Adam who said it, but — that arrow to go down far enough. Notice that the CI infrastructure is drawn as a big cloudy shape, and that's for a reason: what we have been calling our CI infrastructure internally is a whole set of Jenkins masters and Jenkins slaves available for various teams to run whatever kinds of tests they want in them. That flexibility is good, in that it lets test teams move at their own pace, make their own decisions, and do their own things. But it's bad from a build-and-release point of view — from the point of view of my group in DevOps — because we have no way to consistently query the results of all that testing to make decisions in the build-to-release pipeline: decisions about whether a branch should be accepted, decisions about whether something should be gated. So the first step is collecting those results in a homogeneous fashion, behind a homogeneous API, so we can make decisions.

Some more detail on exactly how we're getting that data, because it's not quite so simple — it's this big fuzzy cloud of many, many Jenkins masters. This shows where we're at right now. There's a project we're calling the CI metrics data feed, coming out of Platform engineering. That feed was originally established to go through the message bus and populate an ELK instance, for reporting metrics on what kinds of tests are being run and how frequently — not to make automation decisions. But we're working with that group and collaborating to — "co-opting" is maybe too negative a word — take that same data feed and use it to populate ResultsDB, through a microservice we've put in front of it called ResultsDB Updater. That's done and in production right now. The better plan coming down the road — we have agreement from the groups, it's just a matter of getting all the pieces lined up — involves a separate initiative called ShipShift, which is about taking all of the artifacts from these Jenkins instances and making sure they get archived to cold storage so they can be kept forever and looked at. They want to publish information to the message bus as well, to have workers pull those artifacts out. We want to listen to that feed and use it for the same purpose: storing up this metadata in ResultsDB so we can make automation decisions based on the data coming out the bottom. The key difference is that the first feed covers only one group inside Red Hat — Platform, on the left — while the other will encompass all the various business units, all the different products at Red Hat, so we'll have a more comprehensive solution at that point.
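As a sketch of what ResultsDB Updater does: it translates a test-run message from the bus into a ResultsDB result. The message fields below are hypothetical, and the ResultsDB field names (testcase, outcome, item) are approximate rather than a definitive rendering of its API:

```python
import requests

RESULTSDB_URL = "https://resultsdb.example.com/api/v2.0/results"  # placeholder URL

def publish_result(ci_message):
    """Translate one CI test-run message from the bus into a ResultsDB result.

    The payload fields below follow ResultsDB's v2 REST API as I understand
    it; treat both the field names and the message keys as approximations.
    """
    payload = {
        "testcase": ci_message["test_name"],        # e.g. "platform.ci.tier1"
        "outcome": "PASSED" if ci_message["passed"] else "FAILED",
        "ref_url": ci_message.get("jenkins_build_url"),
        "data": {
            "item": ci_message["nvr"],              # the artifact under test
            "type": "koji_build",
        },
    }
    response = requests.post(RESULTSDB_URL, json=payload, timeout=30)
    response.raise_for_status()
    return response.json()
```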
So here, expanding the picture a little more, this is putting ResultsDB to use. Once we have it populated with information, there are multiple points in the pipeline where we want to query that data to decide whether to move something forward — either automatically, or by gating it and not allowing it forward. Start at the far end, with the release checklist: the Bodhi updates process in Fedora, or the Errata Tool internally. Today, the way this works internally is that when the CI infrastructure completes its tests, engineers go and look at the results visually, with their eyes, to understand whether something worked or not, and then they go to the release checklist tool and manually check off a box saying "QE tests passed." We would like this tool to instead automatically display the information from ResultsDB about whether tests passed or failed, and then, depending on rules defined on a per-product basis — I'll get to how we define those rules in a minute — either automatically progress the advisory, or simply allow it to move forward on a manual click.

Here, we've determined that we need an automatic rebuild service, especially with the advent of containers. We're going to have many, many more combinations of content, and a process where making a patch turns into something like a three-day manual effort to rebuild the world isn't acceptable anymore, so we need automation to drive that. That sort of service needs a way to consult the results of QE, by looking at ResultsDB, to know whether to progress to subsequent tiers or not. And lastly — this is what Pingou was commenting on earlier — dist-git itself. The real dream for CI is that any change proposed in dist-git, before it even gets merged, is checked against QE feedback, against the results from CI runs. We want to gate right here, before any problems get into the pipeline at all. By using ResultsDB like this, we can use the same CI runs, the same set of test cases, to drive the decisions made at each of these different places. In the best case, all the problems get caught here and nothing much happens in the rest of the pipeline, but reusing the results downstream is important, I think, from a redundancy and resilience perspective.

Now we have a complication. In Fedora we don't have an answer to this yet, but we do internally. Baked into the Errata Tool, and into some of the tools that form a constellation around it like rpmdiff and others, there is functionality for waivers: if a result coming back from automated tests is wrong, a human can say "this doesn't really matter, just waive it, it's fine, it can move through." In Fedora we have integration between our updates tool and our test execution, but if the tests are wrong and something is being blocked that should progress anyway, the solution today is to change the requirements on your update so it can proceed — which doesn't make sense, right? Hence something abstract like this. I should explain what this is: WaiverDB is a new service — we have a requirements document in progress, but we still need to actually write and implement it — that will house an abstracted version of the waiver logic from our existing tools, which we can then reuse throughout the pipeline.
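Since WaiverDB doesn't exist yet, the shape below is purely hypothetical — just an illustration of the kind of record it might hold: a human waiving one specific result in one decision context, plus a standing auto-waiver rule:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Waiver:
    """Hypothetical shape of a WaiverDB entry; the real schema is still
    being worked out in the requirements document."""
    result_id: Optional[int]    # a specific ResultsDB result being waived...
    testcase: Optional[str]     # ...or a testcase pattern, for an auto-waiver
    decision_context: str       # e.g. "dist_git_merge" vs. "release_checklist"
    product_version: str        # scope the waiver to one product or module
    waived_by: str              # human input only; ResultsDB stays immutable
    comment: str

# Waive one flaky result, but only for the dist-git merge decision:
one_off = Waiver(result_id=12345, testcase=None,
                 decision_context="dist_git_merge",
                 product_version="fedora-26", waived_by="someuser",
                 comment="Known infrastructure flake; see the ticket.")

# A standing auto-waiver: this testcase is perpetually broken for this product.
auto = Waiver(result_id=None, testcase="rpmdiff.some-analysis",
              decision_context="release_checklist",
              product_version="rhel-7", waived_by="qe-team",
              comment="Not applicable to this product; waive automatically.")
```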
To be clear about the split: the results that go into ResultsDB are immutable and come from automated test runs. We don't want humans modifying anything in ResultsDB, so it stays an accurate reflection of what happened in test automation. WaiverDB, however, would be human input only. You could waive an individual result in a particular decision context — waive it for a dist-git change but not for the release checklist — and furthermore specify auto-waivers, for when a certain kind of test case is just perpetually broken in some cases but not others. That's complicated logic that we don't want to duplicate in all our other tools, so abstracting it out into its own service makes sense.

In the middle here I have a note that tools cross-reference entries in these services to make decisions, and you can see the duplicated lines going from every service to both of them. To me that's a sign that we need to introduce yet another service. We're still working on a name for it — for now we're just calling it the policy engine — but it would be the tool that understands exactly how to cross-reference entries from those two services and decide what should be done. Each of these services is going to ask the policy engine a different kind of question. Given a proposed patch in dist-git, we want to know: is this good to merge or not? In the policy engine we would define, ahead of time, a policy for this product group or that subsystem — these are the conditions that need to be met for something to be automatically mergeable — and if they're met, we proceed. It's a different set of conditions that may need to be met for the automatic rebuild service, and a different set again for something to be good to release, so abstracting that into the policy engine is the current plan.

One last slide on CI. Again, to Pingou's point about blocking branches before they actually get merged: the key will be to stand up a pull request interface on top of dist-git, which I think will take different forms in the two environments we're looking at. In Fedora it will almost certainly be Pagure, just because people in the Fedora community already know it and love it. I have a hunch that deploying that internally won't be quite as simple, or quite as palatable to people — there's already familiarity internally with both GitLab and Gerrit — so it will be a question of deciding which of those two interfaces is preferred and standing it up in much the same way.
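To illustrate the cross-referencing the policy engine would do, here is a hypothetical sketch — the per-context policies, testcase names, and lookups are made up; a real service would query ResultsDB and WaiverDB over HTTP rather than take dicts:

```python
# Hypothetical per-context policies: which testcases must pass before this
# kind of decision can proceed.  In the real service these would be defined
# per product group or subsystem, not hard-coded.
POLICIES = {
    "dist_git_merge":    ["tier0.unit-tests", "tier1.functional"],
    "automatic_rebuild": ["tier1.functional"],
    "release_checklist": ["tier1.functional", "tier2.integration", "rpmdiff"],
}

def decide(item, decision_context, results, waivers):
    """Answer one question: is `item` good to proceed in this context?

    `results` maps testcase name -> outcome, as if queried from ResultsDB;
    `waivers` is a set of (testcase, decision_context) pairs, as if from
    WaiverDB.  Both are stand-ins for real queries.
    """
    unsatisfied = []
    for testcase in POLICIES[decision_context]:
        passed = results.get(testcase) == "PASSED"
        waived = (testcase, decision_context) in waivers
        if not (passed or waived):
            unsatisfied.append(testcase)
    return {"allowed": not unsatisfied, "unsatisfied_requirements": unsatisfied}

# A proposed dist-git change with one failing-but-waived test is allowed through:
print(decide(
    "python-requests-2.12.4-1.fc26",
    "dist_git_merge",
    results={"tier0.unit-tests": "PASSED", "tier1.functional": "FAILED"},
    waivers={("tier1.functional", "dist_git_merge")},
))
```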
One thing that's interesting about our infrastructure is that we have a lot of implicitly defined workflows spanning systems. Currently, what we've been calling our CI infrastructure — Taskotron in Fedora, and Jenkins internally — knows how to implicitly respond to new artifacts becoming available for testing: when Koji finishes building something, it announces it on the message bus, and CI picks that up and does its thing. CI systems that work quite well in other environments, for other organizations, tend instead to be explicitly defined. If you look at the work Ari was demoing earlier with Jenkins pipelines, or at upstream OpenStack where they're using Zuul, in both cases the circuit is defined in some sort of file: first there's the build step, then the test step, then the decision step — and in Jenkins land with Ansible, and in Zuul, it's much the same. But we already have a wealth of tools in both environments with this implicit behavior, so we're going to have to think creatively about how we write the trigger in the pull request interface: how to kick off the first step in the process and then wait for the appropriate conditions to be met, with some sort of timeout, before proceeding. That's just to point out that I don't think we'll be able to copy any pre-existing solution and graft it onto the infrastructure we have — and replacing it wholesale is too costly, in my opinion.

Okay, new topic: building modules. This is for Fedora 26 and onward. My team has been working on it, and we're also extending effort toward the modularity project in Fedora. This is a rough depiction of how the Module Build Service — a new tool we've written with the modularity group — fits alongside other pieces in the pipeline. The main point is that the actual work of building the RPMs that become part of a module is still done by Koji or Brew, the build systems we've been using for an eternity now; the Module Build Service serves as a sort of workflow orchestration layer on top of Koji and Brew, describing what needs to be done.

On to the next slide. The way things are built today is that we have a tag for a particular release of the distro — the f25 tag — and as you build a particular RPM, the buildroot for that build, which needs to satisfy its BuildRequires, comes from one repo, from one tag. Once the build completes, it's added back into that tag — in most cases; sometimes it's added to a candidate tag and ultimately promoted into that tag. A consequence of this is that the tag is a moving target: you can't just say "I depend on f25 as of a particular day or a particular moment" and know what the contents of that buildroot are. Buildroots for modules involve something different. Those tags — mega-tags like f25 — are currently created by RCM, by hand, at the time we declare we're doing a new branch for a new release: new tags are created, populated with content from the previous release, and things are set in motion. Modules get their own buildroots and their own tags for each build of the module, which means this is not something we can ask RCM to do — we can't ask a human to create a new tag by hand every time we want to do a new build of a module. So the Module Build Service, as the orchestration layer around Koji and Brew I was talking about, automates the process of creating those tags and preparing the buildroots in a way that makes sense for each build of a particular module.
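A small sketch of the bookkeeping involved — the naming convention is invented for illustration, not the Module Build Service's actual scheme — showing why every module build needs its own tags and buildroot rather than a shared, hand-made tag like f25:

```python
def module_build_tags(name, stream, version):
    """Derive per-build tag names for one module build.

    The naming convention here is made up for illustration; the point is
    that every build of every module stream gets its own tags, which is
    why a human (RCM) can't create them by hand the way f25 was created.
    """
    base = f"module-{name}-{stream}-{version}"
    return {
        "build_tag": f"{base}-build",  # the buildroot: only declared deps go here
        "dest_tag": base,              # completed RPMs of this module land here
    }

def buildroot_contents(buildrequires, resolve):
    """Populate the buildroot from the module's declared build requirements,
    not from a distro-wide moving target like the f25 tag.

    `resolve` is a stand-in for asking the build system which RPMs a
    required module stream currently provides."""
    rpms = set()
    for module, stream in buildrequires.items():
        rpms.update(resolve(module, stream))
    return sorted(rpms)

tags = module_build_tags("django", "1.9", "20170127")
print(tags["build_tag"])  # module-django-1.9-20170127-build
print(buildroot_contents({"base-runtime": "f26"},
                         lambda m, s: {"glibc", "gcc", "rpm-build"}))
```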
This is a depiction of the container architecture the modularity team has come up with, showing how different modules map to different layers in the containers we intend to build and ship. The base runtime module corresponds most closely with the base images we build today for an F25 or F26 release. For each individual module we produce, the intent is to build layer-2 images corresponding to install profiles defined by those modules — one module might define two different ways it could be built into a container. But those containers aren't the end product, ready to be used: the particular use cases, the configuration, and the running of an application happen at a last step, so we can special-purpose them for different deployments and different needs, while the software from the module is provided as a reusable layer in the middle.

So let's talk about dist-git. This is work we've been discussing for a while, and the plan is to start it shortly after DevConf — the F26 freeze comes at the end of February, and we'll be deploying the Module Build Service for that. After that deployment is where I think things get really interesting and really start to change for the distribution as a whole. This is what branching looks like for any particular component in Fedora today — say it's python-requests: we have a branch for each release of the distro, including master, which maps to Rawhide. The way we want to change that is to stop producing new branches that correspond with the upcoming release of the distro, and instead create branches that correspond with a release of the upstream package — the upstream project the component relates to. That has some pretty interesting implications, both challenges and benefits, for the maintenance of software in the distribution. Just for comparison, we intend to do the same thing with branches for internal components. Today we have branches for RHEL and for layered products, named as such, and that has a multiplicative effect: every combination of things we want to produce creates a new branch with new maintenance overhead for the entire lifetime of the product. The idea is that by creating branches that correspond with upstream releases, we'll have a much more maintainable way to backport security fixes onto those branches — one that isn't product-specific and doesn't grow with the number of ways we want to deliver that component.

The way we intend for these to get used: on the left are two RPMs, python-django and python-requests, each with a master branch plus branches corresponding to upstream releases of those projects. On the right is a module corresponding to a whole Django stack, which needs to pull in the RPMs for Django but also all the dependencies that go into supporting that release of Django and making it usable. We could have a 1.9 branch of this django module that declares a dependency on the 1.9 stream — the 1.9 branch — of the Django RPMs, and also declares a dependency on the 2.12 branch of python-requests. As long as 2.12 has point releases — 2.12.4, 2.12.5 — all of those, depending on the properties defined for that branch, can be carried in that branch and consumed by the module; but not the API-breaking changes that come with a 2.13 release of python-requests. A module — or anything application-like that needs to sit on top of the base runtime — can pick and choose the support streams of its components to fit its own purposes, instead of having to choose from whatever is already available in the whole distro.
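A tiny sketch of the compatibility rule being described: a 2.12 stream branch keeps absorbing 2.12.z updates but never a 2.13. The version check is deliberately naive and the module metadata is hypothetical; it's only meant to show the idea:

```python
def stream_accepts(stream, release):
    """True if `release` belongs to the `stream` branch (naive string check).

    e.g. the "2.12" stream of python-requests carries 2.12.4, 2.12.5, ...
    but an API-breaking 2.13.0 requires a new branch, and a deliberate
    choice by any module that wants to move to it.
    """
    return release == stream or release.startswith(stream + ".")

assert stream_accepts("2.12", "2.12.4")
assert stream_accepts("2.12", "2.12.5")
assert not stream_accepts("2.12", "2.13.0")

# A module pins the streams it builds against (hypothetical metadata shape):
django_module = {
    "name": "django",
    "stream": "1.9",
    "buildrequires": {"base-runtime": "f26"},
    "requires": {"python-django": "1.9", "python-requests": "2.12"},
}
```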
So there's lots to think about with a new branching structure. For one: are we going to have an f27 branch of any component at all when we get there? We're not making this change for F26 itself, but for F27 — will we even have one? My thought is that no, there won't be a point to it, unless there are whole subsections of Fedora that aren't ready to jump to an upstream-mapped branching pattern; having an f27 branch as a fallback makes sense for those cases, until every corner of the community is ready to move into that world.

EOLs become really interesting. Right now, when we branch for a new release, every branch in that release has the same EOL — about two years for Fedora — after which it's no longer supported. But if we break components out to have their own EOLs, no longer tied to a release, we need a way to define an EOL for each component branch and then propagate the EOLs of multiple components through to the modules that depend on them. The thought in Fedora is that we'll use PackageDB to create the new branches, and we'll have to modify it so we can define EOLs corresponding to those branches. So when I create a new python-requests 2.12 branch, I might say it has an EOL of only six months — and any module that depends on that particular stream of python-requests therefore has, at most, an EOL of six months itself. Our tooling that figures out when branches need to be retired currently assumes branches are retired en masse when a release of the distribution is retired; now we'll need per-component tools that figure out when EOLs have expired and retirement needs to take place.

Yet another question: what force in Fedora will drive the creation of new branch requests? Currently that's done for us automatically at release time, when there's a mass branching for F27 or F26. With branches like this, I think it will come up on a use-case basis: if we decide we want to ship, say, a postgres module that needs a dependency on a particular stream, and a suitable stream isn't found, we'll need a mechanism in PackageDB to request that new branch. And what happens to components that don't receive any new branch requests? It's an interesting way to identify components in the distribution that have existed only because they continue to be there. If no modules are requesting new branches for them, and all of their other branches' EOLs have expired, then there's no reason to keep them around anymore — there's no force keeping their branches alive.
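A sketch of the EOL bookkeeping this implies — hypothetical data, but it shows both derivations described above: a module's EOL is the minimum of its components' branch EOLs, and a branch that is past its EOL with no remaining dependers is a retirement candidate:

```python
from datetime import date

# Hypothetical per-branch EOLs, as they might be recorded in PackageDB.
branch_eols = {
    ("python-django", "1.9"):    date(2017, 9, 1),
    ("python-requests", "2.12"): date(2017, 7, 1),   # a six-month branch
    ("python-requests", "2.13"): date(2018, 2, 1),
}

def module_eol(component_branches):
    """A module can live no longer than the shortest-lived branch it pulls in."""
    return min(branch_eols[branch] for branch in component_branches)

django_1_9_module = [("python-django", "1.9"), ("python-requests", "2.12")]
print(module_eol(django_1_9_module))  # 2017-07-01, capped by requests 2.12

def retirement_candidates(today, branches_in_use):
    """Branches past their EOL that no module stream still depends on."""
    return [
        branch for branch, eol in branch_eols.items()
        if eol < today and branch not in branches_in_use
    ]

print(retirement_candidates(date(2018, 3, 1), set(django_1_9_module)))
```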
Lots to think about, lots to do. If there are any questions, I'd be glad to take them.

So — the names I gave for the branches were things like 2.12 and... oh, I need to repeat the question. Right: the question was, how do we deal with upstreams that either don't do releases or don't use semantic versioning? Some projects use a year-based release number, or just do a timestamped release every day, for instance. My response is that the examples I gave were numeric, semantic-versioning ones, but don't read too much into the name of the branch. The actual metadata about what the branch is, what its EOL will be, and what to expect from changes in it would be defined in PackageDB. It depends on your use cases — on how you want to end up distributing it. A nice way to do it might be to create two branches off the bat, one called "super stable" and one called "kinda stable" — those are arbitrary names Tim made up, but that's fine. Any module that wants to consume those streams just needs to know those branch names from you and depend on them, and you decide the EOLs — how long you're willing to support that. In Fedora, right, we would have different considerations than for how long we're willing to support a maintenance branch of something internally, but we have that flexibility now.

Adam? Yes, that's something we need to think about. I would definitely call it a different kind of service — it sounds like a corollary to something like a TCMS. So yes to that: when discussing this work in other contexts, the idea came up to use the policy engine to drive that same kind of work, because if we know that in this context, this context, and this context, the union of what's needed is these tests, then really those are the only tests that need to be run. You could do it in reverse, but there's a downside to that. One of the things we thought would be a bright spot in this architecture is that you could add new tests, run on every change, that were superfluous or experimental, without breaking anything. Right now, with the Errata Tool internally, if you want to add a new test you have to really be sure what you're doing, because you're tying it right into the big behemoth. It would be nice to be able to run tests that are silly, learn from them, and experiment with them.

When can we expect this in Fedora and in RHEL production? The answer is different for each of the three themes. On the measurements side, we won't be doing any of that in Fedora, just for lack of time and resources. On the CI side, you'll see a portion of it in Fedora first and another portion internally first. The portion you'll see in Fedora first, before F27, is the pull-request-based flow — let me move down the slides, I'm going to refer to this one. You'll see this in Fedora, without this part, really soon: a pull request interface directly querying ResultsDB. That means it won't be as flexible — all tests will matter, instead of being smart about which ones do. So the MVP, the minimum viable product, is to do this in Fedora without the policy engine, and we'll insert that afterwards. The part you'll see internally first is actually this part, but without this and without this: it's about decoupling the things that are sistered to the Errata Tool right now and make it really slow and cumbersome to develop — things like rpmdiff and the other tools that get run there. Does that answer the question? Great.

That's a good question. I keep master there because I can't imagine a world without Rawhide. You could say that master is the only branch that has no EOL. I don't know — maybe we just kill it, really. It would be whatever is latest — or, if nobody wants 2.12, then 2.12 simply never gets created. And something would have to have declared a dependency on master beforehand, then absorbed a change that broke it, and that would mean it needed to request a supported branch on a new line. To repeat the question into the microphone: the question was what to do with the master branch in this diagram.
Yes — Steven's point was that you only have to create a new branch if there's a breaking change, but what constitutes a breaking change is also determined by the confines of what you declared the branch would support. You might have a branch that you say is your wild, almost-upstream branch, special for your component, and there might be other things that expect to depend on something that produces breaking changes like that. It's not necessarily a bad thing. I'll move to other questions — Thomas in the back, you'll be next.

Yes — the question is, with auto-waivers, why would you run any test automatically in the end anyway? Did I not repeat the question? I did — I'll repeat it again: why have auto-waivers? If you have, encoded somewhere, the knowledge that you're going to automatically waive a test no matter what the result is, why run it in the first place? We have a lot of things that already exist in our infrastructure — tests that get run anyway today, where we don't necessarily control who decides when and how they get run — and we also have a lot of auto-waiver rules already defined today. So it's about making sure we retain backwards compatibility and can actually make this change in production, still able to do what we did yesterday, while making ourselves flexible enough for tomorrow. That's part of our requirements document. The point is that we certainly need to be able to waive things for certain products or certain modules but not for others, which is the case in the existing auto-waiving rules, and that's behavior we'll need to make sure gets retained. Because you spoke earlier, Tim, I'm going to go to misc on that side of the room.

The question was: did I shave my beard because I'm now working on enterprise products? The real answer is that I shaved my beard because when it gets cold and there's not very much sunlight, I just need to shake things up. So, yes.

Let me repeat the question and you can tell me if I got it right: how do we convince QE teams, with their existing workflows, to provide the data we would use to populate ResultsDB — convince them, or force them; how did we do one of those two things? The answer is that we did neither. We identified initiatives that were already ongoing to publish some data from those different Jenkins masters, and then we worked with those teams to modify the format of the messages enough to be suitable for their needs as well as ours. The most expensive part of that project would have been going through all of the hundreds of Jenkins masters one by one and getting each team to enable publishing, so it's about latching onto things that already existed. Like I say, we currently have data from Platform that way, and we don't yet have data for all products, but we have a plan to get it — ShipShift is that initiative.

Yes — I plead the fifth, is that the thing you say? Promoting scratch builds: to repeat the question, is there a way to promote scratch builds in this pipeline so they can make it through the system? To be honest, that's one of the major problems we have with NVR uniqueness in Koji and Brew in this situation. Promoting scratch builds is one of seven options we identified to solve that problem, but we haven't landed on which solution we're going to use. One of the competing ideas is to use namespaces in Brew — a feature that doesn't exist today, but that's in the Koji 2.0 docs.
That would allow a null namespace, where we could rebuild the same NVR multiple times without it conflicting with previous versions of that same NVR. It is a problem.

Steph — yes, the question is: are the tests stored with a package, with a module, or with both, and are they versioned along with the software? In both Fedora and internally, they're stored in dist-git — not in the same repo where the software is stored, but in a repo of the same name under a different directory. We've had discussions in the past about whether they should be in the same repo, and it may yet be resolved that they wind up there, but today they're in different repos. To the second question: we have namespaces available, and the capacity to execute tests associated with either the RPM or the module. And to the last question: yes, they're versioned along with those things. In Fedora, the branching structure of the test repos corresponds exactly to the branching structure of the RPM and module repos, as do the ACLs, so anyone who has access to modify the software also, by default, has access to modify the tests.

Amanda — so module integration testing gets to exactly that point: what's the expectation for module integration and module testing? I think it's the same expectation we'd have for RPMs and RPM integration today, even though we don't have a good story for actually testing combinations of RPMs. To test a module that declares dependencies on other modules, we would define tests in the dist-git repo corresponding to that module, and by exercising that module's API it's assumed we'd be testing the modules it depends on as well. And yes, exactly — on streams of modules too: we only showed a module depending on RPMs, but modules can also depend on other modules.

Yes, that's a good point. The question is: how are we going to deal with user expectations if a release of Fedora previously had a unified EOL of 13 months, but the components going into it have willy-nilly branches with EOLs of their own making — is that a correct rephrasing? My response is that we would model the core Fedora release — "core" is a loaded word in Fedora, but the base release, say Fedora Server — as a module that depends on other modules, which depend on the RPMs going into them. For any module, its EOL would be derived: the floor, the minimum, of all the EOLs going into it. So if we go to create Fedora 29, and we look at all the RPM branches going into it, and one of them is going to go EOL three days after release, the tooling will have to alert us and say: Fedora 29, when we release it, is going to go EOL three days after we release it. That's the prompt to go to that component and say to the maintainer: can you create a new branch with another 13 months of EOL? And let's say that community member says, "listen, I don't have time to support a branch of this for that long" — then we need someone else from the community to be able to step in, create that branch, and support it, to support the release. The point is that there will be this disconnect between things in the core and things on the outside; I'd just say that that will be the case, and it's a change in user experience. I think it maps to the rings discussions that have been going on for years now.
For the room and for the recording: the point was that the user experience will be much worse, and I think that's a conversation to keep having outside the room — we're out of time to have any more discussion here. So, thank you.