So welcome again to DevConf.CZ 2021. I am Iri Daniek and I'm the moderator. We are now going to have the Fedora CI SIG meetup. I will hide my video and leave you to it; I will be approving other people who want to join. Have a good time. Also, the Discord link is posted there if anybody wants to use Discord for conversation. I'm also adding the Etherpad, which is currently empty, but if someone wants to add notes or topics there, let's maybe use it to coordinate and see how it goes.

Let me see how large the audience is. Okay, we have 17 people in the room. My request to everyone in the room: if you want to participate, please join in — we can give you a voice so we can hear you talking. If you have questions, please post them in the chat, in the Etherpad, or in Discord. And Miroslav, can I ask you to watch the Discord channel, if you are there, and see if there are questions we need to discuss?

So let's set the rules. The top priority is to answer questions from the audience right away, so if you have something, don't hesitate to ask and bring it in. This is more of a meetup for people who are maybe new to the topic of Fedora CI; we want to share as much as we can and use this interactive session to help you understand what's going on. The second priority is to talk about our current status and our plans for the near future.

In the room we have: me — I'm Aleksandra Fedorova, from the Fedora CI SIG, and I'm also a CI engineer at Red Hat working on CI for Fedora, CentOS Stream, and RHEL. We have Miroslav Vadkerti, who maintains the Testing Farm service, which provides testing as a service for all of our CI systems. Miroslav, maybe you want to introduce yourself more.
I think we are creating a testing system as a service which is basically shared between various users. Fedora is one user, but then we have internal users in RHEL, and Packit is another user, and we are trying to integrate with more. Fabien, we are trying to integrate with Zuul, which didn't happen yet, but yeah — we are providing a testing system as a service. I've worked on CI for a long time, involved in various pieces all around the place.

We also have Fabien, who represents Zuul CI and provides Zuul CI as a service for Fedora. Fabien, do you want to introduce yourself? — Yeah, sure, thank you, Alexandra. I'm working in the OpenStack production chain team, so we maintain a Zuul deployment, especially for the RDO distribution. But we also use the same system to provide CI for Fedora, for dist-git especially. So this is an additional system that you can opt into if you want. I think that's it for the presentation.

— Okay, do we have any questions so far, or shall we go to our topic list? I will post the topics into the Etherpad right now if I find it. Do we have anyone in the audience who is a Fedora package maintainer and who works with the CI? Anyone in any of the chats connected to this session? Okay, awesome, we have people who are active. Good to know.

Okay, let me find the tabs, because I'm completely lost right now. For the topics we want to cover, I think we can first start with the Zuul update, because I know, Fabien, you have a lot of interesting new stuff happening, and there was a mail to fedora-devel recently about the update. Can you give us a brief reminder of how it's going? — Yesterday we had a presentation about the update, so for those who weren't there, I can recap a bit. It's now easier to get the default Fedora Zuul CI jobs for a dist-git repository. Previously you needed to do some settings in the dist-git repository.
That's no longer needed, because now we trigger the jobs based on Fedora messaging, so it's really easier. And thanks to that, we were even able to add a bunch of repositories: we have a tool that can tell which dist-git repositories receive pull requests, and based on that we can say, okay, this dist-git repo will receive CI, so it makes sense to add it to Zuul. We did that a couple of times, and we are now at around 500 dist-git repositories that we provide jobs for, and we think we will continue. If people are happy with the CI we provide and it makes sense, we will keep adding more to Fedora Zuul CI — but it will also depend on our compute capacity. For the moment it's quite fine; it doesn't need really huge compute, so we can add more.

I think there's also good news: we now have support for GitLab in Zuul. So if the move of Fedora dist-git to GitLab happens, it seems we will be able to switch quite easily, because the jobs that are generic are platform-independent; it's just the driver that we would switch from Pagure to GitLab. Yeah, feel free to ask if you have questions. If not, I'll continue on the...

— Maybe a comment here: whether or not Fedora will switch to GitLab is, I think, still an open question. The more support we have, the better, because we can then also consider using Zuul for some kind of source-git workflows, or maybe even CentOS Stream workflows; we'll look into that. Whether this will be the story for Fedora, we'll see. I know Neil is on the call — he probably doesn't want to hear about Pagure deprecation, so let's not talk about it yet. By the way, Neil, do you want to join the call? If you do, let us know. — I just woke up, and the first sentence I hear is that Fedora is trying to switch to GitLab.
That's a depressing thing to wake up to — let's not go in that direction. Something I was interested in after hearing you talk about the update to Fedora CI: how straightforward would it be for other projects to leverage the same technology, the same practices, and the same structure you have put together for Fedora CI? In openSUSE we've deployed a Pagure instance at code.opensuse.org, and we are trying to figure out what CI infrastructure we're going to use for it. There's been a general push to start pulling our packaging out of the custom version control system of the build system SUSE uses and into Git, so people can work with it with Git version control and do things like fork it, branch it, test it, all that sort of stuff. One of the main advantages of the Open Build Service is that it automatically does all this dependency testing and dependency checking, and not having that facility implemented somehow in CI on Git is a massive downgrade. It sounds like what you have with Fedora CI can actually do this — so how easy is it to replicate?

— Maybe before Fabien answers for the Zuul part, I want to clarify one thing: Zuul CI and Fedora CI — are they the same thing or different things, and where are we?
So the Fedora CI SIG is a joint group which is supposed to be a good place for all kinds of CI things. If, for example, you created your own CI engine tomorrow and wanted to try running it for Fedora, come to the Fedora CI SIG and we'll be happy to talk to you and see if you need infrastructure help and so on to try your CI solution. That's one direction: the Fedora CI SIG is a place for experimenting with CI.

Then there are two things we also do under the umbrella of the Fedora CI SIG. One is pull request testing for Fedora — for both sources and packages, for both normal packages and RPM packages. The other is the generic tests and the gating framework, including the gating for dist-git. These two parts operate differently. When we work with pull requests, there is a strong presence of Zuul, because Zuul knows how to handle pull requests. But when we talk about gating, there is Jenkins and the Jenkins pipelines that we run. Internally they use almost the same content, but they are two different subsystems. So when we talk about porting Fedora CI work to other places, we may consider the pull request part or the gating part, and both are interesting. Now I'll hand it to Fabien to talk more about the possibility of applying Zuul elsewhere, it being open source.

— Yes. So the platform we use for Zuul is called Software Factory; inside Software Factory we have Zuul and Nodepool. Nodepool provides test nodes and test containers for Zuul to execute jobs on. Zuul is completely open source, and Nodepool as well; they are developed by the OpenDev community, which provides containers to deploy Zuul and Nodepool. So that's one way to deploy them: using the containers provided by the OpenDev community. But you can also use the packages we provide through the Software Factory distribution, and now we even have packages inside Fedora for Zuul and Nodepool, so that's another way to deploy it.

Then there are the jobs we created for Fedora, which I think are generic. There is a repository on pagure.io called zuul-distro-jobs, which is a library of jobs designed for Zuul. It's essentially Ansible playbooks and roles, so it can be reused very easily: there is automation and roles to build in Koji, to build SRPMs, to run builds in Copr — different kinds of things that can be reused as Zuul jobs. And then we have the Zuul jobs that use this library of roles. This is, I think, quite generic. I don't know how it actually works in openSUSE — whether there is a Koji or not — but if there were, it would just be a matter of changing the settings; after that, with Zuul, it's just a matter of designing your own jobs as Ansible playbooks and roles.

— I feel like there could be a point of collaboration on this distro-jobs setup, because yes, we have Koji and Copr, but since it's Ansible playbooks and roles, you can generalize it and maybe replace some parts — like "call the build system to give me an RPM package" — with a different implementation for a different setting. So there is a path here to research, I think. And as I said, Zuul is very powerful when it comes to working with Git forges and the pull request workflow: Pagure is supported, GitLab is supported, I think Gerrit is supported — even though you don't like Gerrit, I know. And what about GitHub — do you have a GitHub driver as well?
— In Zuul, yes. — It's really cool from that point of view, when you need close integration of testing with pull requests and so on; I'm really a huge fan of this. So, who likes Gerrit here? Basically only Alexandra. — I'm not going to denigrate Gerrit; it has its place, it's better than email, and I'm okay with that part. That said, I'm actually quite excited that Zuul can handle multiple drivers at the same time, because in openSUSE land we've only recently started with code.opensuse.org. We have a lot of stuff on GitHub, and there is obviously some path where we want to transition projects from there, but we may always have projects on GitHub for various reasons — multiple namespaces, that sort of thing. What we actually want is a lot of what you're doing with the Fedora CI stuff: people can take a thing — a code project or a package or whatever — plug it into the distribution test cycle, and see how it fits before ever having to release it into the distribution in the first place.
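For reference, the multi-driver ability discussed above is configured per connection in Zuul. A minimal sketch of what a `zuul.conf` fragment might look like, based on the Zuul driver documentation — the connection names are illustrative and authentication options are omitted:

```ini
# Illustrative zuul.conf fragment: one Zuul scheduler can talk to
# several code-review systems at once, one [connection] per forge.
[connection pagure.io]
driver=pagure
server=pagure.io

[connection github.com]
driver=github
server=github.com

[connection gitlab.com]
driver=gitlab
server=gitlab.com
```

Each tenant's projects then reference one of these connections as their source, which is what lets the same job library serve repositories on different forges.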
Now, we have some of this already with the pipeline of the Open Build Service, with openQA and some of the staging workflows and such, but that requires first attempting to submit in the first place, and that's just kind of a crappy workflow if you're trying to make something you aren't even sure you want to submit yet but want to test as if it's going into the distribution. A lot of what you're talking about seems to make that pre-stage testing possible: nothing has to be submitted into the distribution, but it can still be tested as if it were. I've seen this with some of the PR stuff done with some Fedora packages, where somebody sends a pull request and it runs CI tests as if it's being plugged into the distribution — basically pre-stage testing as if it's going in. That's the bit I'm really interested in having, because I think it's quite valuable, and being able to leverage the same kind of test stuff you're doing would be super cool. On Koji versus OBS: I don't know exactly how the Koji part actually works, but if it's possible to generalize it and write an OBS backend, that might even be useful for me in the general case — for some stuff, even running on pagure.io, I may want to test on both Fedora and openSUSE, and having those same kinds of things would be super cool.

— I think we need to talk more here about this pre-merge and post-merge testing, because the current issue we have when using Koji is that we cannot do pre-merge testing on a build artifact and then merge that artifact without rebuilding. When you work at the source level, as a normal developer of an application on GitHub, you have a commit, you test it, and then you merge it — you don't change what you tested; you merge it as-is into the master branch. But when you work with packages, what we do currently is: you create a pull request, we create a scratch build out of that pull request, and then we test the scratch build. What we cannot do, once we have tested the scratch build, is merge it into the main Fedora Rawhide, because scratch builds are not mergeable. So we discard the scratch build which was just tested, we merge the change, we build a totally new binary package in a different buildroot environment, and then we test it again — because we may have broken all the things in between. Then this package goes into gating, which is why we run an additional set of gating tests on that package, and maybe on the build group of packages, and only then does it get through to the final Fedora Rawhide. This scratch-build thing really breaks the flow of clean testing for the whole system. So when you try to adapt these solutions to other build systems — as you say, it may be better in OBS — I also really think we should review how Koji works with NVRs and build IDs; then we would have a cleaner workflow on the Fedora CI side as well.

— On what you're talking about: the reason I said that first thing is that I was slipping into how OBS tends to handle this. Every build gets an identifier, but every build is also checksummed, so if something is identical both times it doesn't trigger rebuild cycles for no particular reason — it tends to be somewhat intelligent about this. So if you build an overlay, for example, with a bunch of packages put together, and that overlay tests as good and you merge it, then unless the project is configured to rebuild everything willy-nilly, it won't — you tend to be able to use the same artifact as it's being merged in.

— We currently have a kind of overlay notion via the dynamic side-tag thing, which is technically
like a pull request at the level of binary packages, right? When we do gating for multi-package builds, we actually create a side tag populated with the needed packages, test it as a unit, and then we can merge the side tag without rebuilding it again — we merge it as a whole. This is a promising direction, I think, for Fedora CI: if we learn how to update side tags easily on change, without needing version bumps of everything, then instead of creating scratch builds for every pull request we would create a side tag for every pull request, do fast-forward merges on it without rebuilds, and close the gap between pull request testing and post-merge testing.

— Right, that's absolutely a fantastic path forward. I think Pierre has the auto-release stuff that makes the release field dynamically generated based on the history of Koji builds, and if what you're talking about is where you want to get to, that sounds like a path to help make it become a thing — unless I'm misunderstanding something here.

— Yeah. If we can get an NVR in Koji that is incremented every time we build a new package, without changing the source code, then we can rebase the pull request, get all the new builds, and you don't need to manually bump release versions or whatever — you just update the package, the side tag is updated, and if it fails for some reason, say the wrong order, you build it properly again and then you merge. What I'm worried about with Pierre's change is that it kind of bundles a lot of stuff: there is changelog generation, there is the build-ID thing, and it feels like those could be separate. But we can discuss it more on fedora-devel, I guess.

— Personally, I'm only really interested in the auto-release bumping stuff. I don't particularly care for the changelog thing, but some people think the
changelog thing is useful, so it's in there. I just really want the auto-release stuff; I don't really care about anything else — the auto-release stuff is the golden egg for me. I want that to make it through, and I want us to eventually be at a point where that's just the default way we do things, because I've seen how much less of a burden that is. In openSUSE I'm spoiled: that's how it works — you don't get to control the release field, OBS rewrites it for you; it's going to ignore whatever you set and use its own.

— I would be careful here, because I think the release field is needed. We have three moving parts, right? We have a moving part for the sources, and we version that part with the version field — our sources are tracked by the version field. Then we have the spec file and patches and everything else in the dist-git repo, and we version that part with the release field right now. And then the third moving part is the build environment, and we currently don't version that part at all. What you're saying is: move the release field to track the build environment instead of the spec file. But I'd say we need the release field to track the spec file — if I change the spec file, the release field should bump — and I need a third number tracking my build environment, most likely a timestamp or just some ID, because it's hard to version a build environment, but it should be an identifier. So I need all three: a version, a release, and a build ID, because they serve three different purposes. It's complicated.

— The thing is, I like not having to think about what that number needs to be; as long as it always goes up, I don't care. That's kind of nice when I'm working on the openSUSE stuff, because I just set the value to 0 and don't care how it goes up, and it makes having to make changes to packages a lot easier — I don't have to think about whether I got the number wrong, because a lot of people
get that number wrong when they're doing it manually. I think Podman got two epoch bumps because they messed up how to handle snapshots, and that was not great.

— Okay, I suggest we make a short pause here. I want to check: any questions from the audience regarding Fedora CI in general, not just what we talked about right now? Or should we move forward — we have three more topics we may want to cover. I don't see questions in the current chat; I don't know about Discord and so on, so I will go on with one more topic from the list. Let's go with Fedora CI repos on GitHub and why it's now easier to add new pipelines. I mean, I added the topic, but I want to ask Miro to talk about it.

— Okay, hello everyone. I think Alexandra mentioned that in Fedora CI we run Jenkins. Jenkins is fine — it has its problems — but one nice feature in Jenkins is that you can point it at a GitHub organization (or a GitHub user, it doesn't matter; unfortunately Pagure is not supported), and it can monitor new repositories: if it finds a Jenkinsfile in one, it automatically creates the job for you and it starts running, so you can play with it. You can also open pull requests, and it will discover those too, and as long as the pull request is not removing the Jenkinsfile, it will basically create another job in that Jenkins instance for you. So you can play with it, try different things, and basically iterate quickly without needing Jenkins locally or anything like that. So if you would like to add new tests to Fedora CI, you can clone any of the repositories that we have there. I'm making it sound super simple — it's more complicated, of course — but basically you clone any of the repositories, make your changes, ideally with your test in a container, so you update the container reference that we have there, you update the YAML file that is there — the FMF file — and you just let Jenkins create the
job for you and let it run on Fedora updates as people create them. In reality, of course, it's not all that simple, but I think it's a nice feature.

— What I maybe wanted to add: previously we had Jenkins libraries — I think we even had a hierarchy of three Groovy libraries depending on each other — which provided functions that we then used in the Jenkins pipeline, and I think it was really hard for a newcomer who had just come to Fedora CI to understand where exactly to make a change to improve the pipeline, improve logging, or do anything nicer to this infrastructure. What I think is important in this upgrade — moving to GitHub — is that we didn't just move the code, we also refactored the approach, and the structure is now nicer. We still have one Groovy library, also there on GitHub, which provides functions like "map my build to my Koji ID" or "render my CI message in a readable form" — things like that. And the pipelines now have a nicer structure, because we have the pipeline's metadata, some helper functions mostly wrapped in that Groovy library, and then you just have a call to a script — usually a Python script or a Bash script or whatever — and that script does the magic of testing. So if you want to bring your new CI pipeline into our system, you basically copy-paste the Jenkins part and write your Bash or Python script to handle the testing logic. And if you know tmt, you can actually write a tmt request, so you don't even run the tests locally — you ask Testing Farm to run the tests for you. So I hope this makes it more accessible: when you read the gating results and the logs and you think "I want this log to look nicer," you really can just go to GitHub, send a pull request, and change it in a matter of minutes.
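As an illustration of the "write a tmt request" part above: a repository can carry a small fmf file describing a test plan that tmt (and hence Testing Farm) can execute. A minimal, hypothetical example following the tmt plan format — the plan name and comments are made up, not taken from an actual Fedora CI repo:

```yaml
# plans/ci.fmf -- hypothetical tmt plan
summary: Run the package's CI tests
discover:
    how: fmf          # pick up tests defined in this repository
execute:
    how: tmt          # let tmt run them on the provisioned machine
```

With a plan like this in place, the pipeline side only needs to point Testing Farm at the repository; the discovery and execution of the tests is described by the metadata itself.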
— We introduced some abstraction there, basically; people don't need to care about how the results will be reported, for example, or whether they will show up in Bodhi later — it's all hidden in that library, and if you copy-paste that Jenkinsfile it will be there. And like Alexandra said, you modify some metadata at the very beginning — there is a map of key-values — so the pipeline knows a few details about your test: a name, a test type, and stuff like that. But at the end of the day it's still Groovy, so... — Never mind Groovy: I don't mind writing in any language as long as I can copy-paste, and I find that the current pipelines are possible to copy-paste. That's a key property.

— Okay, any questions or comments from anyone in the room? Miroslav, maybe you can talk more about Testing Farm — what Testing Farm does for the Jenkins pipelines that we run.

— Yeah, sure. Sorry, I need to close my door so that hopefully you don't hear any noise. So Testing Farm is a service that we are creating, and it's used as the backend for running tests. Lately we also implemented this nice HTML page with reports. Currently, in the Fedora CI pipelines, Testing Farm's output is xUnit — that's what I'm trying to say, the standard format for outputting test results — and this xUnit is translated into HTML, which should be quite a reasonable outcome of the testing; people should be able to get to it from the Bodhi interface directly with one click. Maybe I can show it — I'm not sure if we've shown this before, because this is basically new from the last year, right? We didn't have this before. Can you see my screen? — Yes. — Cool. It's slow, it's still loading here... Maybe I can show GitHub first. On GitHub, if I look at the tmt project, which has a lot of tmt tests — it's good for presentations — if I look at the pull requests here... I should mention that we support not only Fedora but also CentOS 7; even CentOS 6 will be
supported for some cases, even though it's EOL. So here you can see the results from Testing Farm — this is the Packit integration. If I look at the results now, with one click you can get the reasonable HTML output of the tests; it looks like this. It's expanded by default — I need to fix that — but basically this is what you get as test output. These are all the plans that have been run there on CentOS 8, and you should be able to fairly easily discover the individual results; of course everything passes here. So that's the first experience.

The second one is from Bodhi, where I can again show some package — I don't know which one... let me try tmt again, maybe. The experience from Bodhi: if I look at some Rawhide builds, in the Automated Tests tab there should be a link here... ah, this is an old build — we remove the artifacts after some time, and this is two months old, so those logs are gone. Let me try something else: for podman you can expect a build every day at least, so it's a good candidate... I'm not sure the tests aren't broken there, let me check... yeah, cockpit is also a good candidate, a day ago, let me try this. Right, so this is the experience from Bodhi: with one click you get — and these tests were waived here — this is also HTML output, but this one is directly in Jenkins; it's generated from the same xUnit that Testing Farm provides, and you can get with one click to the logs from the cockpit tests. — That's sweet. — Yeah, so this is the current experience we have: fairly simple, but already usable. And there were some errors here... these are the things I wanted to show that we didn't have last year.

Also, Alexandra mentioned that all the generic tests are now also written in tmt, and that gives us quite a nice HTML output too: rpminspect, rpmdeplint, installability — all have fairly nice output for investigation.
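The xUnit-to-HTML translation described above can be pictured with a toy sketch. This is not Testing Farm's actual code — just a minimal Python illustration of the idea of turning xUnit results into an HTML table, with an invented sample document:

```python
# Toy sketch (not Testing Farm's implementation): render xUnit XML
# test results as a minimal HTML table of PASS/FAIL rows.
import xml.etree.ElementTree as ET

def xunit_to_html(xunit_xml: str) -> str:
    """Render an xUnit document as a minimal HTML results table."""
    root = ET.fromstring(xunit_xml)
    rows = []
    # xUnit may use a <testsuites> wrapper or a bare <testsuite> root;
    # iter() covers both since it includes the root element itself.
    for suite in root.iter("testsuite"):
        for case in suite.iter("testcase"):
            failed = (case.find("failure") is not None
                      or case.find("error") is not None)
            status = "FAIL" if failed else "PASS"
            rows.append(f"<tr><td>{case.get('name')}</td>"
                        f"<td>{status}</td></tr>")
    return "<table>\n" + "\n".join(rows) + "\n</table>"

# Invented sample input, shaped like a tmt plan's xUnit output.
sample = """
<testsuites>
  <testsuite name="plan/smoke" tests="2" failures="1">
    <testcase name="/tests/install"/>
    <testcase name="/tests/upgrade"><failure message="boom"/></testcase>
  </testsuite>
</testsuites>
"""
html = xunit_to_html(sample)
```

The real report obviously carries much more (logs, timing, provisioning details), but the core transformation is this kind of walk over suites and cases.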
Maybe I should have also shown those other tests in the Bodhi interface — this one is not here, I'm not sure why. Right, so these tests — rpmdeplint, installability — also have the same experience. You have the installability test, which tries to install the packages, tries downgrade, remove, update — very basic, but the very same experience in this case. And the same goes for rpminspect, for example: you have here all the rpminspect checks, and you can see that there's an ABI failure, a regression from the last run. Okay, just a sneak peek.

In the future we would like to move away from Jenkins, I think — although it's okay-ish as long as it shows the test results; it's worse when there is some infrastructure error. Debugging that is currently, I think, not possible for normal humans — for non-CI people there is no chance currently. We have at least improved the error messages there lately, so we try to return something reasonable, but this is something our operations team will need to do: monitor the errors. Currently we are fighting a bug on Rawhide that causes some tests to just hang at the very first step, which is causing quite a lot of errors, but otherwise things look fairly stable.

— Maybe I missed that part completely: I was wondering, can Zuul run tmt tests? — No, I didn't finish that. We talked with Fabien before Christmas, and I have a task for it. We currently decided that Zuul jobs will run tests directly via Testing Farm — that's how we want to have it: Zuul integrates with Testing Farm as well, and we'll see how it goes. If that turns out to be problematic, we can run things directly, because tmt has a runner, so we could do it directly from Zuul jobs. But currently the agreement with Fabien is that we will basically use the API — he has an API key stored safely on his side, on Zuul's side — and we will be using that to contact Testing Farm to
do the testing for us. — Is that for the dist-git tests defined in the repo, or also for the generic pipelines? I guess for the generic pipelines Zuul CI has its own implementation. — I'm not sure; they don't have the tests that we created, like installability or rpmdeplint — well, maybe rpmdeplint is run, that's questionable. We could run those again via Testing Farm in the same way, because basically it's just using the API — I'm open for discussion there. It should also work to run them via Testing Farm the same way Fedora CI does, or we could move them to run directly on Zuul. In the service setup, Zuul would basically be just a thin wrapper: it wouldn't do the real heavy lifting; that would be delegated to Testing Farm. But I'm open for discussion — we probably need to discuss that more.

— I think it could be a better idea to directly use the test nodes that are provided by Nodepool, because I feel we are going to put twice the load on Testing Farm: for those dist-gits where you are already running jobs, if we run jobs as well, we will run the tmt jobs on Testing Farm twice, and that's not going to be good.

— Currently we use Fedora AWS — hello, Duncan, on the call. Really, currently... I don't have numbers; I hope for next year I will have some nice numbers, but we have a lot of capacity. Also, Testing Farm is designed so that we can move workloads between infrastructures: currently, upstream, we have only AWS, but we are planning an OpenShift cluster with virtualization support, so that will be something we can also use, and we will have infrastructure from Red Hat available as well. So I wouldn't be afraid of that currently — but yeah, it's all discussable.

— I'm thinking that maybe we want to deduplicate what we run: when we create a pull request, we need to run these tests only once, and if Zuul can cover the TMT
-based tests, we probably need to disable the Jenkins job we maintain for pull requests that does the same thing. So as soon as there is a settled way of running tmt-driven tests on Zuul, we just onboard all packages to the Zuul pipelines, switch off the pull request part, and keep Jenkins for gating.

— Yeah, that was a question — by Tom Steller, I think? He had this question yesterday, I think on the Zuul presentation, and I remember he was asking why there is this duplication, because for users it's a little bit confusing. So yeah, once we have tmt support there for Zuul, dropping the dist-git pipeline is definitely fine. Of course, for the tests namespace I think we will keep it, because we also have pull request testing over the tests namespace in dist-git — this is something the QEs are using to share tests from RHEL to Fedora in the dist-git tests namespace, and there we also want to have CI. So maybe that part will stay. But Fabien, do you maybe have plans for the tests namespace, or what do you think about it? Maybe he doesn't even know there is a tests namespace — that's usually how it goes. — So yeah... I think I know what it is: on src.fedoraproject.org we have the /tests namespace. — Yes. — Okay, so let me find it... I think, yeah, I guess it's no problem. — So if Zuul also supports the tests namespace, we can just completely disable the Fedora CI pipeline; I think that's completely fine. — Yes; for tests/python we have CI on that repository, so we support it, and we are able to run the functional tests — the CI tests, if any — from that repository.

— For me it feels like it actually makes even more sense to do it through Zuul, because then you can create dependent patches — a test and an RPM — and try to test them together as a group eventually. So I would be interested in developing in that direction, so that instead of running two different systems we all converge on Zuul. So my
ideal vision of the FedoraCIP on pull request is like Zool only we've just seen farm in the back end but driven through Zool then we can have across different projects in tests and namespaces where you can have a common idea of a testing wonder what do we need to do how much work it will take so Fabian if I currently say that my one pull request depends on the other then Zool will already handle that right so for example I submit a change to one RPM as a pull request like Libas Linux and then I submit a change to the DNF which depends on the pull request to the Linux so will Zool test these packages together already or not yet? Yes it does so with Zool there is a system of dependencies between the changes so it can be pull request or grid review so for a given change you can set multiple dependencies to other changes so it's going to be dependencies on different pull requests not merged yet and then we are going to to try to test that together so Zool facilitates a lot the work so in the Hansiwell inventory we have all the details we can have our jobs that react to that details dependency details so what we have implemented today is dependency support for runtime dependency so if you have a pull request on a package that depends on another pull request on another package you will have bus build in the test node so you are going to test the STI test with the dependent build into the same node but as of no unfortunately we don't have the support for the build requirement the build dependency mainly because we are using Koji and we are using scratch build and we cannot add a dependent build a dependent repository we have created previously into the mock root of the scratch build into Koji so this is really something that prevent us to provide the support so we are looking to use maybe copper so or maybe a local mock build on the test node so it's not clear yet or maybe side tag but it's depends about the API support into Koji to build to create side tag destroy side 
tag because we are going to create another side tag so there is a problem but with the side tags we have this NVR problem no side tags NVR problem as long as you create side tags and then not do real builds with them so you can use side tags populate them configure them and do scratch builds using them if you wanted to but there is a problem with side tags and that problem is you don't have a good way to configure repos outside of the nested hierarchy for your side tags so say you want to overlay another repository as an external source and plug that in together into your side tag that is not easily possible in Koji so you can only do a side tag that directly nest from tags that already exist and already configured if you want to have like a combination of you know let's say Fedora 34 build root and maybe the ELM thing and maybe like open stack or whatever like just throwing some random thing in the air like pulling all those together into a side tag is not straightforward and I'm not even sure it's even possible so there's no wide there's no way to what is it stitch together multiple things into a single side tag but if you're just directly saying I want to build off of F33 and I want to scratch build a bunch of things from there you can do that so side tags may actually help because you can also do things like configure extra mock flags in a Koji build so because you can configure those per tag and so you can do you can test for all the different architectures you can produce build artifacts that way through that but it isn't as flexible as I think it should be because it is missing the ability to do like disparate composition like side tags don't have all the functionality of a normal tag in Koji or at least I haven't seen that functionality exposed because I think that is exactly the system that is used for this so instead of actually has all this functionality that I'm suggesting like you can pull multiple projects in you can do like a wide integration very 
easily the only feature that copper is missing is the ability to configure macros and it used to have it and then they ripped it out and I would like them to put it back because it's very helpful to be able to define macros but that is the corner case a little bit it's a less of a corner case if we're talking about scratch builds the copper seem to be a better idea than all the regular Koji side tags it's kind of more flexible and it scales I think it scales better right now because it's also on Amazon it should get better like two days ago there were like 3k in the queue so we were waiting a little bit but they know about it so it's also on Amazon okay I want to follow up on the question from the chat from the ondry because we indeed discussed a lot of things and it may be not clear where like the package gets into this whole topic so if you're just starting to look into how to test your Fedora packages so the recommended way is to go and define TMT tests in your repository and we have a doc which I'm posting in here so TMT test is you put a file in the disget repository which says run this script basically and then or run this test framework there are more features to this but basically it's just a file which says run this thing to test my stuff and if you have TMT test then automatically you will get your gating test pipelines running this test on every both here update on every row hide update of your package so you will get it you don't need to do anything else for Zool support and nicer pull request testing you would need to add additional step right now I think so that you onboard your package to Zool but Fabien if I understand correctly you are now collecting all packages which have used pull request in the past and just bulk adding them to the Zool pipelines right? 
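[Editor's note: to make the TMT part above concrete, here is a minimal sketch of what such a test definition in dist-git can look like. The file path, summary text, and script name are illustrative, not taken from the discussion.]

```yaml
# tests/smoke/main.fmf (hypothetical path): a TMT test definition.
# The repository also needs a ".fmf/version" file containing just "1"
# to mark it as an fmf metadata tree.
summary: Basic smoke test for the package
test: ./smoke.sh      # any executable; a non-zero exit code fails the test
duration: 10m
```

With the tmt tool installed, `tmt test ls` should list the test and `tmt run` can execute it locally before the CI pipelines pick it up on the next update.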
Yes, this is what we are currently doing. We are trying to check the repositories that had at least two pull request updates recently; last time we did it was last week, and we checked the last three months and added around 200 dist-gits. There is quite a lot of dist-gits in Fedora, so I think we cannot onboard everything, but as a packager you can opt in if you want. We just try to drive more adoption, so we attach jobs to dist-gits that way.

Yeah, I see a possible improvement to the docs there. We need docs not focused on the tool but more like: "I'm a packager, I have this task, what are my three steps to achieve it?" We need this page in the Fedora docs; there needs to be some work there, and maybe it would be nice to coordinate it somehow. I tried to get the Fedora QA guys to help there, but it didn't work out, so we will need to do it ourselves. I think there is also some information there that is not relevant anymore.

Fabien, what do you think of moving the Zuul part of the docs to the Fedora CI docs as well? Currently I think the Zuul part is in the wiki, but we try to put CI docs on the CI docs site, so what do you think of moving that wiki page to AsciiDoc format?

I don't mind, wherever it is more convenient for you. Well, it's helpful for everyone else, because right now everything is kind of scattered. For a while now Alexandra and a few others have been saying, you know, we have all the documentation on the Fedora CI docs page, which doesn't actually have all the documentation, so it would probably be a good idea to centralize all of that in the AsciiDoc docs. I have my feelings about AsciiDoc, but putting it all in that one place makes it easily searchable, easily located, and easy to reference, and the team can actually review the changes made to it, make sure they're good, and keep it coherent. One of the problems with wiki pages is that they lose their coherency over time: either they rot, or people just randomly update them with their own things, and they start making a lot less sense. It's quite helpful to have it all in one place that is managed directly by the team who actually understands what's going on. That's the reason the packaging guidelines moved: nobody could find anything, and now we can find it all.

Yeah, I think we follow that, we just really need to put in some additional effort to make it more usable. We already have a repository, it's in AsciiDoc, and we have a pipeline which publishes it on the docs side. What we need is to migrate the whole Zuul part to this repository, and then Fabien, you would be able to work with this repository on your own as well; we need to add you the rights to it. And we need to create a better entry point for new packagers, to understand the situation, the difference between the CI systems, between gating and pull request testing, and where to start.

It makes sense to dedicate one day, maybe some meetup or some dojo or whatever, where we would look at the docs and update them together; I think that would help. I would definitely join. Good idea, the only question is when. Yeah, of course. These are hard times, but okay, maybe in March we can do something like this.

By the way, we have Fedora CI SIG meetings, which are bi-weekly. I don't know if Jim Barr is on the call; he is usually trying to organize and lead them, and he recently added the notification from the Fedora calendar to the CI mailing list, so now we don't miss them and we actually know when they're going to happen. And I asked him to also send the meeting notes to the Fedora CI mailing list after each meeting, so we make it more visible, not just to participants but to everyone, and it will be in the mailing list archives.

What about, what is this fedoraproject.io actually? Fabien? I'm not sure what it is, I need to check. I don't know, that is not a project... Oh. You know, because I'm working with softwarefactory-project.io all the time, it stuck with me, so I'm sorry. It's fedoraproject.org. Okay.

So the Fedora documentation should already kind of take you to the landing page for all this stuff, and if you make a Fedora Magazine blog post it will show up on there. I don't know what else you would want to put there.

Is the CI space there? It isn't linked from the main documentation: if I click through the main Fedora documentation, how do I get to CI? I don't see it there. I'll put it there; I guess it's not on the landing page yet, so we probably need to change the landing page on docs.fedoraproject.org to add a little bit.

Where's my Etherpad? I already have action items, many of them. Action item 1: add the Fedora CI docs to the landing page. Action item 2: add Zuul to the Fedora CI docs. Action item 3: add a "where to start" doc to the CI docs, and, how is it called, a Fedora CI docs hackathon, we have a hipster name for it. Okay, what else did we talk about? We still need to figure out TMT support in Zuul somehow, that's an ongoing task. Any other notes?

I'm not sure how much time we have, we have 15 more minutes, right? I think so. This is way longer than everything else I've been in today and yesterday. Cool. Okay, yeah, a Fedora Magazine article, that's one more. Actually, our time is up, it was supposed to end 4 minutes ago. Okay, we're good, it's the 5-minutes-over warning, so I started with the action items right on time. So I've got 6 action items, we know what to do next, and we'll probably close. We have the Fedora CI SIG, we have Fedora CI SIG meetings, mailing lists, we have documentation, and we welcome everyone to help us and to come and talk to us.

I would like to say that we now have a break, one hour or so, so if you want to go over time it's perfectly fine, I just didn't want to interrupt you. Thank you for giving us the time, but I think this is a good moment to stop, otherwise we'll give Alexandra like 20 action items, and that's probably not good. Yeah, I think we already have enough. Thank you everyone for joining, thank you all for the discussion, thank you Neil for providing the conversation topics. See you at our talks and our meetups, and enjoy the DevConf. Bye!