Hi, I'm Mikko. Some of you have been here before, but for those of you here for the first time: we have this whole series where we invite technical people to present their thinking, and to meet other people who are of course interested in how we do things, how we operate. You may have seen that this is something like the fourth or fifth session, and it has become a regular thing. Today we have Simo Tuomisto telling us how we do software deployment at the Aalto site, and of course also about FCCI via CVMFS. Simo works in Aalto Scientific Computing. He has basically been the name behind how we run operations at Aalto for quite a long time, he is a real expert on the GPU side, and nowadays he has been putting a lot of effort into Spack deployment strategies and into how we get software to researchers fast and isolated. Now I'll hand over to Simo to go through the details of how we operate these things today. Please Simo, go ahead.

Yeah, thanks. So, lots of words there. Some of the stuff that I'm going to be presenting is very much on the move, because in the winter we had a hardware failure on the build system and we had to set it up again. That was a good time to, well, evaluate what we have and adapt it, and now we are rebuilding it to be even better. So a lot of this is constantly on the move, constantly in development, but the core is still the same as in the old build system. What I'm going to talk about today: I made a few slides, they're not complicated slides, I wanted to keep this as simple as possible, but it's about how we do software installations at Aalto, on the Aalto Scientific Computing side, with our CI build system. So in this talk I'll try to explain...

"Simo, sorry to interrupt. Could you very briefly go through how you want to get feedback and how you want this HackMD to work? We are happy to answer, but some people are probably not aware."
Yeah. So if you go to the audience notes, over here in the questions section you can post whatever questions you like, and you can answer them yourselves or you can wait for me to get to them at the end of the talk. I'll probably glance at the questions during the talk, but I can't, or won't, multitask on them. So if you want to post any questions, put them there: questions related to what I'm saying, for example if I'm talking gibberish, or questions that you would like answered for your own site, basically asking how other sites have figured out these kinds of problems. In my talk I will be going through things from a problems-and-solutions point of view, basically how the build system came about. I think that is a better way of describing the build system, and it might spark some ideas: "okay, we have experienced this problem, how do you solve it?" It might be a good idea to post that here and let other people answer it or start a further discussion about how a CI system should operate. So you can press the edit button and you get this kind of view, where you can either stay in viewing mode or use this split view where you have the rendered document on one side and the Markdown source of the document on the other, so you can edit it freely. Please post questions here.
Okay, so about the presentation. Like I said, looking at the build system top down it looks really complicated, but in this talk I want to make it a bit simpler, so I will go from the bottom up and basically describe how it came to be, because it is trying to solve specific problems. In a sense it's a very simple system wrapped in a complicated system. It's complicated because it's like an onion: you have layers on top of layers, and on the inside it's very simple, it's basically just running commands. But there are lots of onion layers on top of that, and if I start to peel it from the top it looks complicated and there's more stuff coming constantly, whereas if you start from the center it's much easier.

The goal for the build system was to automate the boring work as much as possible without compromising the quality of the software that we build. Basically, once you have run these kinds of installation commands X number of times, you get really, really fed up with it, so we want to automate it and remove as much of the human participation as possible, so that you don't make mistakes and so on. The current build system is about the fifth iteration: previously we used EasyBuild, we used Jenkins, and now we have switched to Spack and Buildbot. This current version has been in use for about two years, but it's constantly evolving, so it's a Ship of Theseus, basically: you don't quite know where it begins and what the original one was. But the core build rules are very stable; we sometimes fix bugs, but they have been in use for those two years.

So how did this come to pass? Step one: we want to build some software, so we use Spack to compile software. Then we want to create Anaconda environments for users to run Python stuff, and for that we use Miniconda and Mamba. Mamba is a great tool, a bit beside the topic, but I highly recommend you check it out, it makes installations a lot faster. We use those to create the Python environments for our users. And then we use Singularity to build containers for specialized software.

Okay, how do we do this? I will only focus on the Spack side here, but it's basically a similar situation for the Conda side and the Singularity side. When you want to install some software, usually you download Spack, you get it for yourself, and then you run `spack install some-software`. It's very simple, and Spack is very good software: you just download it, you activate it, you run `spack install`, and then you can run your MPI program. But immediately when you try to take this into production there are some problems. The greatest benefit compared to EasyBuild, and at the same time a constant pain, is that Spack resolves dependencies dynamically, so you can get different dependency resolutions. If you install, say, HDF5 with MPI parallelization, it creates a different package than HDF5 without MPI parallelization. So you get multiple versions of the same package quite easily, and when you have multiple versions of all kinds of packages, you end up with a huge bunch of packages, and that becomes problematic for users. We wanted to make it so that there is only one OpenMPI, or something like that, or if there are two versions, it's clearly marked what the features of those versions are.

So how do we solve this? Spack has site configs that you can use to specify default versions and variants, and also the module naming standard: how the module names are formed, what kind of module name structure you want. These are very good. You can already get a much cleaner installation if you tell it that everything that requires a compiler should use certain versions of GCC and OpenMPI, so you can set these providers.
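As a rough illustration of such site defaults (the version numbers and naming scheme here are made up, not our actual site configs), the defaults and the module naming live in Spack's `packages.yaml` and `modules.yaml`:

```yaml
# Hypothetical packages.yaml: pin a default compiler and MPI provider
packages:
  all:
    compiler: [gcc@9.2.0]
    providers:
      mpi: [openmpi@3.1.4]
  openmpi:
    version: [3.1.4]

# Hypothetical modules.yaml: control how module names are formed
modules:
  tcl:
    naming_scheme: '{name}/{version}'
```

With defaults like these, plain `spack install hdf5` resolves against the pinned compiler and MPI instead of whatever the concretizer would otherwise pick.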
So what if you want to install the software with certain optimizations and certain compilers? Well, you can register compilers in Spack, for example GCC, and you can specify the architecture you want to build for. For us the target architecture is Haswell, because that is the greatest common divisor of the system, the minimum CPU architecture that we have; we build for that so the result runs on all of the nodes. You can specify all this, but sometimes Spack doesn't propagate the architecture optimization flags into builds. There are cases where there are mistakes in some package configurations, the flags don't propagate all the way through, and then you get code that segfaults on some nodes. There is a solution for this too: you can specify the compiler options and default compilers in `.spack/linux/compilers.yaml` in the home folder of the build user.

Okay, so now you get a consistent, architecture-optimized software suite. Now repeat that a hundred times, and this is where it gets really boring and really annoying. When you want to do a consistent installation of a whole bunch of software, you need to run all of these complicated commands one by one, and that is something that, at least for me, is not the interesting part of this kind of job. There's a lot of stuff you have to remember: you have to put the configuration in the right place, you have to be certain what command you ran before, what is happening.

So we made these build rules. They are kind of script-like things; they could be written in some other language, some other way, but in essence they take a minimal configuration: we have this build config, one YAML file with internal logic, and from that the build rules create the `spack install` commands. So you don't have to always specify the architecture, the compiler and so forth; it automatically fills in the blanks based on the YAML file. We just give it the YAML file and say, here are the packages that we want installed, and then we put a bunch of these packages in. If we want to install a compiler, we put it into a specific compiler section, so the rules know to activate it as a compiler. Then it runs the same commands all the way through, and at the end it regenerates the modules, checks that there are no conflicts in module names or anything like that, and then it deploys everything with rsync.

We put all of these configurations, and the Spack site configs that we use to set the defaults, the default route that Spack should take, into this science-build-configs repository. So we have one repository that is basically the verb, how to do it, and another that is what we want done. The first one is basically static: if there's a bug, or if we want to change the installation logic in some sense, we adapt it, but most of the time we just update the second one. We update the configuration of what we want done, and we don't care about how it's done, because that's already defined. You could write this in Makefile or Snakemake or whatever, but it has some additional features, class structures and things like that, and the same logic is used by the other builders as well, which makes it a bit easier.

Okay, but this is not CI yet. This is not continuous integration, this is creating consistent builds. Now we want to do it automatically, because we don't want to run the build rules ourselves constantly; there can be problems with that. So why did we move in the direction we currently have?
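The core idea of the build rules can be sketched in a few lines: a minimal config in, fully specified `spack install` commands out. This is an illustrative sketch, not the actual science-build-rules code; the config keys and versions are made up.

```python
# Hypothetical sketch of the build-rules idea: expand a minimal YAML-style
# config into explicit spack install commands, filling in the compiler and
# target architecture so the admin never types them by hand.

def spack_install_commands(config):
    """Expand a minimal package list into explicit spack install commands."""
    arch = config["target"]          # e.g. "haswell"
    compiler = config["compiler"]    # e.g. "gcc@9.2.0"
    commands = []
    for spec in config["packages"]:
        # Fill in the blanks that would otherwise be typed by hand.
        commands.append(f"spack install {spec} %{compiler} target={arch}")
    return commands

config = {
    "target": "haswell",
    "compiler": "gcc@9.2.0",
    "packages": ["emacs@27.1", "hdf5@1.10.7 +mpi"],
}
for cmd in spack_install_commands(config):
    print(cmd)
```

The real rules do more (compiler bootstrapping, module regeneration, deployment), but this is the "fills in the blanks" part in miniature.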
Here are some things that happened. First, ownership troubles. (I think there's a hot microphone somewhere in the background, you can hear talking.) Previously we installed stuff with EasyBuild and manually, and so on, and there's always going to be a situation where the builder tries to do something and the files are owned by another admin, and then it fails: it can't write to a folder, it can't read a folder, it can't make some modification, it can't do something. If other admins run the commands there will be problems, because there are going to be ownership clashes, so builds can fail. Also, personally, when we ran the EasyBuild setup, everybody had their own configurations that they had set at some point in their home folder and forgotten about, and then when they did a build, nobody could reproduce the same build, because everybody had some special settings in their home folder that might affect it. Of course that's not something we want. So we created a user that runs the build rules for us: the Triton CI user. It's basically a machine user that nobody actually logs in as, except for doing these builds.

Second, the builds can be heavy, so we don't want to run them on the login node. Some builds can take two to three hours, and we don't want to put that load on the cluster itself. They can also create a huge number of temporary files and use quite a bit of memory, so it's better to run them on top of an SSD somewhere. The solution: we run them on a separate machine. For us it's simply a powerful workstation with a few SSDs, and a hard disk for saving the end products.

Now the build has been moved out of the system, but we still want to build for our current environment, and maybe also for the other operating system, our workstations. The build environment should be as similar as possible to the actual image that is running on the HPC machine, on Triton, or on our workstations. So we moved it into containers: we now run the build rules in Docker containers, so that the libraries and the mount points are as similar as possible. We build in Docker containers that are built to be as similar to the destination environment as we want. We currently have two build targets: our Ubuntu 20.04 workstations, and the cluster's CentOS 7.9, the OpenHPC version. They are minimal images; they don't have anything in them except compilers, maybe some stuff like that, and some things to get the CI working, and we run as the Triton CI user inside the images.

But we still don't run this automatically. We now have a setup to run the different builds, but we want them to happen automatically when we update the configurations. For this we use Buildbot. Buildbot is a Python framework, similar to Jenkins, but it's Python, so it's actually easier to code for than Jenkins, which is Java and XML; that was horrible, at least for me, when we wanted to adapt the build system. Buildbot runs the builds in the Docker containers and gets its information from GitHub: whenever we push stuff to Git, there's a hook in GitHub that sends a message to Buildbot to start a build, and it pulls the repositories and runs the builds with the correct builder. It chooses which builders to run based on the Git changes: depending on which configuration files have changed, it runs the builders those files belong to. And then we want to put all of this together, so that we don't have to manage the builder ourselves.
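The "run only the builders whose files changed" logic can be sketched like this; builder names and path prefixes here are made up, and this is an illustration of the idea, not actual Buildbot scheduler code.

```python
# Illustrative sketch: map the files changed in a push to the builders
# that should run. In Buildbot this role is played by change filters and
# schedulers; the mapping below is entirely hypothetical.

BUILDER_PATHS = {
    "spack-centos7": "configs/spack/centos7/",
    "spack-ubuntu2004": "configs/spack/ubuntu2004/",
    "anaconda": "configs/anaconda/",
}

def builders_for_change(changed_files):
    """Return the sorted set of builders whose config files were touched."""
    triggered = set()
    for builder, prefix in BUILDER_PATHS.items():
        if any(path.startswith(prefix) for path in changed_files):
            triggered.add(builder)
    return sorted(triggered)

print(builders_for_change(["configs/spack/centos7/packages.yaml"]))
```

A push touching only the CentOS 7 Spack config triggers only that builder; a README change triggers nothing.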
So there's also a CI builder in the build rules themselves: the build rules can set up the builder that uses the build rules. This is where you get this self-eating-snake kind of situation, which can be quite complicated to visualize, but basically we have a builder that can set up the CI system, and the end product is a folder with a Docker Compose file. We say "get this system running", and then it runs the system, and the server is up.

Okay, that has been a lot of stuff, so I'll demo it quickly. I'll try to demo how we install packages. I'll stop the share and share the whole desktop so it's easier to show. You can probably see my desktop now. Actually, I can't do it here, just a second, I don't have the correct proxy enabled; I have to proxy the connection. So we go to the CI builder. This is the server, basically a workstation that runs the builds, with a frontend that gives access to them. Over here we can see the status, the twenty most recent builds: recently there have been Anaconda builds and some Spack builds, and we see the list here. Then we can look at the builders. I see there are different builders; Mikko mentioned the CVMFS builder, which is not currently operational because of the hardware failure, but I'm in the process of bringing it back up myself.

Okay, let's look at what we would do if we want to install a package. What we have here on the left side are the science build configurations; this repo is on GitHub, over here. There's this configs folder, and in there are build configurations for the different builders (some builders are still work in progress): for example we have these Spack builders, and then these Anaconda builders. Over here, for example in the dev branch, we have the site configurations: config.yaml, modules.yaml, packages.yaml, basically the Spack site configurations, and also how it should deploy the stuff; in our case we use rsync. And over here is the main configuration. It looks like this: this is the target architecture that it will try to force for every package; here we have some compilers specified, some of them system compilers, most of them installed compilers; we can specify some extra flags for the compilers, and here we basically force the compiler to build for the Haswell architecture; and then we have the list of packages that we want to install.

So let's try installing a new package. I have Spack enabled on my system, so let's say we want to install a new version of Emacs. I run `spack info emacs` and I see that there's a new version here, 27.1, and we want to install that. I open my local copy of the repository, open the Spack build config, go to the end, copy two lines, and put `emacs` there. Sorry about the colouring of the font, but it basically says `emacs@27.1`; that's the Spack versioning scheme. Then I see that I have changed the file, basically added these two lines. We have this kind of structure where we have the dev build and then the final build. Usually we just push stuff to the dev build, but when we want it to join the actual build that users see, we make a pull request, and then some other admin can click it through, checking that it installs the package correctly and that it's tested and looks good. So here I commit, mentioning that I'm testing Emacs, sign it, and run `git push`. This file here should change... yeah, you see that Emacs is added here. And now if we go to Buildbot, to the Spack builder, we see that a build has started.
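The change in the demo amounts to adding one spec line to the package list in the build config. A rough sketch of what such a list might look like (the surrounding entries and structure are illustrative, not the actual file):

```yaml
packages:
  # ...existing entries...
  - cmake@3.20.5
  # new addition from the demo:
  - emacs@27.1
```

Pushing this two-line diff to the dev branch is the whole workflow; the webhook and the build rules do the rest.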
We'll look into it, and it gives a lot of output. What it does, basically, is first sync the configurations over here, and then it tries to tell us what it will do. This describe step can be a bit hard to read because there's a lot of stuff, but in essence it runs these commands; let's focus on one of them, for example this last one. The font might be a bit small, but what it does is run `spack` with a config scope pointing at the configuration directory, then `install --verbose emacs` with the target architecture. So it basically runs these `spack install` commands; it just runs them, but it fills in everything that you would otherwise have to remember: it fills in the architecture automatically, it uses the correct configuration, it starts basically from scratch, and it runs this after it has run the rest of the commands, so there are bound to be fewer problems with the rest of the software. And over here, in the other section, we see that it's actually running the commands; it will take a minute.

So this is basically how we do a build. If it fails, then we usually need to go to the image itself, or the server itself, to figure out what the problem is. We can do that: here is the Spack builder, we can run this worker shell, and we get into the image, and in there we can run our own commands, or uninstall the failed packages, whatever. There are some not-so-pretty things; for example, you need to run a Spack builder alias instead of the plain `spack` command. But you can see that the end product is, well, lots of software, and this software is copied to our systems, and then we test it: we run some test examples on it, and if it works, we put it into production and we ship it.

This doesn't absolve the admins: if the build fails, the system doesn't solve the problem for you, you still have to figure out how to get the build working. If it fails, it fails. But it's better that it fails harshly, and not in some silent, unsuspected way in the background, where you don't know what went wrong and people get software that doesn't work. It's better that it just fails and says to you: "I crapped my pants, please clean up."

While we're waiting for this to finish, maybe I'll mention a few problems with the system that we currently have, because I want to be honest: it's not perfect. The deployment of the CI is not good; that's something we're working on. There are lots of hidden dependencies on the host system: you need to create some folders to set up the build environment, and that's not automated. (By the way, now it's actually running the build, so you see there's some configure output coming here; it does the previous checks and then it continues.) So the CI deployment isn't good, and we're trying to figure it out; the plan is to translate the CI builder that we currently have into our own simple role that would set up the build system. That would be much better, because then you could run it on any system, and it would be easier to deploy for other sites.

The other problem, with the Spack builds specifically, is that sometimes you get these build avalanches. We run from our own fork of Spack, and that's because the upstream updates so constantly that if some dependency is updated, it can cause a build avalanche that basically plows through the whole system: something updates, then it tries to compile a new compiler, then it tries to compile the next piece of software, and you end up with a completely new software stack. What we currently do is remove basically everything that conflicts and reinstall it after we have synced with the upstream.
But in the future we'll probably try to have a two-step process, where we build a more stable base build, with certain libraries that are set in stone, and then we have another builder that moves faster and builds the end products on top of that slower base build. That's something we'll have to look at in the future.

Another problem is that the science-build-rules code is scripty, and its structure is probably not very clear. It could be written in a better way, so that it's easier to read what the build logic is, how it goes about doing the stuff it does. That could be improved, made easier to take into use, easier to read and understand. And I'm not certain whether the build configuration abstraction is helpful or not: on one hand it's nice that it fills in all the blanks, but sometimes it can hide the details, and then you don't necessarily know how it relates to the Spack configuration underneath, because Spack is constantly moving and there are lots of moving parts. Documentation, as in most projects of this kind, is of course not up to date; that's something that really needs to be improved. We now have many more people using the system. Previously it was basically me running stuff through it, but now we have several of our admins running builds through it, so we get constant feedback on what's bad about the documentation, and we're updating it.

Okay, the build finished. So what did it do here? Let me go fullscreen, unless Zoom doesn't want me to. There are some 1500 lines. It ran through the previous installations, lots of them, checking that all of the packages are installed, and over here it starts the new build. You can see that it runs this line, `spack install emacs`; it's quite small, sorry about the font. Then it goes through here, says that it's installing Emacs, runs the Spack build, lots of make output, and at the end it rsyncs the result; here it prints what it syncs to our cluster. Then we can go to our cluster, where we have a separate module path that we can activate. Oh, I've activated the Anaconda one, but yes, this path: you see it's the FCCI CentOS 7 Haswell dev path. Previously we had Emacs 26.2 in this branch; hopefully we have two versions of Emacs now. Yes, we have two versions here. I can show that it comes from the dev branch: over here, it comes from the dev branch. I'll load it, and I will have to kill the window, because I don't know how to exit Emacs.

So yeah, that's about it. Do you have any questions? Do you want to ask by voice, or should I look at what's here?

Okay, that's a good question: how to handle software that is not in Spack. There are a few options. You can of course install software outside of it, like we installed MATLAB and Mathematica and things like that, software that has its own installer; we installed those outside of it. But we also write new Spack packages; they're not very hard to write. My colleague can probably tell you, he wrote one of these packages, was it hard to write? "No, it was not hard to write at all. It's just another build-system thing to learn, but Spack is very well documented, so if you spend a little time reading the manual... And of course the benefit of everything being in the repo is that you can very quickly find a similar package and look at how the Spack package file was written for it, and base yours off that. If you have an example like that, I think you can quickly write your own." So this one was written by him. Did you make a PR to the official upstream yet?
"Yeah, I did, but I haven't checked up on it lately. I think it's mostly been ignored, actually." Okay, I guess at some point we'll look into it. But yeah, basically you specify some versions and some dependencies and maybe some variants, the configuration structures go in there, and it supports most of the basic build systems like Autotools and CMake, and it has all kinds of helper functions, so it's really easy. And the Anaconda side is similar: we have the build configuration, it runs similar kinds of commands, and it sanitizes the environment so that the Conda environment is not broken. But that's another story.

The time scale of these steps? Yeah, we previously had a CVMFS builder as well; it was working, and it was pushing stuff to Puhti and also to Anna, but then, well, the recent problem... We're trying to take the CVMFS stuff back into action. At the same time, currently the actual highest priority is to get rid of old software, because we have so much of this legacy software lying around; since we're constantly running this Ship-of-Theseus kind of system, we have lots of software. So we are currently creating automation to deprecate our software, because no human wants to write the files to deprecate software by hand. We're creating automation based on usage statistics to remove it. But that's another story; we're trying to get as many of these features back online as possible.

How do we know when software should be updated? Well, normally the users will ask us, and then we'll update the version; our goal is to make that easy. EasyBuild has these releases, so every half a year or so they build the whole software stack again with the most recent software, but in practice that wasn't fast enough for our users. When we previously used it, there was constantly this fiddling around with version numbers, because some user wanted some new version, and then you get a million versions of the same software. We're still reaping the "benefits" of that situation, because we have a million versions of different software installed via EasyBuild. Hopefully, with the Spack installation, we have only a few versions.

So if you're interested in this, hit me up, so that we can get other people running this as well, or getting the software onto their systems. I really want to provide it via CVMFS, but there's the usual problem: once you have finished the tool, you start using the tool immediately, and that can create a situation where you can't continue working on the tool itself, because it's so important to get finished products to the users, especially if they have issues and so on. So the development of the build system has mostly happened only when it's been forced by outside requests.

What to take from this talk: you can use the same tools as we use, we find them somewhat good in practice. But if you are going to build your own system, on top of Snakemake or something like that, which is something I would probably consider now if I were starting from scratch, I would highly recommend looking at these same problems that I tried to outline here, and trying to figure out how to fix them in your system as well, however you want to do it, because these are things we encountered along the way, and they were the main problems we tried to solve with this system. I'll put a link to the presentation; I'll push it to the web page and put a link in the chat.
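The usage-based deprecation automation mentioned earlier can be sketched roughly like this; the threshold, the data shape, and the module names are all made up for illustration.

```python
# Hypothetical sketch of usage-based deprecation: given last-use dates
# collected from module load statistics, pick modules idle past a cutoff.

from datetime import date, timedelta

def modules_to_deprecate(usage, today, max_idle_days=365):
    """Return modules whose last recorded load is older than the cutoff."""
    cutoff = today - timedelta(days=max_idle_days)
    return sorted(m for m, last_used in usage.items() if last_used < cutoff)

usage = {
    "emacs/26.2": date(2019, 5, 1),   # not loaded in years
    "emacs/27.1": date(2021, 3, 1),   # recently used
}
print(modules_to_deprecate(usage, today=date(2021, 4, 1)))
```

The real automation would feed the resulting list into whatever mechanism marks modules deprecated, instead of a human writing those files by hand.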
a link in the chat Simo Simo, can I highlight also the question of the compiler with different compilers like for instance, not only GCC but also the Intel how would you recommend them Okay, I'll actually okay, so in the configuration what we currently have so if you look at here for example this dev branch so here we have specified compilers and we have also specified Intel compiler so Intel parallel studio so if you want to build with Intel I wonder if there's the current iteration if there's any software built with Intel nope but we can build let's for the sake of argument let's build the same same Amux but with Intel so we copy paste this basically the same thing but here we add there's few of these like internal like groups here so there's variants for specifying different variants different dependencies let's put into dependencies here Intel parallel studio what was the version that we had so let's put it here so now we're adding this one line over here so we want to build the Intel package studio and then I'll push it so let's see I'll have to take the other window but yeah so basically it's the same kind of stuff if you want to build it with the Intel compiler you just specify that build with Intel and that's about it you specify over here that at the start of the configuration you specify so like normal packages there like software that are somewhere in the development like the dag of all of the dependencies like three but compilers are special in that case that they can build stuff so we have to first install the compilers so in our case we install compilers using the system compiler and then we use those to install the rest of the stuff so you have to specify them before you run any other packages but the Intel compiler is similar to the other one there's one extra step with the Intel compiler is that we need a license there and we have this license repo that we have I skipped some of the parts here it basically copies these licenses from our internal 
There are a few extra steps like that with the Intel compilers, but it's possible; you just have to do a little bit of scripting on the side to handle it. Any other questions? There was also a question on the HackMD about software updates.

You mean when software is updated? Yeah, best practices for updates: is there a way to update everything? So basically, because we only specify the end products that we need, there's no real "update all" option; we always specify exactly which versions we have. Of course we could ask Spack which versions it knows about, but how we have done it currently is that we update Spack itself, and if the new Spack comes with a newer version that we want, we install it. Usually the problem is the other way around: Spack comes with newer versions that we don't really care about. Autotools might occasionally actually need a newer version, but nobody cares whether CMake is 3.20.4 or 3.20.5; there are some minor changes there, and maybe in a really special case you actually need them, but most of the time you end up getting newer software that you don't actually need. You want to keep the base layer as stable as possible, and the bigger problem we are trying to avoid is these software avalanches, where something updates at the bottom and the change propagates through the whole dependency tree. (And I see I might have messed up the demo configuration; it might have been the Intel entry, I can't remember, you just specify the Intel compiler there.)

That may be true for the things you build with Spack, but the cool thing about this build system is that it can also build Anaconda environments, and there you're dealing with user-facing Python packages. I actually had that in mind when I asked the question, specifically the neuroimaging environment: things like NumPy and SciPy you generally want at the latest version, so that's a different case.

Right. Anaconda is even worse than Spack in that respect: you will basically never get the same environment twice unless you force the exact same build versions. What we do with Anaconda is that we have a huge list of packages that we want in the environment, and we give that list to the builder (this is the general logic of the Anaconda build), and then it builds the environment. After that, it only builds the differences: it always keeps the previously installed versions the same and only adds new packages on top, so nothing gets updated in the background. We work in a versioned way: we have a previous version, say our start-of-the-year version 2021-01-02, and at some point we freeze it and say, okay, don't update this version anymore. Then we create a new version that starts from scratch with the newest versions of everything, and we iterate new requests on top of that. And when we want to update the whole thing, we again rebuild the whole thing from scratch, because with Anaconda it's even worse: when you install a new package and you forgot to specify some channel, it can start swapping packages between conda-forge and defaults, so NumPy previously came from one channel and now comes from the other, and it's a whole mess. That's why we want to keep it this way: we install everything at the same time, and then we only install a few packages on top of it.
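The versioned-environment idea can be sketched as an `environment.yml`-style file; the environment name, pins, and package list here are purely illustrative, not the actual Aalto environment:

```yaml
# Hypothetical sketch of a frozen, dated conda environment.
# Listing channels explicitly, and blocking the implicit "defaults"
# channel with "nodefaults", keeps conda from silently swapping
# packages between conda-forge and defaults on later installs.
name: neuroimaging-2021-01-02
channels:
  - conda-forge
  - nodefaults
dependencies:
  - python=3.8
  - numpy=1.19.*
  - scipy=1.5.*
  # New user requests get appended here; existing pins stay frozen
  # until the whole environment is rebuilt as a new dated version.
```

Once this version is frozen, a fresh dated environment would be rebuilt from scratch with current versions, and new requests iterate on top of that one instead.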
But that's a different story; I can give another talk about the Anaconda builder and problems with Anaconda environments in general later on. Basically the Anaconda builder is similar to the other builder: it has its own build config with its own internal structure.

There was a good question here about how other FCCI sites could participate right now. I'll probably have to hang around in the Slack channel a bit more, but hopefully before the summer, so in a month or two, I can get the CVMFS running again. Once we get it running, it would be nice if you want to test the software and see if it works for you. We will first create a dev branch or something like that, so we create a branch there and you don't have to activate it via the module system for your users yet; if you want to test it out, that's one way. And if you feel like setting up a similar kind of builder for yourself, then ask us once we get the Ansible role done, which is probably going to take a few months; then you could also test that out, to check whether you can create a build system of your own and whether it feels like it would work for you.

Sorry, it's complicated. We have this in use and we constantly iterate on it, but with these projects it's sometimes really hard to open them up: not because we want to keep them closed, but because you end up staring at them for so long that you forget to write them in a way that other people can read as well. If you feel like building it with the current setup, feel free, or if you just want to look at the build rules, that's one way of working on it. But once we get the CVMFS rolling again, you can test the built software, and once we get the Ansible role running, you can test your own build system more easily than you can currently. I'm just going overtime, but any other questions? If not, then I'll give the floor back. I'll put up the link to the presentation if you want to look at it.