Hello ladies and gentlemen. I'm not actually here to introduce the talk. It is just past 12 on the last Friday of July, and this is the day when, certainly in our community, we try and remember the people who've been forgotten about for the other 364 days of the year: it is System Administrator Appreciation Day. I remember when I finally managed to get away from sysadmin work and Nagios stopped sending me text messages, and it was a bit like missing an old friend, or a small child who insists on waking you up at four in the morning. So this is the time we remember those people who install the printers and run the network cables, who get woken up because a disk has exploded, or who change your password because you've forgotten it yet again. Perhaps not with a smile, but possibly with some good humour. I do see a few sysadmins around the place, and a huge thank you to everyone who does this on a day-to-day basis and keeps our systems running. But I would like to say a special thank you to DSA and the DebConf sysadmins. Could you stand up and give us a wave, in fact? No, come here, come up on stage. We do have a little something for you, to say thank you for all the work you put in, not only at DebConf but throughout the year, to keep Debian and everything else running. Anyone else in DSA? Yes, up you come too. So thank you to all these people here.
Thank you very much. So, to go back to the schedule, and to the wanna-build part of my talk.

Let's start with the first question: what is wanna-build, actually? In essence, it's the auto-building database. It's a whole system consisting of scripts, websites and so on. It keeps track of the state of packages, where "state" here doesn't primarily mean which suite a package is in; the primary information is which packages are built or need to be built. It merges new package uploads from ftp-master roughly every 15 minutes. It schedules packages for being built. It can record binNMU requests, where we say: please just rebuild this package in a fresh and untainted unstable environment, or testing environment, or stable environment. Then we have a web page, buildd.debian.org, as usual. On the builder side, buildd is just a daemon that runs on each physical auto-builder machine. It connects to wanna-build, currently via SSH, and if there's a package to be built, it starts sbuild, which actually just builds the package and reports: I built it successfully, it failed, or something was just broken. And then wanna-build records the last action. The setup is very similar, but not exactly the same, to the one we had before for the main Debian archive, and that one was far worse than what we currently have.

So why do we have auto-builders at all? What is their purpose? The purpose is to build packages on all the architectures that we have, within a reasonable time frame. There are always small delays, but they are not an issue. So don't be surprised if a package is uploaded but not built within a couple of hours; one, two or three days of delay are not an issue.
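To make that division of labour concrete, here is a purely illustrative sketch of a buildd-style main loop. None of this is wanna-build's or buildd's actual code; the real buildd speaks wanna-build's own protocol over SSH and invokes sbuild, while here the three steps are injected as plain functions so the sketch is self-contained:

```python
def buildd_loop(take, build, report, max_iterations=100):
    """Skeleton of a buildd main loop (illustrative, not the real protocol).

    take():          ask wanna-build for the next needs-build package, or None
    build(pkg):      run the build (sbuild in a fresh chroot), return success
    report(pkg, s):  tell wanna-build the outcome
    """
    built = 0
    for _ in range(max_iterations):
        pkg = take()       # next package from the needs-build queue
        if pkg is None:    # queue empty; a real buildd would sleep and retry
            break
        ok = build(pkg)    # sbuild builds it and collects the log
        report(pkg, "built" if ok else "attempted")
        built += 1
    return built
```

A real buildd also handles give-backs, signing and uploads; this only shows the take/build/report cycle the talk describes.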
I mean, normal testing migration kicks in after ten days, so if a package is only built two days after its upload to unstable, that's not a problem. And auto-builders exist so that nobody needs to provide packages by hand. I remember well when the security team didn't have auto-builder support and was not amused at all by the many architectures they had to provide packages for. That's one of the great reasons why we need auto-builders in Debian.

Basically, we as buildd maintainers make sure that the buildds work; our part is to have working chroots. But we don't have the responsibility to make sure that all packages build successfully. That's the task of the individual maintainers, and of course the porters need to support the maintainers. It's not the task of the auto-builder maintainers to make sure everything builds. If a package fails to build, there are porter chroots where you can try to fix it or debug it, or get help from the porters, and then, if it works, please don't just upload some crappy binary from there, but upload a fixed package to unstable.

So what's the package life cycle? A package usually starts as a source package, possibly with binaries, uploaded to ftp-master. Then it's accepted into the archive, usually at least. Then wanna-build marks the package as needing to be built. Then a buildd picks the package up. Then the package is hopefully built; after some time it's marked as built, and it's uploaded and marked as uploaded. Then the binary package is installed on ftp-master, and then it's marked as installed in wanna-build, and with that the basic life cycle, from wanna-build's point of view, is done. Then there might be an extra upload, and so on.

Of course there are sometimes problems. Sometimes the auto-builder just fails to set up the chroot properly; for example, apt-get install gives interesting errors, or it can't connect to a mirror, to give the usual examples. Then the package is just given back, which means it reappears in the needs-build list. The other issue is that the auto-builder starts to build, but the build fails in the middle because of an error in the package, an internal compiler error, or so on; then it's marked as failed in the wanna-build database. And sometimes you see a package just disappear and reappear in the list of packages needing to be built. That usually happens if a package is given back more than once, for example because it has virtual dependencies, or some package it depends on fails to install properly in the chroot. The full build log, usually with the version information of all installed packages, is available on the buildd.debian.org website. So if a package fails to build in a chroot, one of the first things you can do is compare: do you have the same versions installed in your local chroot as on the build daemons?
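The life cycle just described can be summarised as a small state machine. This is an illustrative sketch only; the state names follow the talk, not wanna-build's actual database schema:

```python
# Transitions per the talk: needs-build -> building -> built -> uploaded
# -> installed, with "failed" for build errors and a give-back returning
# the package from building to needs-build (chroot or mirror trouble).
TRANSITIONS = {
    "needs-build": {"building"},
    "building": {"built", "failed", "needs-build"},  # needs-build = give-back
    "built": {"uploaded"},
    "uploaded": {"installed"},
}

class PackageState:
    """Track one package through the wanna-build life cycle (sketch)."""

    def __init__(self):
        self.state = "needs-build"

    def move(self, new_state):
        if new_state not in TRANSITIONS.get(self.state, set()):
            raise ValueError(f"illegal transition {self.state} -> {new_state}")
        self.state = new_state
```

A package given back more than once simply loops through building and needs-build until a build finally succeeds or fails.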
Different versions could be a reason for failing only on some architectures, and you don't even need a porter chroot to debug that.

The build environment, these days, is mostly cloned chroots: we have a clean chroot, we clone it, do something in the clone, install whatever packages, and just throw the clone away afterwards. The word "mostly" is there for a reason: on some build daemons, and it's not architecture-dependent, we still use the old style of having one chroot from which we remove the packages afterwards. On the kFreeBSD ones we don't have LVM, obviously, but we could for example move one day to something like ZFS. On Linux we could also use btrfs, which I'm looking forward to at some point in time, and which will just give us another number of funny bugs in the btrfs file system, as we usually find interesting bugs with the build daemon setup.

These days we also usually have automatic experimental builds in these chroots, because if you use a cloned chroot of unstable, we can just add more sources lines to it, install packages from experimental, and throw it away afterwards; nothing is lost. The chroot contains the binary packages from main and contrib these days, and that's even true when packages are built for non-free. So non-free packages cannot be built with non-free binaries, but only with binaries from main and contrib, and we can't change that, because currently ftp-master doesn't check whether non-free packages are allowed as build dependencies of other packages.

So now, as I said, we start a new section: since some time the build daemons automatically sign the packages they built, at least most of the build daemons that are administrated by the Debian System Administrators.
There are a few build daemons where that has not been enabled yet, and a few build daemons which are not run by DSA, which of course also don't do auto-signing. No auto-signing usually means that you have to wait half a day until a human comes around and signs the packages, which of course adds some delay. For auto-signing there is a GPG key specific to each build daemon, which is very short-lived and gets rolled over. We have now had the first roll-over, and we learned a few things that we could do differently next time; we found a few bugs in dak, but that's just the usual way when you do new things for the first time.

Other news: we changed the queue ordering. In former times, needs-build was just a fixed queue order: first all essential and all required packages were built, then all important and all standard ones, then all optional and all extra packages, and within that, first all libs, and so on. So if we had a small delay on the build daemons, the optional packages were still built very fast, you upload an optional package and it just gets built, but the extra ones just didn't get built for 20 or 30 days, which was really a pain. Now we changed it: we score by points, which are dynamic and change over time. You get bonus points for having a high-priority package; an important package, for example, is worth considerably more points than an extra one. But you also get points for just waiting to be built: if a package has been waiting for three days, even an extra package, it has a higher priority than an optional one that has only just been uploaded. You also get malus points, of course: if a package takes very long to build, it gets malus points, and it gets malus points for being in contrib or in non-free. Mark, why are you looking at me like that?
And of course the points can also be adjusted manually, so we can say: well, we know this package, it gets a bonus of ten points. Yes? Sorry, I would repeat the question if I had understood it. The question was whether we also assign malus points for, say, uploading three times in half an hour. Well, if I know about that maintainer behaviour, I might manually adjust the points, but not automatically; it's a bit hard to deal with that automatically. And basically, if we don't have a backlog, I don't really mind; if we do have a backlog, such packages get a malus anyway, because a freshly uploaded package doesn't get the bonus points for waiting to be built. So it usually works quite well, I guess. By doing this change we increased the overall throughput of our buildds by something like 20 to 30 percent, which isn't bad for just reordering things. And some architectures are now, let's say, interesting.
We have architectures where different build daemons see different queue orderings. This happens because, as I just said, packages which build for a long time get a malus. What we want to avoid is that all four of our buildds for an architecture are stalled by two GCC uploads, one glibc upload and one OpenJDK upload. That's why, on these architectures, we have one build daemon which always picks up the very long-building packages, while the others build the small ones first, so that we hopefully don't have long-building packages on all build daemons at the same time. So if you look at the queue, what you see as the queue is not exactly what a build daemon sees, because there is no unique view on the queue of packages anymore. The other news is that new packages can enter the queue at any point: some packages enter the queue at the top, some enter the queue in the middle, and others enter the queue at the bottom. And yes, packages do queue-jumping sometimes, due to the way we measure, and I haven't had better ideas yet; it might happen that packages overtake each other in the queue a few times until they eventually get built. That's not bad in itself. The important thing is not the queue order; the important thing is that we build all packages in a timely fashion, and from the global point of view it doesn't matter which package exactly is built first. So please don't be surprised if packages do queue-jumping; that's just normal, and it's not a point of worry to me. It happens because of how the bonus points for waiting to be built handle waiting times, and if someone wants the nasty details, I can tell you about them later. In general, it's not an issue.
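The dynamic scoring described above might look roughly like this. The point values here are invented for illustration; the real bonuses and maluses live in wanna-build's configuration and are tuned, and sometimes manually overridden, by the buildd admins:

```python
# Illustrative scoring sketch: bonus for priority and for waiting time,
# malus for long builds and for contrib/non-free. All numbers are made up.
PRIORITY_BONUS = {"required": 50, "important": 40, "standard": 30,
                  "optional": 20, "extra": 10}

def score(priority, days_waiting, build_hours, section, manual_bonus=0):
    points = PRIORITY_BONUS.get(priority, 0)
    points += 5 * days_waiting           # bonus for waiting in needs-build
    points -= 2 * build_hours            # malus for packages that build long
    if section in ("contrib", "non-free"):
        points -= 10                     # malus for contrib and non-free
    return points + manual_bonus         # admins can adjust points by hand

# An extra package that has waited three days overtakes a freshly
# uploaded optional one, which is the behaviour described in the talk:
assert score("extra", 3, 1, "main") > score("optional", 0, 1, "main")
```

Sorting the needs-build list by this score, highest first, then gives the dynamic queue; as packages wait, their score rises, so nothing starves for weeks the way extra packages used to.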
It just happens. So, other news that is not visible to users: we did a lot of code cleanup and refactoring. We used to have a handwritten function for option parsing, 200 lines of Perl, which has now been replaced by one GetOptions call. I mean, that's just good news. We removed lots of code. We had many implicit assumptions in the code, for example: oh, a buildd is speaking with me if there's no terminal connected and the username starts with buildd_, which of course wasn't true for my cron jobs asking things. We have a YAML API for taking packages, which allows us to pass more information from wanna-build to the build daemons; for example, we now say on the wanna-build side: if it's experimental, you need these additional sources for resolving build dependencies. We consider build-essential as always available. There was a time when a dpkg upload happened and wanna-build said: oh, dpkg-dev now needs a different dpkg version, I cannot build any packages anymore, sorry. Which is of course a very great way to just shut down the whole build daemon network. We now avoid this state by saying that all essential packages are available as packages without dependencies, so they can definitely always be installed, which is the case anyway, so we don't mind. We also moved settings that used to be distributed throughout the code into the database, or into YAML files, which is still better than having them split all over the code; I think other people know how Debian code used to look. For our archive triggers we now use signal files: if we get a trigger that an install has happened, we just touch a file, and this file is later picked up from cron, which means we avoid stalling anyone else, and we can cope with overlapping triggers.
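The signal-file trigger just described can be sketched like this; the paths and function names are hypothetical:

```python
import os

def fire_trigger(flag_path):
    """Called when e.g. an archive install has happened: cheap, non-blocking."""
    with open(flag_path, "w"):
        pass  # just touch the file; firing twice before cron runs is harmless

def cron_job(flag_path, process):
    """Run periodically from cron: if the flag exists, clear it and do the work."""
    if not os.path.exists(flag_path):
        return False
    os.remove(flag_path)  # remove first, so a trigger arriving during
    process()             # `process` leaves a fresh flag for the next run
    return True
```

The design choice is that the triggering side never waits on the processing side: overlapping triggers collapse into a single pending flag.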
So if we take longer to process one trigger than the gap to the next one, it's not an issue anymore; it used to be. More small changes and cleanups are happening all the time, and that's actually a good thing: the code has become better, but it's still a maintenance nightmare. There are more details. Oh, I see, a typo on the slide.

Something new: in your home directory you can have a .wanna-build.yaml which allows you to specify individual preferences. For example, I have output format options in my file, where I say that I want the output of wanna-build in a format which I can pass directly into a shell, to give packages back for being built again. There are two directories where you can look for examples: one is the wanna-build etc directory on the host, and the other one is my home directory. (There's also a character missing on the slide, thanks.)

The wanna-build databases have been available to all Debian developers for quite a long time now, except the database for security builds, of course. On the website, as I said before, there have been lots of improvements, mostly by Mehdi: a new and working user interface. We have a quick view of the architecture status: where we have problems, where we have a backlog. So if you go to the first page of the buildd website, you can see exactly which architecture has which queue length in which suite, information that wasn't available that way before. That also helps us to see: ah yes, this queue, this architecture, has built packages which haven't been uploaded yet.
So yes, there are still small things; the presentation isn't optimal, so not everything that looks like a problem at first view is a real problem. But at least we, on the buildd side, can now see which one is and which one isn't, and it makes things easier for us. It also makes it easier for people to find out what the status of their packages is, which of course is important. One question that is open for me: even though the website isn't really feature-complete, there's nothing where I would say off the top of my head, yes, we definitely need this and that to happen there. So if you have things you would like to see, please tell us about them.

Then there is Packages-arch-specific, which is a very old inherited file which says: this package should be built on these architectures, or this package should not be built on that architecture. We have started to parse the Architecture lines from the source packages instead, so we rely much less on the Packages-arch-specific file, and also not on extra settings in the wanna-build database, which we got rid of. So what we do is move away from manual work by the buildd admins and the wanna-build admins, towards things that can just be read automatically from the Packages and Sources lists, which I think is the right place for that information to be stored. There's no need to duplicate information from those lists into some other list and require people to keep them in sync. So we avoid manual work. We can't get rid of Packages-arch-specific completely at this point in time, but we don't need to invest as much time into it as we did earlier.

Of course we have open issues with the buildd network. One of them is that we need more cleanup of the code, and then even more cleanup. That's not a joke: it's really very, very bad Perl code which we inherited quite a long time ago.
It used to work with Berkeley DB files and was then, more or less with a large hammer, put onto a PostgreSQL database, which was an advantage, because the Berkeley DB files were just way too insane. We now have the issue that we produce very much database logging every other day because of the way we take locks in the database; we need to change that, which means a bit more Perl cleanup, refactoring the code again to take smaller locks in the database. We have had some unusual conditions which are not handled particularly well, but which mostly work. And of course, as always, we need many more test cases and test suites. The good news is that we at least have some test cases now, which we didn't have before at all; but they only cover a very, very tiny part of wanna-build, not the real operation of wanna-build, just the parsing of Packages files. Documentation is, I won't say non-existent, but, well. And there are always issues like some hardware being broken here, or not having survived a system upgrade, or a chroot being broken; that's just more or less everyday stuff which will continue to exist into the future. If there are large transitions, we sometimes have issues: oh, this changed, and that hasn't been updated yet to use the new Perl or libc or GCC, or whatever is not multi-arch compatible yet.
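As an illustration of the kind of test coverage just mentioned, parsing Packages files: a Packages stanza is a sequence of "Field: value" lines with indented continuation lines. A minimal parser plus test might look like this (illustrative, not wanna-build's actual parser, which is Perl):

```python
def parse_stanza(text):
    """Parse one Packages stanza into a dict, honouring continuation lines."""
    fields = {}
    key = None
    for line in text.splitlines():
        if not line.strip():
            continue
        if line[:1] in (" ", "\t") and key:   # indented continuation line
            fields[key] += "\n" + line.strip()
        else:
            key, _, value = line.partition(":")
            fields[key] = value.strip()
    return fields

stanza = """Package: hello
Version: 2.10-1
Architecture: amd64
Depends: libc6 (>= 2.14)
"""
assert parse_stanza(stanza)["Version"] == "2.10-1"
```

Exactly this sort of small, deterministic unit is the easiest place to start a test suite, which is presumably why it is the part covered first.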
There's always a nice transition coming along in Debian. Of course multi-arch will require some changes on our side as well. Currently we don't see large ones yet; we have a few ideas about what might need to change, but I'm sure we will discover more when the time comes. And we need to unify the configuration even more: we want to move configuration which currently lives on each build daemon separately to a central location, where we only need to configure things once and not on each machine, as it is now. I personally would like to be able to do some post-processing before a build is signed and uploaded, which means we could detect certain issues on the build daemons instead of only on ftp-master.

So that's what I can present here without going too much into details. Any more questions, suggestions, anything else?

From the audience: have you considered giving malus points when a package fails to build from source? Sorry, what was the third word? Can you repeat the question? I missed the first part. Yes: you have a point system to decide which package gets built first, with bonus points and malus points; have you considered malus points for a package which fails to build from source on another architecture? I haven't considered malus points for a package which fails to build from source. We do have a malus for packages which have not yet been built at all on that architecture: that's a malus because the package is uncompiled, and we consider it a higher priority to first build the packages which have already been built on that architecture before building new things, which I think is the right decision. I don't think we should give malus points just because a package constantly fails to build from source. And the questioner clarifies: not constantly, but on another architecture.
Oh yeah, for example, if a package fails to build on four architectures, maybe you just don't bother trying it on an architecture which is slow. Okay, yes, we have considered that. Basically, what I have considered is: if a package fails to build on too many architectures, demote it a bit on the other ones. But that's not so easy code-wise, and in terms of calculation; I don't want to spend too much time calculating the priorities. Eventually I think we will get to the point of making that happen; I just don't have the right database query in my mind to do it, but perhaps it will happen sooner or later. I mean, the idea is good, I just don't see how to implement it at the current point in time. But yes, good point.

Next remark from the audience: actually the question was already answered, but one point that came to mind is that we shouldn't put that much effort into the points, but rather have faster buildds, so that we don't have to care about the order; the builds will happen fast anyway. I don't want to overstress the points, but even on fast architectures they are sometimes important. We have had spikes where every architecture was below 98 percent up-to-date in unstable, which just means a large transition was happening. What really annoyed me previously was that I was only able to apply big hammers. So this bonus and malus point system mostly does its job already, even if it isn't fully done, and it's still important, because there will always be times when not every package can just instantly be built, and it's good to have an ordering with a bit more room for adjustment than only saying yes or no; we now have a somewhat saner order.

Do you have any statistics on how the average waiting time changed? Did it help? Which feature of the point system actually helped more to reduce the waiting time, that kind of thing?
Well, I don't have statistics, but I have a quite clear opinion about which things helped most: that experimental and optional packages can overtake each other, together with the effect that waiting time gives bonus points. For optional packages alone we mostly have first-in-first-out anyway, and the same for experimental, since those queues overlap a bit. So that's the essential thing. And we got rid of some very nasty things we had before: there was a long list of different sections in the wanna-build code, saying if you're in this section you get this many points, extra gets that many points; it was really just incredible. Now we got rid of most of that old code and just say: we order by points, then by time spent in the state, and then by name, which is rather simpler than what we used to have. But yes, the ordering by waiting time, and that optional and extra packages can overtake each other, is I think the change which really helped most. And it could really be seen that the queue length was cut down two weeks after that on some architectures.

I think you were next, and then Martin. There's a question from IRC asking about arch:all auto-building; I think it means arch:all binary building. Are there any plans about that?
Show me the code. Yes, there are plans, but there's no code available yet which could do it in an appropriate fashion. And arch:all is even a bit more complicated: there are arch:all packages which can only be built on certain architectures, for example if you need a cross-compiler. If your arch:all package happens to contain BIOSes for SPARC and MIPS and mipsel, you should build it on an architecture which has cross-compilers for those architectures installed. Okay, and the other part of the question was about throw-away binaries; will that happen soon? That's not a wanna-build question. wanna-build will have to build the arch:all packages in that case. I see. I think I can say now what we have discussed, which was already announced, or at least said, by the FTP masters: the first step would be to throw away the binaries, other than arch:all, which are uploaded together with the source. From a wanna-build point of view, the only change is that we then need to build more packages, especially on i386 and amd64, which currently just wait around and do nothing because they're idling most of the time. That alone would help quite a lot, and about arch:all binaries we can think at a later point in time.

Related point: do you have a plan for dealing with packages that are arch:all but can only be built on certain architectures? There are a handful of these. From the wanna-build maintenance point of view that's not a question, because we don't build them. But independently, we have discussed annotating in the source package on which architectures an arch:all package should be built. That's the current state of a discussion we had here a few days ago, but just an idea.
It's not a plan yet. Yeah, this has been discussed a couple of times recently, and there was some question as to whether we should allow it per architecture or per binary package within a source package. So you could imagine a source package declaring it has one arch:all package that builds on powerpc, one that builds on sparc, one that builds on amd64. Somebody was saying something about qemu to me at some point. We were worried about that one, because we thought the wanna-build people might get upset if we asked them to deal with it. But certainly, if anyone's got any code or proposals for that, we'd be quite happy to look at them. I really think that if we do that, then we could easily just consider such a package as a binary package of the first architecture it can be built on, and then we're done from the wanna-build side. But I think that won't happen within the next two months, though it eventually should be done.

In the release team talk there was this idea of having customized experimental repositories, a sort of personal package archive. Do you think it would be huge work on the wanna-build side, or are you ready to implement it in a few hours? I don't have any idea whether it's going to be a lot of work for you. The point is that it would have to work exactly like experimental. Or not exactly like experimental, because in this case we would like to pick up build dependencies from the personal package archive as well. Well, actually, I think some more work needs to be done on the ftp-master side first. Mark, don't you want to join me up here? From the wanna-build side... it's mostly ftp-master, I would say. Yeah, I mean, once you have it ready on the FTP side: as for the question Raphaël just asked, I think the current plan would be to not set NotAutomatic in the Release files for the custom experimental suites, unlike experimental, which is the point about automatically pulling in dependencies; in which case I think that should just work. From the wanna-build side:
It's not so difficult; we have two things to do. The first thing is that wanna-build really can't cope with a package being in the same suite multiple times. So the question is: do we set up a new wanna-build suite for each PPA, which has some effects on the web pages and the configuration which we can't handle today, so we would need more buildd auto-configuration; or do we rather have it all as one wanna-build suite, in which bad case the database schema doesn't match properly anymore. So there are a few things on the wanna-build side. For the build daemon side, it's just a question of, first, how do you pass appropriate sources lines to the buildd, and second, how do we get the appropriate archive keys onto the buildds, or do we just auto-install them, since it's just one key handled by ftp-master, which is what we would prefer. So yes, there are a couple of things. Nothing, I would say, is technically speaking totally hard or impossible, but these things need patches, and at least I currently don't see who has the time to do the patches properly, and I definitely don't like patches that are just half done and falling on our feet. We already have enough trouble with the maintenance of the buildd and sbuild software, and that has been trouble for us for some time: upgrades to new major versions usually reveal new major bugs, which isn't so good for us, because we don't like to just shut down the buildd network for two weeks for a software upgrade and hope that it works again afterwards. So nothing is really hard, and if someone has time, I'm happy to provide hints and ideas and whatsoever, but I don't see it happening from our side with the people who are currently working on it. I think Martin asked last time... before? No? Okay, yes, that's done.
So, any more questions on that? Anything from IRC? Okay, that was it. Good, then I would say we are done. Thank you for being here, and let's have some more discussions afterwards. Thanks.