So, thanks for the introduction. Just a first note: I have a few extra slides in this presentation. I don't know how well acquainted you are with the infrastructure, so I might skip over those if you are already familiar; otherwise I might say a bit more so that we all understand what we are talking about. If you think everything is already known, please just say so. But I see a few new faces, so I'm not sure how fast I should really shoot through.

First of all, what is it? wanna-build is the central auto-building database we have in Debian. It keeps track of the state of packages. It merges in new packages from FTP master every 15 minutes, except during dinstall. It schedules packages for building, it takes give-back and binNMU requests, and so on. And it has a web interface that you might already have run across. Then we have the build daemon itself, buildd. That's a daemon that runs on the auto-build machine itself. It connects to wanna-build and starts building packages, using software, let's say, similar to the packages we already have in the Debian archive; more on that later.

And why do we need it? Because that's the only way we can make sure our packages are built in a timely manner on all architectures. A delay of something like half a day is just to be expected, because sometimes the buildds are busy with things like, oh yes, two different GCC versions, plus a new openoffice.org, plus a security update of the kernel, and that may waste some time. And we have a shared responsibility for successful building: it lies with both maintainers and porters. If a package fails to build, the usual way is: find out why, prepare porter changes, upload the package to unstable, and be happy.

So now, the lifecycle of what happens. A source package gets uploaded to FTP master. Sooner or later it is accepted into the archive. Then wanna-build gets pushed the index files, and the index files are just the same ones everybody has: a Packages file and a Sources file. It merges them, sees "OK, here's a source package that has not yet been built on that architecture", and marks the package as Needs-Build. Then a buildd picks the package up, it eventually gets built, and it is marked as Uploaded. At the end it is installed at FTP master, wanna-build notices "oh yes, we found out it's now installed", and it is put into the state Installed. And then, of course, the next version of the source package is uploaded some days later and it all begins again.

We might have a few problems there. In some cases the autobuilder cannot set up the chroot properly, in which case the package is just given back. If it fails to build, the package is marked as Failed, which means it won't be auto-built again. Sometimes packages disappear from and reappear in the Needs-Build list; that might be because of a give-back as above, or because of the automatic build-dependency checking. There are some interesting things going on there. A full build log is always available on the buildd.debian.org website, except for security builds. So if you have an issue or don't know what went wrong, please check there; there should be a full build log available for your package.

So, where do we build? We used to do that mostly in chroots cloned via LVM snapshots; now we use tarballs. We do auto-addition of experimental, backports, whatever sources.list lines, if necessary, by a script, which helps us quite a lot.
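To make the lifecycle above concrete: as mentioned later in this session, any Debian developer can query wanna-build read-only. A minimal sketch of what that could look like; the option spellings below are illustrative rather than taken from the talk, so check the tool's help on the host for the exact interface:

```
# Read-only poking at wanna-build, as any Debian developer can do.
# Option names are illustrative; verify against wanna-build's own help.

# Everything currently waiting in state Needs-Build for one suite/arch:
wanna-build --list=needs-build --dist=unstable --arch=mipsel

# Recorded state and history of a single source package:
wanna-build --info --dist=unstable --arch=mipsel mypackage
```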
And the chroots always contain the main and contrib binary packages, but they don't contain the non-free binary packages, independent of whether we are building main, contrib or non-free. That's a very important thing to know, because we got a lot of complaints of the kind "why does this non-free build dependency not work?": because we don't have non-free binary dependencies available. We do auto-signing; that's just working these days. We have Packages-arch-specific, basically saying "we don't want to build this package on these architectures"; nowadays this mostly gets parsed from the source package description, but we still have the manual list and we still use it.

A bit more about our infrastructure. I already mentioned the website. We have wanna-build, which is on grieg.debian.org, where every developer can access it, and it's in a git repository that everybody can access. We have buildd and sbuild; as I said before, they have a few differences from the packages in Debian. They are stored in a common git repository, but we live on the branch buildd-0.64, and we currently have 18 specific patches relative to the Debian package. Mostly small things like "oh yes, the FTP master location for this and that has changed", sometimes somewhat larger ones, because our most important goal is that the thing runs 24/7.

So what did we do for wheezy? We picked upstream version 0.64 as the basis for our own version and had to do some adjustments there. Then we noticed that LVM snapshots are broken in wheezy. There is a bug report, and this is not a Debian-specific issue: if you search around in the archives and bug trackers, you will see reports from Fedora, Red Hat, SUSE and so on. So, interestingly, LVM snapshots are sometimes just broken. Broken here means that somehow the interaction between LVM and udev makes it impossible for the LVM commands to run: if you start them, they will just freeze, and the only way to escape that is rebooting the system. This didn't make us really happy. It depends on the architecture: ia64 is really good because it's fast at triggering such things; we need less than one day to reproduce it on ia64 in every case. Other architectures needed days. Even another fast one like amd64 doesn't get broken as quickly as ia64, but it eventually will. Then we tried btrfs, and it's broken in other ways: on ia64 it doesn't even survive one build, whereas LVM at least usually survives something like 3 to 5 builds.

So now we have switched to tar-based chroots, and we adjusted a few ext4 options. If you love your data, don't use those; but if you don't, they are very good, because they more or less turn an ext4 file system into a tmpfs-like file system, which is good for build speed; a hypothetical example follows below. And then we noticed that on sparc we can't run the stable kernels, because the stable kernels will just crash the system in another interesting way, and we need to run oldstable kernels. So our current sparc buildds run with oldstable kernels, which makes neither DSA nor us happy.

So this was a fast tour through buildd land and the reasons things are the way they are. This is a BoF, so it's more or less a discussion place; I just wanted to give some common points to start from. Now: discussions, suggestions, whatever. The BoF is your place.
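On those ext4 options: the talk does not name the exact set, but a hypothetical mount line in that spirit, built from real ext4/mount options, might look like this, with the usual warning that it trades crash-safety for speed:

```
# Speed-over-safety ext4 options for a throwaway build filesystem.
# These are real mount options, but the exact set used on the buildds
# is not stated in the talk; treat this as an illustration only.
# A crash can lose or corrupt recent writes, so never use this for
# data you care about.
mount -t ext4 \
  -o noatime,nobarrier,data=writeback,commit=6000 \
  /dev/vg0/build /srv/build
```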
I'll start the ball rolling with multi-arch on buildds.

What do you mean by multi-arch on buildds?

Being able to say that you have a build dependency where, say, you're building on amd64, but you need an i386 package to build. Is there any chance we could get that?

Yes, of course, there's quite an easy chance: send working patches. I mean, just on the principle of the thing: I see a few other buildd people here in the room as well, so please feel free to speak up about that. But my gut feeling is that I don't yet see how we could integrate it in a way that doesn't make things overly complex. This doesn't mean I'm opposed to it, but I would like to see a concept and patches first.

Yeah, we've been pretty leery about turning that on in Ubuntu, even though on amd64 systems we enable i386 by default for users, which I think Debian should as well. But we've been pretty nervous about turning it on on the buildds, just because we didn't really want to run into bugs with packages accidentally being chosen from the wrong architecture or something. And if a -dev package is uninstallable in your native architecture but installable in the foreign architecture, then it's conceivable that apt could install the foreign one instead, which would be quite scary. So I think I'd like to see this turned on only for selected packages in Ubuntu.

Yes, that's what I was also just thinking: if we say we need this kind of package, we have, let's say, an adjusted packages list and adjusted apt preferences, which just allow the packages we actually need. So basically, even at the wanna-build level, we'd have a central file that says: for this package, it's OK to co-install exactly these packages, or, in the case of explicit dependencies, these are OK and others are not. But I somewhat fail to see how the basic code would be written to support this in a way that doesn't break on every occasion.

Yeah, it sort of strikes me that we'd end up with something like Packages-arch-specific. I'd want to automate it a bit more, because Packages-arch-specific really is aging anyway. I don't mean that file in particular, but something that allows you to do per-package configuration for multi-arch.

Yes, and the configuration actually should be in the source package, in the .dsc.

I'm very interested in this as well: I want to build the cross-compilers that way, and it all works if you just do it on a machine. So the question is, how do we enable that on buildds? As you say, we could be quite specific about a fairly small set of packages we do it for; we could have a special buildd with this enabled. Do we have a preference?

No, it's not about special buildds. We could do something like the extra autobuilders that we use for non-free, but for multi-arch. Technically, it's just creating new chroots each time.

I don't actually mind what happens in a chroot, as long as the build is going to happen.

It works because we throw away the chroot afterwards anyway. I think we still have two machines left with no cloned chroots, and they just won't do it, and that's not an issue; those machines also don't build non-free, and they don't build experimental. So creating chroots is something we are now really comfortable with.
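For what "allowing exactly the packages we need" could mean in practice, a minimal sketch of enabling one foreign architecture inside a build chroot; the commands are real dpkg/apt usage of that era, but the package name is made up for the example:

```
# Inside a build chroot: enable exactly one foreign architecture and
# install a single whitelisted foreign build dependency.
# "libfoo-dev" is a hypothetical package name.
dpkg --add-architecture i386
apt-get update
apt-get install -y libfoo-dev:i386
```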
And for me: I build-depend on gcc-multilib, so I'm building 32-bit packages on a 64-bit architecture, and I can declare dependencies which work in an i386 chroot, but they don't when somebody accidentally builds on amd64 without it.

Perhaps an interesting note on that, just a second. What you should be aware of: over the last year we have mostly switched to having lots of buildds which build more than one architecture. So we have, for example, an i386 and an amd64 chroot on the same piece of hardware, which is quite helpful. But that has a couple of other implications, because I think we now have only one i386 buildd, and it runs an amd64 kernel.

Yeah. Multilib, the whole biarch system, is OK for some cases, but it's really not terribly scalable, and it's particularly not suited to the kind of things Wookey is doing.

Yeah, I think this is a dogfooding issue. We're expecting people to build stuff this way for various purposes where it's necessary or sensible; I think we should try to use it in our own infrastructure as well. So I guess maybe a few of us need to sit down and thrash this out.

Yeah, that would be a great thing, actually. I think the answer to the question is yes in principle, but we need to work out how we're going to do it without breaking things. One really important piece is that buildds, let's say, run most of the time without being actively watched. Whatever they do, they just take the final package, sign it, and ship it to FTP master. So we need to be pretty sure that whatever we do doesn't break anything; and if it does break, the package must not be installed, so the result needs to be a failure. But if we pile up two days of build failures, we will have a very hard time with our users. So buildds really are something you shouldn't mess with unnecessarily. It's not a visible part of our infrastructure, but users can start complaining very fast if there are no amd64 autobuilders, for example.

I think there's another question, by Colin. We have two microphones here anyway. Sorry, we're chucking them about a bit. The other multi-arch related thing, and I guess this isn't really a question but a request for help: things like the :any and :native annotations on build dependencies. I sent a stack of patches a while back to get some of that working in sbuild and, I think to some extent, on the buildd branch of sbuild, but I'm not desperately sure exactly which configuration is being used on the build machines, so it's slightly hard to work out what I need to backport. Could I work with somebody here who has access, to sort that out?

Sure, let's do that afterwards. As a general remark, like Mark said: we run the configuration from the buildd-0.64 branch and we sometimes cherry-pick patches from the master branch; if you think something is relevant for us, we just apply it in our branch, and all the patches from us are cherry-picked to the master branch.
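For readers unfamiliar with the annotations Colin mentions, a sketch of how they would appear in debian/control; the package names are hypothetical, and this only helps once dpkg-dev and the resolver on the buildds can actually parse it:

```
# The :any / :native build-dependency annotations under discussion,
# as they would appear in a source package's debian/control.
# Hypothetical package names; illustration only.
cat >> debian/control <<'EOF'
Build-Depends: debhelper (>= 9),
 python:any,
 libfoo-dev:native
EOF
```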
Right, I think there is also an issue about having a suitable version of dpkg-dev. I think the buildd branch has its own fork of bits of the dpkg Perl modules, from libdpkg-perl.

I don't think so. Or at least it used to. Actually, in the buildd branch, or in the same repository, there is another version of wanna-build included, which we don't use, so I'm not sure which tools are in there; I never cared.

I wasn't thinking of that. I was thinking of a few things that sbuild uses for parsing dependencies.

We have gotten rid of those.

Right, you have; but the thing we'd need to check is whether wheezy's dpkg is good enough, and whether we might need to reintroduce those forks in order to be able to parse :any and :native build dependencies.

The current situation for resolving build dependencies is as follows. For the non-overlay distributions, for example unstable, we use apt to do it. For the overlay distributions, for example experimental, we generate a dummy package which contains everything from the dependency line that is relevant for that architecture, install that package via dpkg, and then we say to aptitude: now solve the mess, please.

Sure, sure. OK, well, we can have a look and see what we can do.

Just one thing: I was reading over the bug report about the LVM snapshot issue, and this Monday somebody sent an email saying he backported LVM from unstable and that fixed the issue for him. So that might actually be good news. We probably need to investigate in detail, but it looks good.

Well, for the buildds I would say we have a working solution now, although as a Debian developer I'm personally very, very unhappy with it. But anyway, as a Debian developer I think that if a fix for that issue has been found, we need to make sure it's available in stable.

That's another question. That's what I'm saying, yeah.

I'm pretty sure the answer to Colin's question about :any and :native is that the wheezy version of dpkg does have what we need. I think it's good to go. OK. I just think we should check.

And what we usually do when we start breaking code is that we take one buildd, switch it to experimental, and break experimental first, which is not as bad as breaking unstable.

That's fine, except when you're doing things that depend on the differences between the resolvers you're using for unstable and experimental. So I'm not convinced that building for experimental would actually be a correct test of this, because of what you just described.

Yeah, well, I have a few ideas how we could still exercise that in experimental. Or we could just run things through the unstable buildds but throw away the results, or something. Yeah, or similar things. So we have different levels of testing depending on what we change; there isn't one process that says "this is the way to accept new changes". Some changes are really obvious and we just roll them out everywhere, and we haven't made a mistake too often with that.

We have one other parser of this: edos-debcheck does the checking that a package is installable before it goes to the buildds. I don't know if that supports it or not. That's now superseded by dose. We're using it a lot for analysis, but I don't quite know how it works with this.

Yes, I can just explain how the installability checking works. We generate a fake Packages file and then ask: which of the packages in this file is installable? As we're speaking about source packages, it contains packages named source---<packagename>, or dep---<packagename> to check the manually set dependencies. But yes, in the end it's just a file that looks like a normal Debian Packages file, and we want to know which entries in it are installable and which are not.
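A sketch of what one such generated stanza could look like; the naming scheme is taken from the explanation above, while all field values are invented for the example:

```
# Shape of one generated entry; the file as a whole looks like a
# normal Debian Packages file and is fed to the installability checker.
# All versions and dependencies below are invented.
cat <<'EOF'
Package: source---mypackage
Version: 1.0-1
Architecture: amd64
Depends: debhelper (>= 9), libbar-dev, zlib1g-dev
EOF
# edos-debcheck (nowadays dose) then reports whether
# source---mypackage is installable against the real archive.
```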
Also, we have a small patch in there: we assume that build-essential is always installable, so the Packages file contains build-essential without any dependencies. Otherwise, if parts of build-essential are re-uploaded and temporarily uninstallable, wanna-build would just say "oh yeah, now nothing is installable anymore" and stop working; but usually build-essential is already installed in the chroot anyway, so treating it as uninstallable wouldn't make sense.

I've got something completely different, on the wanna-build side of things. Now we've got Built-Using, and one thing I'd like to see, though it probably means changes to policy, is automatic binNMUs for packages that declare Built-Using, whenever the stuff they were built using changes. I maintain the Windows cross-compiler stack, so I've got binutils for MinGW, GCC for MinGW and so on, and they are built using whichever version of binutils and GCC was current at the time they got built. Something that would be very nice for me is: whenever binutils or GCC changed, my packages got rebuilt as well.

I have two different answers to that. One: I'm a bit unhappy, because I expect we would soon get a very large increase in binNMUs where nobody actually cares why they happen, just because of every little change somewhere. So we'd need some control over what happens there, otherwise not only the buildds get unhappy but the other maintainers get unhappy as well, because we'd be using too much build power for such rebuilds. The other part of the answer: the question of which binary packages need to be rebuilt in order to get the archive into a working state is a topic that's mostly handled by the release team these days. So basically, if you have good reasons to do that, I would start a technical discussion and send pointers to the debian-release and the wanna-build team mailing lists; and the word "pointers" was deliberate in that sentence, because please don't include both lists in the whole discussion. I'm not sure what the outcome will be. Technically it would probably be possible, and perhaps even less hard than the previous subject, but I'm not convinced it's a good idea.
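For context, this is what a Built-Using declaration looks like in a binary package's control stanza; the field and its exact-version syntax are standard, while the versions here are invented for the example:

```
# Built-Using in a binary package's control stanza. Policy requires
# exact "(= version)" references to the source packages whose
# contents were incorporated. Versions below are invented.
cat <<'EOF'
Built-Using: binutils-mingw-w64 (= 2.22-2), gcc-mingw-w64 (= 4.6.3-8)
EOF
```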
I'd like to see some improved documentation on how to set up a buildd. There are these two configuration files, and for one of them it's a mess with the email addresses: whether you put the email address in with a backslash before the @ or not changes from one configuration line to the next. And I couldn't figure out how to run two buildds in parallel.

Not possible. OK, well, it somehow says so in the config... Yes, it was never working, and then we removed it.

And then the global timeouts: there's no example in the package of how to set global timeouts. And that used to be possible when I did this four years ago.

I have a few different answers to that. Yes, for a few things there might be, let's say, documentation improvements possible. However, we're now really talking about buildd, which essentially exists in two different branches. For the branch we maintain, I would say our goal is to document it in a way that makes it possible to set it up the same way it is set up on a Debian system. What we have tried multiple times over the last years is to move configuration from the buildd itself to wanna-build, because it's easier to configure things there for all systems at once. So, for example, with the recent changes you can these days run sbuild without a ~/.sbuildrc configuration file, which wasn't possible up to now, because there's nothing you need to set anymore. We moved everything from .sbuildrc to .builddrc, and from there we can move it into wanna-build and be done.

You should add documentation of what to put into .builddrc... Yeah, you're right, we should probably have some documentation on that. As I said before, I'm interested in providing documentation that allows maintainers to understand how it works and to get similar results at home. By the way, my GSoC student last year wrote documentation on the wiki; it is not perfect, and you probably all know that already, but it is a tutorial that takes a beginner to having a first buildd running.

Yeah, I know that, but some other people might be interested. Things have changed since I set up buildds four years ago: I had them working, and now lots of things have changed.

Yes. One other really important piece of information, I think: any Debian developer can talk to the wanna-build system. wanna-build is no longer restricted to a certain group; you just can't update it. So if you want to see what a buildd gets from wanna-build, you can just ask wanna-build the same way a buildd does. It will refuse to let you make any writes to it, but you can look at it; and the first part of taking a package is "tell me which packages are available", which you can do. And wanna-build has a --simulate flag: if you set that, you can even do a "take", and it will give you the same output a buildd would get, but it doesn't change the database internally. So you can do that all the time if you want. This is a newer feature in wanna-build that I like very much, and it helps with debugging issues, because you can just run a buildd with --simulate, look at the outcome, and nothing will be broken anyway.
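A minimal sketch of that dry-run workflow: --simulate is named in the talk for both tools, while the remaining options are illustrative and may differ on the real hosts:

```
# Dry-running the infrastructure, as described above.

# Let a buildd do a full pass without changing anything:
buildd --simulate

# Pretend to "take" a package and see exactly what a buildd would
# have received, without touching the wanna-build database:
wanna-build --simulate --dist=unstable --arch=mipsel mypackage
```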
I'm sorry, I'm a bit late, so maybe this question has been answered already, but I've noticed there seems to be a certain amount of pain with setting up new chroots when things need to get done; requesting one seems to take a long time until it actually happens, and I wonder if there's anything that can be done to reduce this pain for all involved sides.

Yes. Let's see where we are; I think I need a slide. So: we have a very well working script to set up chroots with LVM. However, we don't use it anymore, because LVM is broken. About that: if there really is a fix for that bug, we should of course first test it and then upload it to stable, as always. The other part is that our scripts need some adjustment for the tar-based chroots we have to use these days. Basically, when we noticed the bug, we converted enough chroots to tar in a sort of emergency operation so that our buildds still work, but we don't have a nice script to set those up. So if anybody wants to fix the script, it's in our buildd branch, I think under etc/, a setup-chroot script or so; I would really like to see that. And that would be really useful for us as well.

Actually, can I just stick this in now? We have a general problem: it's hard to set up buildds, and there are actually two parts to this. There's setting one up like Debian does, which is relatively well dealt with; but there's a whole set of other cases, lots of people, either derived distros or people who just want to rebuild their packages over and over again for continuous integration, for whom all of this is too hard. And there are quite a lot of tools: there's rebuildd, there's the new pybit thing, there's the other buildd that we don't use, the one Roger works on. If we fix the initializing-the-database problem, is that OK; is work ongoing on that; is that where we're at?

Well, actually I would really be happy if somebody would interact with the buildd maintainers list, or there's a mailing list for the wanna-build admins, the wanna-build team list; if somebody were on that list with an interest in making buildds documented for others and submitting patches to that effect, I would be really happy to accept some. The number of patches on that topic that I can write in my own time is a bit limited. But I think your question, Rhonda, was about why buildd maintenance takes so long. It takes so long because we recently lost our automation, which is really bad news for all of us involved; it used to be better. On the other hand, we shouldn't constantly add new chroots: basically we should add new chroots in the cycle directly after a stable release and afterwards just upgrade them. But all of these things are currently not in a good state. There are a few other options, but none of them are particularly appealing and there's no code for any of them; that is currently the sad part of it.

I'm Helmut, and I would like to discuss the debuggability of failing builds. As one of the maintainers of the doxygen package, I'm experiencing a few heisenbugs, meaning the build fails, the easiest solution is to just give it back, and then it works. That makes debugging these builds, and the bugs reassigned from them, almost impossible. I was thinking about some options to fix that. One nice thing to have would be obtaining core dumps from the buildds, so I could investigate the situation after the fact. While discussing this with a few other people, a few more ideas came up: if a build fails during configure, it might make sense to save config.log; and if a build fails with an internal compiler error from GCC, GCC usually tries to compile the file twice, and if the failure is reproducible it saves the preprocessed C source in a temp file location for further investigation. So I would like to ask: what kinds of artifacts can we save from failing builds to make them easier to debug, and how can we achieve that?

Yeah. I'll just start with one part of the answer. We used to have a situation where we deleted successful builds straight away and kept those of failed builds around, but that meant that after a few days we ran out of space and deleted them anyway. In addition, and it's a bit unfortunate, most maintainers don't care enough, so I'm happy to see somebody caring about it, but that's why we are in the current situation. So my question is really: I'm happy to share everything, meaning the build directories, with the appropriate maintainers, but how do we make that work in a way that maintainers can easily access the data and the disks don't get too full?
The main problem why you run out of space is that you use throwaway chroots, and then you need to keep the entire chroot, which takes a lot more space than with non-throwaway chroots, where you only need to keep the build directory.

No, that's... Actually, in my experience that's true. Well, I think it is: basically, if you use a throwaway chroot in an LVM snapshot, you use five gigs per build, period.

We are deleting the build directories: the build is unpacked in /build, and we remove the directories in there. If we kept the artifacts in the build directories, that would be helpful for your case, but we have seen that we run out of space in that directory as well, when a few large packages fail. So we'd need something like: you can recover the artifacts for, say, three days after the build, and afterwards they're gone. That would be an option, and then we'd think about how to reduce it further. Something like that is what we need to think about: how to do it in a way that works, and how you can easily get at the artifacts.

When you get a compiler internal error, the artifact in question is not really large, because we don't actually want the whole build tree with all the artifacts. We just want the preprocessed source code which was generated by GCC when it hit the internal compiler error. That can be like five or seven megabytes, but it's just one file, if we have a good way to find it; and GCC prints where the file is.

A patch would be accepted. Would that be sbuild patches; is sbuild the component that would need the functionality to keep such pieces? It would need to be in sbuild, yes.

The suggestion to just save the build directory doesn't work for me, because currently we don't save core dumps at all, so it would also need enabling core dumps on the buildds. It's not a common problem, but it happens occasionally. And maybe saving the build directory for one week or so, for maintainers to spot the errors: they can fetch it in that time, and if they fail to do that, they'll have to rebuild or the like. These bugs keep popping up over and over again, and just catching a few of them would be great for me.

We have all seen the "10 minutes left" slide three minutes ago, so I really think we should discuss what we could do afterwards on some occasion; but yes, it definitely sounds sensible to me to do something there. I just don't know what the right answer will be.
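One concrete knob in this direction, assuming sbuild's documented purge settings apply to the buildd setup as well; the retention path and period are invented for the example:

```
# Keep the build tree only when the build did NOT succeed, via sbuild's
# purge setting; ~/.sbuildrc is a Perl fragment read by sbuild.
cat >> ~/.sbuildrc <<'EOF'
$purge_build_directory = 'successful';   # purge only successful builds
EOF

# Expiring kept directories after three days could then be a simple
# cron job (the path here is illustrative):
find /srv/buildd/build -mindepth 1 -maxdepth 1 -mtime +3 -exec rm -rf {} +
```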
I've got a question: do you have any plans to make debian-ports official? For now it's quite hard to get a new architecture into this service, and Aurelien doesn't have much time anymore. Maybe the question isn't for you.

The very easy answer is that I can't remember us adding any architecture to our wanna-build in the last eight years, basically since we got this new setup with the databases, which was done quite a few years ago; so we don't have experience with those larger issues. But I think Phil wants to give an answer, so that's better.

Is that really a problem? If you need a new architecture in the wanna-build on debian-ports, you can also ping me; that's what I did. When I did database changes on the main wanna-build, I replicated them there. We currently only run architectures of the main archive on the main machine, and we already need a lot of those 15 minutes to do all the processing.

Yeah, well, there are, or there ought to be, multiple maintainers of debian-ports. Yeah, but I had the feeling that only Aurelien was taking care of it; I wasn't aware you were also involved. I contacted Aurelien and he told me no, the debian-ports machine is already overloaded and we cannot add a new architecture to it. That's another problem: we only have a VM from DSA that's also rather underpowered. I think we need to speak with DSA about that anyway.

Yeah, and because of the way it's done, it's not on the main Debian machine, right? Currently that's true for both the Debian wanna-build and the debian-ports wanna-build: both are fairly CPU-intensive. The wanna-build machine, even if it isn't as bad as it used to be with the Berkeley databases, which were easy to break even with read operations, still has a very high average load, because every 15 minutes we compare all the index files. From one FTP master push to the next, I think we currently use 10 of the 15 minutes running scripts, eight-way parallel.

Yeah. I needed to start a new port just now, and I just asked Aurelien, and he said you need to do these things, and that seems to be going OK; but we need a good mechanism for introducing new ports. So either we need to give more resources to debian-ports, or we should bring it more in-house, and maybe that's easier to do now with David. I must admit to not understanding how all of this works, but I guess it's probably more productive to talk in a smaller group about exactly what we should do.

I think we need to think a bit more about how we could make better use of things, how we could share the resources better, or whatever is sensible; but I think we all agree that it needs to be easy to start a new port. What that means in the end needs a bit more discussion, and we now have five minutes, so that means one or two questions are left, if any.

OK, we managed to shut everyone up. I saw Holger pass by; one of the things I would like to ask is: would it be conceivable to find a way to run things like piuparts on idle buildds? Because we don't have test machines for some of the non-current architectures.

I don't think I would be happy with that. Basically, what I expect is that we'll soon get personal archives from FTP master, and then I assume we won't have any free time anyway. Currently our goal is to build packages for unstable quite fast, and we eventually build experimental, but we have enough architectures which only have a limited amount of spare time. I'm also not sure how compatible the piuparts setup is with our setup. I'm sure a piuparts run doesn't take too long, but from a gut feeling I'm not really happy; perhaps that might change if I'm shown that it works very well.

A next and last question, or don't we have a last one? OK, in that case I would just say thank you very much, all, for joining, for your questions and for this discussion; I think there are a few more things we need to discuss during the remaining DebConf days. Thank you.