So I wanted to talk about Debian autobuilding, because there have been some changes since last DebConf, at least in relation to the maintenance. I'm glad that there are at least some people in here; I didn't expect that to happen, actually, and people told me they wouldn't come because they need sleep. So, between last DebConf and now we had some changes in who maintains the wanna-build infrastructure, which had a bit to do with communication with some of the people who've done it before. Thanks to the DPL and, well, the old maintainers, we've got new people involved to maintain it. The current set of people in the group is Luk, Dato, Marc Brockschmidt, Andreas Barth and me, and Cordrex joined recently, like yesterday or the day before. So there are also new people joining, which is great. And we also have Roger Leigh, who maintains sbuild and buildd and who is very responsive, although he does very much refactoring on the software, which is why we don't currently run the same version as the one in unstable. In short, what did happen since then? We had some additions of new architectures, like recently kFreeBSD in two variants. We moved one architecture away, which is m68k. And we were even asked to re-add an architecture, which is the Hurd; they claim they are working actively on it, and they're still in unstable and in the official archive, so it makes some sense to re-add it. It was all pretty easy, especially the kFreeBSD one; it was very nice to work with the porters to do it. They made sure that the software runs on the system, of course, including schroot, which needed some adjustments. But the main point is that we are currently still trying, because it's not that easy an undertaking, to synchronize the software on the build daemons.
Historically, every build daemon runs some version the buildd maintainer acquired at some point, which was either from the public SVN or some customized beta version that was not ready for public release, which is nice because it's actually running in production, but it's not ready to be released to the public. Now we're trying to get everything onto the sbuild from unstable, which makes it easier for people to actually test with sbuild, because there might be some problems interacting with sbuild. We're currently trying to figure out what the heck happens with Octave, which only seems to fail under sbuild. But it's better to have a version in unstable that people can actually fetch and install, and people are actually doing it, not only using pbuilder. We are also trying to sync buildd, and that also works on the now large set of build daemons, but not all are converted yet, simply due to the fact that there are functional differences between the two versions: we branched off the old public version, and there were some feature additions in the non-public version which are not yet synced, but we are hoping to get that done soon. And, well, we didn't do much work yet on the server side; to be honest, it's still a mess of Perl hashes in Berkeley DB databases, which means that there is no sane concurrent access. We have nice things like the PHP build-state overview pages that most people use, but those are only synced every fifteen minutes, because the data needs to be copied from one BDB database into another BDB database format that PHP can actually read. We're trying to solve this by moving to Postgres at some point, and Roger Leigh is actively working on it, but I don't know when it will be ready. So currently we're still dealing with people in Unix groups to get the right permissions, which is in itself fine, but you get into nice situations when you're trying to do transactions.
Thankfully enough, we got transaction support done by Joachim Breitner, more or less in the last few days, which is basically copying over the databases, locking them and doing stuff. That's actually needed because there is no sane way to say: I need a binary NMU, which the release team does very often, and I want to set a dep-wait at the same time. There's an inherent race condition if you schedule a binNMU and don't put in a dep-wait, because in that window a buildd can actually fetch the new build record and start building, and then it's all for nothing because the build is just dep-waited afterwards anyway. We also try to be a bit more public, which means we got a BTS pseudo-package to assign bugs to, which is mainly used by us to do some tracking of what still needs to be done. There's a new public list now instead of the older mail contacts: debian-wb-team at lists.debian.org. So it's also archived, if people want to read up on discussions. (And there's a hyphen missing on the slide there.) We also got an IRC channel, which is quite active, which might also be a bit bad, because it's not archived publicly and we are discussing quite some stuff there. This slide is mainly to show who's currently involved. We still have every architecture maintained by different people, which makes it hard to say whether they are actively involved in the port in question. It's also some work to keep a buildd running, as somebody posted to their blog some days ago, and keeping the chroots up to date and similar stuff takes time, as does keeping up with the failed logs. So there are some architectures that are, let's say, redundant in some way. There are other architectures where we currently don't have the redundancy to cope with somebody not signing logs for a week, which could easily happen on some architectures. So basically we need to wait at that point if we don't know where the person disappeared to without telling us beforehand. But currently it works okay-ish.
So yeah, that slide also shows a count of buildds. I listed debian.org machines and non-debian.org machines separately, simply because we want to move them onto DSA-maintained machines (I had hoped to have that earlier on the slides) to have control of the infrastructure. That's currently a bit of a strange thing, because we're dependent on some buildd maintainers who don't give us access to the machines, even when the machines themselves were donated to Debian. So we're just trying to get the same set of policies onto the machines, the same set of software, and it greatly helps to have the help of DSA to get that done. Well, we will take care of amd64 in one way or another; DSA told us they have plenty of such machines, which I guess is true, with KVM domains and stuff. There are still ARM boxes to hand over to DSA, well, mainly to ship them to other hosting locations, because they're behind NATs, which DSA doesn't like. The situation for armel is similar, because the non-debian.org boxes are mainly some development boards that are blazingly fast but hosted somewhere at home. But at least we have got four debian.org boxes, which gives us some kind of redundancy, and it would be enough to run stable on them, and security; maybe not keeping up with unstable, but we will work on that too. hppa is a topic that's a bit difficult, because people sometimes seem to care about that port, but sometimes do not. Actually, whenever you ask the mailing list, they want to stay included, but the last news said they are not interested in doing a release, just being in unstable, which makes it hard if you have stable buildds, and one or two, well, those are more or less identical machines. One of them is very crashy; I think the other one is currently stable. Those all seem to be kernel issues.
We had random segmentation faults on them, which makes building very hard, because it's like make segfaulting: you just can't give the build back and you don't know what happened. We're currently even running one unstable buildd on the porter box, simply to keep up with unstable when one of the normal buildds goes away. So we need to solve that one way or another soon, but dropping an architecture is not really our call. And there is kFreeBSD, which is at the bottom of the list, I don't know if one can see that, and that's being worked on, to move it onto DSA machines. It's just that there's currently no installation media to do it in the same way, so that's pending, but being worked on. And there's s390, which is difficult to deal with; I don't know what happens with that, or whether we get access to buildds to actually do building on debian.org hosts. Well, with the exception of ARM mainly, and MIPS too, two buildds are usually enough on reasonably fast architectures. So with one box building and one box of redundancy, like on amd64, it's perfectly fine, at least for the current amount of packages; if we integrate experimental it could need some additions, but we don't know yet. Okay, there were also some maintenance changes around Packages-arch-specific. That's the file I want to show, because not everybody knows this file, I have the feeling. So if you have a package that doesn't build on all supported architectures, and you are not able to just dep-wait on a package it needs (like a mono package which just needs mono to be available, which can be solved by a dep-wait), you should get the package added to this file, at least currently.
And the reason is that dpkg-source used to put just "Architecture: any" into the .dsc when there was a differing set of architectures across the binary packages. That means that even if there was, say, an architecture-all package and then one binary package with just three architectures or something, the .dsc told you "any", and then we tried to build on every architecture that we have, because wanna-build simply didn't have the information to do the excluding. That's why Packages-arch-specific was originally created. Now dpkg-source is a bit more intelligent, but it's hard to tell whether a .dsc was really produced by the new dpkg, because the source format, well, some version numbers, didn't change. We will see if that continues this way. Mainly, what the file does is: if a record is present, it can either be positive, listing all the architectures a source builds on, or negative, like here, excluding just one architecture for the whole source package; those entries need to be added there. They need manual changes, and the way we want them is that you open a bug against buildd.debian.org. Actually, it's written at the top of the file what to do, but requests are still coming in through various media, so we want to standardize on the BTS to keep track of changes and to have people commenting on changes if they are not sane or something. There is also integration of Packages-arch-specific into the buildd overview pages, at least some of them. They are currently sadly out of date, mainly because they are still pulling from the CVS, I think, which did annotation about who made the change; that's also possible with git, but nobody has bothered yet to convert it. It should be easy, though, to just point it at the new file, and then you don't get revision information. For completeness I also listed the URLs, but I think it should be googleable.
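To make the positive/negative distinction concrete, here is a sketch of what entries in such a file look like. The package names are made up, and the exact syntax is illustrative; the real Packages-arch-specific file documents its own format at the top.

```
# '%' marks an entry keyed on the source package name, '!' negates an
# architecture, and '#' starts a comment.
%foo: i386 amd64 powerpc    # positive list: foo only builds on these
%bar: !m68k                 # negative: bar builds everywhere except m68k
```

Either form gives wanna-build the per-architecture information that an "Architecture: any" .dsc field alone could not express.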
So, in the course of introducing new lists, the contact addresses for some things changed a bit. For binNMUs it used to be debian-release, and it still is, mainly because when you need binNMUs it's mostly about transitions (unless you're fixing up some quirk in, say, libc6 because the package's dependencies were a bit strange), and the release team needs to be on top of that. But if you have give-back requests that cover the whole set of architectures, because the package failed to build on all architectures or something, the best way is to contact debian-wb-team and we will tend to it. Mostly the same for dep-waits. There are differing opinions on whether the buildd maintainers should instead do them, but because there are varying levels of buildd maintenance it's hard to generalize, and I think the most sane bet, mostly because dep-waits affect multiple architectures, is to just contact debian-wb-team. If it's about the software we are running: well, it's almost the sbuild from unstable, but not quite yet. I think Roger actually uploaded a buildd package to unstable, but nobody tried that version, because it's not the version we actually use on the buildds. We are still taking care of the buildds somehow, but if you want improvements in sbuild, the list is buildd-tools-devel; oh, that one is wrong, at least the alias at lists.alioth.debian.org is the right place to ask. Well, file a bug against sbuild and it will eventually pop up on the official autobuilders, or maybe quicker if you ping us with a patch. We are currently maintaining a separate branch in the repository for our software, so it's all public; it's the buildd branch in the sbuild repository. Okay, and there are also the architecture aliases at buildd.debian.org, where you reach the set of maintainers I mentioned earlier.
In fact, I composed that list from the mail aliases file. If you have problems with specific builds or specific architectures, if your package was built but somehow managed to vanish, or didn't get a log, which is currently, sadly, more often the case on one PowerPC autobuilder than it should be (mainly because the mirror is botched and sbuild can't deal with the extensive caching they are doing, so we get a mismatch of the Packages files versus the other files), then you can ask us. The point is that the wb team currently doesn't have access to the buildds, which is somehow being taken care of: a new group was recently introduced to have at least login access to the buildds, but it of course turns out that if you're not in the right Unix group you can't read stuff, so it doesn't actually help on some buildds. Architecture-specific give-backs I already mentioned, I think. We also changed the format of requests a bit, mainly because the wanna-build native format is a bit cumbersome. Dato wrote a wrapper, which a few of you might have seen if you are following debian-release or debian-wb-team. It's convenient because you can do stuff like specifying which architectures should be taken care of, even with excludes, in a uniform way; otherwise you would have to spell it out for wanna-build, "I need that build database for that architecture", multiple times, because you want to cover all architectures. The format is: if you do an NMU you say "nmu", then source package, underscore, version, which is a pretty common format for wanna-build, then the set of architectures, and the reason. It's not that you actually need to do this when you request binNMUs; we will take care of it. But if it's a mass binNMU, because you're doing some transition for, well, programming languages that need binNMUing every time they get a new compiler or something, it really helps if we get a set of NMUs and appropriate dep-waits.
For dep-waits it's similar; there's even an example with an epoch, which we use in that wrapper, and the -m message is actually what we want to dep-wait on. If you're more interested in that, you can look at the source of the tool, which is available in the release tools git (I don't know if it's linked from there, it doesn't seem to be), or just ask on the list so we can give you pointers about how to do it. It's just more convenient for us, but it's not a precondition. Then I wanted to talk a bit about auto-signing, which has been floating around as a proposal since the Extremadura QA meeting. What we want to achieve with that is a faster turnaround for package builds, which means that we can clear dep-waits faster. That's especially interesting if you have a long chain of dep-waits with multiple levels, which is often the case with languages that need massive binNMUing, like OCaml and Haskell. It speeds up transitions: if you provide a fix for a bug, you get your upload built instantly, and if you want to get a transition done, like the OCaml transition, it doesn't actually take that long to get all the builds done. Currently it needs more iterations: building the package, the buildd maintainer signing the log, the buildd automatically uploading it, installing it into the archive, then clearing a dep-wait, starting the next build and waiting for the buildd maintainer again. So it can take several days, or if people only sign three times a week, it can take much longer than it should. Of course we still have the default ten days on uploads, but if you have a transition almost ready and are only waiting for some last fixes, it would be convenient. We propose to do it for packages with priority optional and extra, which means that packages with a higher priority still get review.
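As a rough illustration of the request format just described: the package names, versions and dependency below are hypothetical, and the exact spelling may differ from what the wrapper actually accepts, so treat this as a sketch rather than a reference.

```
# A binNMU request: action, source_version, architecture set, reason.
nmu foo_1.2-3 . ANY . -m 'Rebuild against libbar2'

# The matching dep-wait, with an epoch in the dependency; the -m
# message is what the build should wait on.
dw foo_1.2-3 . ANY . -m 'libbar-dev (>= 1:2.0-1)'
```

Filing the binNMU and the dep-wait together is what closes the race condition mentioned earlier, where a buildd could pick up the new build record before the dep-wait is set.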
This was actually, I think, proposed by FTP masters to be done like this. The whole proposal is still not set in stone, and I'm very open to input if people have concerns or counter-proposals. Other packages would still get checked, and of course failed logs get mailed as usual, so the only thing that goes away is successful build logs for optional and extra, which is the majority of the build logs you actually get, and which people don't check that thoroughly anyway. It's not that, when you sign a log, you're reading the whole log from beginning to bottom; you can do some regular expressions, but I don't think we suffer that much from dropping that, because we have to trust the buildd anyway, since it does the building, and if the chroot is somehow dirtied with bad stuff, we get screwed anyway. There are some preconditions we want before implementing auto-signing. The first one is a DSA-maintained buildd, for the reason that we have control over it and it's included in the usual monitoring checks by DSA, tamper checks in case somebody played with binaries on the host or something. What FTP master also wants to see is that it's not possible to download malicious code from the internet during builds, which is of course sane, because it also affects the reproducibility of the build. So we will limit at least inbound traffic for the normal build process. I don't know if outbound traffic could also be filtered; well, it can, but I don't know if that's really a concern. And the machine needs to be restricted, so it probably won't be on a porter box. From the audience: the point was that we should also limit outbound traffic, because we don't allow this anyway. That's true, but currently every buildd for the Debian archive has got internet access. So all the failures to build were actually discovered by Lucas in his archive rebuilds? Well, that may be true now, but it wasn't always true; there have been cases where outbound traffic was, for instance,
going over a very, very slow line. Okay, so, sure, we don't want it to interfere, and I think if we limit inbound traffic you can't produce much outbound traffic; well, theoretically you can, but true. There are also some practical considerations about how to do the signing. It was proposed to use GPG keys on the buildds that are actually subkeys and are rotated; the FTP masters' proposal involved rotating every three months, which may be a bit often, but that's still to be determined, and we will eventually work something out. The current state is also that they will need to be passphrase-less, simply because it's not possible yet, I think, to query gpg-agent whether a key is actually unlocked, because the protocol doesn't support it. So if anybody knows something about gpg-agent and whether this is actually possible, tell us. From the audience: that's actually one of the reasons why I'm not very fond of auto-signing, because you need to have a gaping security hole on the buildd; everyone who can get in there without leaving traces has a key to upload packages to the archive, and nobody will know. Well, the point is to have a restricted upload queue that only accepts builds from the buildds' IPs; that would mitigate that somewhat. Yeah, indeed; of course you can still upload from, I don't know, project machines or from buildds. I don't know how the restriction will be done, especially with dynamic IPs, maybe uploading through some sort of jump host or something. Of course that's a concern, but we want to rotate the keys often enough. And we currently already have some sort of issue with the successful build log, with the path it takes from the buildd to the buildd maintainer, which isn't entirely secure. That's true, and if you don't watch it... I mean, I have appropriate checks in place that logs don't get accepted from anywhere that's not a buildd, but I don't know about the policies of other people; as far as I know there are no policies. Yeah, right. Well, Dato posted that at some point to some list. And the keys should actually be
architecture-limited and, oh, I didn't write that, they should not be able to upload source packages. So the most you can do is upload a malicious binary for one architecture, which can of course have an impact; if you get amd64 or i386, for example, it can get very bad, but I think that should limit the exposure quite a bit. Okay, I'm almost done with what I wanted to talk about before taking questions, but I wanted to say what's next. We currently have to move one machine away, because it's being shipped to another hosting location and it can be down from one day to the next if HP decides to actually move the machines, so we should do that within the next one or two weeks; DSA doesn't know when the move will happen. We really want to integrate the unofficial autobuilders into the debian.org setup, which is a bit of a pain, because people are uploading packages into experimental that aren't actually, well, buildable or something. At least on the unofficial infrastructure it doesn't make very much sense to really care about the build failures, because they are so diverse and you don't have a consistent set of packages. And we want to do the same with volatile, and I don't know what the state will be with backports, because I don't know whether it will be integrated into the official infrastructure; I think it's planned, but I don't know. Further: putting buildds under DSA's control, which affects ARM and s390; bringing the software further in sync, like installing it on the remaining debian.org machines, which get pushed by puppet whenever there's a new version available in our archive; and pulling wanna-build onto a sane database that's actually queryable. I mean, there is actually a script running from cron that exports all the wanna-build data and imports it into a Postgres database, which is then in turn used by the release tools to determine which transitions need caring for, but that's obviously not a solution for the future. Okay, so, questions and comments? Anyone?
Well, it's just an idea of something that would be useful to me, maybe, later. It's related to the recent problem we had with bad symbols files. It would be interesting if we could log somewhere on the buildd when each package was built, or keep it in a database or something that could be easily integrated into UDD, so that we can query which packages have been rebuilt on which arch between two specific dates, for example. And in the same spirit, it would be interesting to be able to know the set of packages that have been rebuilt with a specific version of dpkg-dev, or stuff like that. Yeah, sure; the problem with that is that the transaction log that's produced by wanna-build contains security-related changes, so we can't just open that up. Well, wanna-build does keep a state-change date. Yeah, that's the transaction log. Okay. Well, it's hard that way, but you should be able to use that to figure out what's going on, if indeed the security-sensitive parts are filtered out; it's not solid in there. Of course it can be filtered and opened up in some way; if you really are interested in doing that, it shouldn't be much of a problem to set up a cron job that just filters the transaction log, puts it into /stats, and then you fetch it from there. It's not a really nice solution, of course, but that's what we can get currently, and it wouldn't be a problem with Postgres, because there we'd get our state changes tracked. But wouldn't it be enough to simply have the list of installed packages in the build log, so you can just grep the build log for it? Well, yeah, of course, all the information is there, you just have to do the data mining. I mean, we have all the logs, of course. But we don't have the list of installed packages in the chroot. Just one point: I think the logs are good, but we don't upload logs for what maintainers have uploaded, so we don't have the whole set of build logs.
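To show the kind of query the questioner has in mind, here is a minimal sketch. The schema (table and column names) is entirely hypothetical, and sqlite3 stands in for the Postgres database discussed above; a real implementation would query the filtered wanna-build state changes instead.

```python
import sqlite3

# Stand-in for the state-change tracking a Postgres-backed wanna-build
# could expose; names and sample data are made up for illustration.
db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE state_log (
    source TEXT, version TEXT, arch TEXT,
    state TEXT, changed TEXT)""")
db.executemany(
    "INSERT INTO state_log VALUES (?, ?, ?, ?, ?)",
    [("octave", "3.0-1",  "powerpc", "Installed", "2009-07-20"),
     ("octave", "3.0-1",  "i386",    "Installed", "2009-08-02"),
     ("ghc6",   "6.10-1", "i386",    "Installed", "2009-08-03")])

def rebuilt_between(db, start, end):
    """Return (source, arch) pairs that reached Installed in [start, end]."""
    rows = db.execute(
        "SELECT source, arch FROM state_log "
        "WHERE state = 'Installed' AND changed BETWEEN ? AND ? "
        "ORDER BY source, arch", (start, end))
    return rows.fetchall()

print(rebuilt_between(db, "2009-08-01", "2009-08-31"))
```

The same shape of query would answer the dpkg-dev question, given a column recording the toolchain versions present at build time.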
It depends on the version of sbuild you're using, but recent versions, well, some recent versions of some branches, it's a mess, will actually tell you, at least for the toolchain packages, which versions are installed. Not everything, of course, that's true, but at least the toolchain. Okay, there's of course the point that you can have packages installed that are not toolchain and do not appear in the build log; that's true. But that could be added easily to sbuild, producing a dpkg listing of what's currently installed, if you need that. Somehow I remember a wishlist bug against dpkg-dev that requests including that information either in the changes file or in the log. Could it be that we find a way, later, to upload the build logs of the maintainer together with the source? Well, the point is to discard the maintainer's binaries and rebuild them. Yes, okay. And then we take care of that automatically: we don't get source-only uploads, but we will eventually get the binaries discarded after FTP master applies some lintian checks on them. Okay, that would solve the concern, yes, right. Something else? There are often problems with outdated, I mean slightly outdated, or unclean build chroots. Have you considered using LVM snapshots, since sbuild supports that using schroot? Considered, yes. Actually getting it implemented is something that involves DSA, because the hosts currently don't have LVM setups, and I don't know the current position of DSA with relation to LVM on buildds. And you can of course run into the problem that your snapshot space is exhausted, well, full, if you don't make it big enough. It also doesn't fully solve the problem, because then you just move from having to update the chroot to having to update the LVM volume the snapshots are based on. Yeah, but you don't get it unclean; it's like having a tar.gz you unpack every time. That's right, that's right.
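For reference, the setup the question alludes to can be sketched as a schroot configuration like the following. The volume group, logical volume and chroot names are made up; treat this as an illustration of the mechanism, not the buildds' actual configuration.

```
# /etc/schroot/chroot.d/sid-snapshot (illustrative)
[sid-snapshot]
description=Debian sid snapshot chroot for sbuild
type=lvm-snapshot
device=/dev/vg0/sid
lvm-snapshot-options=--size 2G
groups=sbuild
root-groups=root
```

Each build session then gets a fresh copy-on-write snapshot of the source volume, which is discarded after the build, so only the source volume has to be kept up to date.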
I mean, there's still work to be done, and it's not actually solving the problem. Well, plus you add a lot more work for the buildd, which would have to update the specific packages every time as opposed to just once. While that's true, that should only be a problem on I/O-constrained buildds. That's true, but it's still there. Yeah, and I/O-constrained buildds are actually in use on armel, where the development boards are actually fast, I think, but the I/O is bad. So, theoretically, one could unpack one common state and upgrade it every time; of course that would cause I/O churn, but it would actually solve the problem, except for the fact that sometimes you break your environment. It also makes it easier for an architecture to get into trouble if it doesn't have enough build daemons, because if it needs to do more between every build, it's wasting more time. That shouldn't be a problem in the usual case, but when it is a problem, you suddenly need new hardware sooner. Yeah, sure, it's some kind of trade-off. I mean, on some architectures it's no problem at all, they are mostly idle anyway, like 90% of the time, and on the other architectures, the slow ones, there's such a huge gap that I think we could handle them differently if we want to; I don't know if it's really a general case. I mean, if we still have the problems, then only on a limited set of architectures; on the other ones we would rebuild quickly anyway. That's right, yeah. I don't know where we are time-wise, probably well ahead; fifteen minutes left, according to my watch. Ah, just saw something in the back. Okay, so, anything left you want to ask or comment upon? Wookey? What about automating lintian checks at the end of the build? We won't do that at the end of the build; in my opinion it should be done on FTP master, because otherwise you have differing checks between buildds: some buildds maybe do it, some don't.
It also depends on whether you allow maintainer uploads or not, and whether maintainers are doing lintian checks on their own packages, so I think it should just be done on FTP master. I don't see the point of doing lintian checks at build time; it takes I/O and takes time, and I don't see why. Wookey wanted to ask something. Yeah, this is kind of a little bit off-topic, really, but I've got a rather crappy buildd for cross-building things, and I guess the question really is: who's the best person to talk to, to work out how to make the wanna-build system actually understand how to cross-build things? Well, wanna-build itself should just work, because you feed it Sources and Packages files. Exactly, it's the scripting part, the sbuild part, I guess, that needs it. Then you should talk to Roger Leigh. Sorry, who? I don't know how to pronounce it. Ah, right, okay. Sorry. He's the sbuild maintainer; you can file a bug, and he is pretty responsive, so I think you can work something out. Yes. So. Just one question: there's been some talk about implementing ddeb support in Debian. Do you know if there are any plans for implementing that in sbuild and such on the buildds? It doesn't affect them; that's my stance on it, because the actual proposal is to add ddebs to the changes files, to add ddebs to the normal build process, so it doesn't affect the buildds. You don't need extra support in sbuild, because it's just done in unstable for every package. Well, I don't know if it will be implemented this way, because it probably blocks on other people, but my take on it is that you don't need extra support, because you don't put them somewhere different; you can just handle it on FTP master when it comes in, and move it somewhere else if you don't have enough space on FTP master or something. So that should just work once a version of whatever toolchain package is responsible for doing it (I think debhelper, possibly) lands. Yeah, I know.
Once it gets included in the changes file, we are done. If you get rid of the binary packages the maintainer uploads, do you also get rid of architecture-all packages? We could. Well, we don't know how well we'd be able to autobuild architecture-all packages, but it would be no problem to just assign them to one buildd. Especially in the case where you are rebuilding everything anyway, you can just say: on i386, also build the architecture-all packages. Of course it gets a bit hairy if you need to implement architecture affinity, because some architecture-all packages actually need to be built somewhere else, like on sparc or powerpc; I don't know, there was some firmware thing that has that, and we don't know whether it affects the content of the architecture-all package if you build it on different architectures. But those are bugs anyway, so we need to find them at some point. Okay, thank you for coming despite breakfast.