Anyway, I wanted to give a short talk, or actually a BoF, and I'm not sure how well it will work. So, to basically explain what my Google Summer of Code project is, what I was doing there, and some other ideas for dak I had. And to discuss what other people would like to see in dak, which doesn't work that well today, I guess. Well, IRC doesn't like me right now, but it was working before. So there are basically four parts. The first part is a summary of my current Summer of Code work, which is to introduce multi-archive support in dak. The other topics are the Debian Maintainer permissions; people might know that I wrote a mail to the list about this. Then there's an idea for a remote control interface for dak, which was basically already introduced in the Debian Maintainer changes I suggested, and what else we could do with it. And then the part that probably won't work that well today: I wanted to know what other people would find useful to see in dak. So, this is about the multi-archive support. Inside dak we have several places we install packages to: suites, policy queues, and build queues. Suites are basically what people see, that is unstable, experimental, stable, testing, and also things like stable-proposed-updates. Policy queues are the NEW queue, or the moderation queues that the release team uses before packages are accepted into proposed-updates, so proposed-updates-new. And build queues, this is basically incoming.debian.org: it's used to give packages to the build daemons before the next mirror sync, so that they can already use them and don't have to wait six hours. Currently, in the existing dak code, there's basically extra code to handle each of these. Each of these implementations has its own bugs, so it's a bit annoying to have three of them. For example, the main archive doesn't use apt-ftparchive anymore, but it's still used for the build queues.
Also the decision which files to keep is basically re-implemented three times. So it would be nice if we had a single implementation that could handle all of this. Then we would like to merge in backports.debian.org, which would reduce the administrative overhead, because we'd have one less dak installation to maintain. And then there was the idea of introducing developer repositories, which is something PPA-like, but not entirely the same. One use case for these would be staging transitions before uploading them to unstable. We have experimental for this, somehow, but it doesn't work that well: if a transition involves several packages and you would like users to test them, then they would need to change pinning for all of them. Some teams already have their own repositories; for example, I know that the KDE team has one. So maybe we could integrate that into the official archive. Maybe also things like mozilla.debian.net. The plan was basically to add support for multiple archives. So instead of just having what you see as ftp.debian.org, dak would also implement additional archives, which look similar, but are either not public or exported to a different URL, like backports.debian.org. And a private archive, which looks basically the same but is not exported to a public place, would be used to replace the build and policy queues. Actually, there are now three of these private archives: one is for NEW and byhand, which is a bit special for various reasons; then there's an archive for the build queues and another archive for the remaining policy queues. And if you want to merge backports.debian.org and keep it on a separate mirror network, then it could just become an extra archive in the setup. The current progress is basically: I have a working version, but it is not merged yet. In the process of writing this, I have actually rewritten a fairly large part of dak itself.
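The layout described here, one dak installation handling several archives, each with its own suites and its own visibility, can be sketched roughly as follows. Note that the class names, field names, and paths are my own invention for illustration, not dak's actual database schema:

```python
from dataclasses import dataclass, field

@dataclass
class Suite:
    name: str                  # e.g. "unstable", "buildd-unstable"
    archive: "Archive" = None

@dataclass
class Archive:
    name: str
    path: str                  # where this archive's dists/pool tree lives
    public: bool = True        # private archives replace build/policy queues
    suites: list = field(default_factory=list)

    def add_suite(self, name):
        s = Suite(name, self)
        self.suites.append(s)
        return s

# One dak installation, several archives (names/paths are hypothetical):
ftp = Archive("ftp-master", "/srv/ftp.debian.org", public=True)
backports = Archive("backports", "/srv/backports.debian.org", public=True)
build_queues = Archive("build-queues", "/srv/incoming", public=False)
policy = Archive("policy", "/srv/policy", public=False)
new = Archive("new", "/srv/new", public=False)   # NEW/byhand is special

for name in ("unstable", "experimental"):
    ftp.add_suite(name)
build_queues.add_suite("buildd-unstable")

installation = [ftp, backports, build_queues, policy, new]
private = [a.name for a in installation if not a.public]
```

The point of the sketch is that the build and policy queues stop being special-cased code paths and become ordinary suites in non-public archives.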
So, basically anything that handles uploads. This is process-upload; then, to move packages from the policy queues into the archive, there's process-policy, which is also rewritten now, because there basically are no special policy queues anymore. And process-new also had large changes. For the build queues there's manage-build-queues, which was also changed a bit; it's now rather simple because it doesn't need any special logic anymore. You might wonder why I actually rewrote these parts. It would have been possible to hack the changes into the existing code, but I was getting annoyed by it. The main problem with the existing code, especially process-upload, was that if you changed some parts, it would unexpectedly throw an exception at a later place, and it doesn't really handle these exceptions. So it might have already installed some files and done some database changes, and if there's an exception anywhere later, it wouldn't revert these changes, which is not very robust. It would be nice to have the archive, the file system, and the database always in a consistent state, so I decided to simply rewrite these parts. It basically works: I did a lot of testing during DebConf and DebCamp and fixed a lot of errors in the new implementation. Sadly, there's no test suite for dak itself, so it's quite annoying to find them. I'm now preparing to merge the remaining parts that are currently not merged: some parts are already in the official repository, but basically the entire rewrite is not there yet. There are still some minor features missing; I don't think they are important to have right now, so it's basically things we would like, but they are not essential. And there are some security.debian.org-specific parts left, so the security archive cannot be updated right now.
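The consistency problem described here, where some files are installed and some database rows written before an exception aborts the run, is the classic case for wrapping all the work in one transaction that also undoes filesystem changes on failure. Below is a minimal illustration of that pattern, not dak's actual code; it uses sqlite3 as a stand-in for PostgreSQL and invented names throughout:

```python
import os
import shutil
import sqlite3
import tempfile

class ArchiveTransaction:
    """Copy files into the pool and record them in the database;
    on any exception, roll back the database AND remove the files."""

    def __init__(self, db, pool_dir):
        self.db = db
        self.pool_dir = pool_dir
        self.installed = []

    def install(self, src):
        dst = os.path.join(self.pool_dir, os.path.basename(src))
        shutil.copy(src, dst)
        self.installed.append(dst)
        self.db.execute("INSERT INTO files (name) VALUES (?)",
                        (os.path.basename(src),))

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc, tb):
        if exc_type is None:
            self.db.commit()
        else:
            self.db.rollback()
            for path in self.installed:   # undo filesystem changes too
                os.unlink(path)
        return False                      # re-raise any exception

# demo: a failing upload leaves no trace behind
pool = tempfile.mkdtemp()
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE files (name TEXT)")

upload = tempfile.NamedTemporaryFile(suffix=".deb", delete=False)
upload.close()

try:
    with ArchiveTransaction(db, pool) as t:
        t.install(upload.name)
        raise RuntimeError("something failed later in processing")
except RuntimeError:
    pass
# neither the copied file nor the database row survives the failed run
```

The design point is simply that the commit happens in exactly one place, at the very end, and everything before it is reversible.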
For details, you can always see my post on the mailing list; it also has all the comments. There's also a publicly accessible repository with my current work in progress. This is how it looks now: you can see that an upload to NEW now also appears in the output from dak ls, because it's just another suite at this point. And in the second listing, you can see that a new package that was just accepted from the NEW queue is now in unstable, in the build queue for unstable, and in what is used to generate incoming.debian.org. Maybe we can look if there are any questions on IRC so far. Okay, so basically there's an entire repository at ftp.debian.org: it has multiple suites like unstable, experimental and so on, and the pool directory with all the binary and source packages. This is an archive, and multi-archive support means that a single installation of dak can handle multiple such sets. Currently we have three of them in Debian: the official archive at ftp.debian.org, then backports.debian.org and security.debian.org. security.debian.org is a bit special, but we would like to merge at least the first two of them. Oh, somebody volunteered to write a test suite. Or was volunteered. Okay, apparently there are no questions so far, so we can go on to the Debian Maintainer permissions. This is nothing new; it's basically what I said on the mailing list. Currently, to allow Debian Maintainers to upload packages, we use the DM-Upload-Allowed field in the source package. We take its value in either unstable or experimental and check that the Debian Maintainer is also listed in the Uploaders or Maintainer field of the latest source package. In addition, we check that he is also mentioned in the Changed-By field of the current upload, so DMs cannot sponsor other people's updates of packages. There are various reasons why this should not be in the source package.
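The current check described above can be summarised in a small sketch. This is a simplification with invented function and variable names, not dak's actual implementation: the upload is allowed only if DM-Upload-Allowed is set in the archived source, the DM appears in Maintainer/Uploaders there, and the upload's Changed-By names the DM:

```python
def dm_upload_allowed(source_in_archive, upload, dm_name):
    """Simplified sketch of the DM-Upload-Allowed check.

    source_in_archive: fields of the latest source package in
                       unstable or experimental.
    upload:            fields of the .changes being processed.
    dm_name:           name/e-mail of the uploading Debian Maintainer.
    """
    # the field must be set in the archived source package
    if source_in_archive.get("DM-Upload-Allowed", "no") != "yes":
        return False
    # the DM must be listed as maintainer or co-maintainer
    listed = (source_in_archive.get("Maintainer", "") + " " +
              source_in_archive.get("Uploaders", ""))
    if dm_name not in listed:
        return False
    # DMs may not sponsor others: the upload must be their own
    return dm_name in upload.get("Changed-By", "")

pkg = {"DM-Upload-Allowed": "yes",
       "Maintainer": "Jane Doe <jane@example.org>",
       "Uploaders": "John DM <john@example.org>"}
own = dm_upload_allowed(pkg,
                        {"Changed-By": "John DM <john@example.org>"},
                        "John DM <john@example.org>")
sponsored = dm_upload_allowed(pkg,
                              {"Changed-By": "Someone Else <x@example.org>"},
                              "John DM <john@example.org>")
```

The second call illustrates the sponsoring restriction: even a listed DM is rejected when the Changed-By names someone else.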
The main reason is that it's not related to the source package as such. It also makes no sense, for example, for derivative distributions, because this is a Debian-specific field. Also, the current field allows all people listed in the Uploaders and Maintainer fields to upload the package, but we feel that it should be a setting to only allow specific Debian Maintainers to upload a package, which would only be a small change. Basically, I have the impression that the original GR did not really take this into account; it was just not thought of at that time. We would also like to get away from matching user IDs. That's a bit annoying because the archive only knows about a single user ID, and sometimes this is not the same user ID people would like to use, which means you have to ask the ftpmasters to change the user ID dak expects. That's not so nice. So instead we would like to just use a key fingerprint, which is always correct. Also, should we introduce developer repositories, it might not make sense to tie the upload permission to unstable or experimental: say the KDE team stages a new KDE release in their own repository, they might already want to allow Debian Maintainers to upload the packages even though they are not yet in unstable or experimental. So instead we would like to introduce an interface in dak to control the DM permissions that is not tied to the source package itself, and entirely drop anything related to user IDs. The current idea is to introduce some sort of command file, similar to the command files dcut uses, which would contain several stanzas saying what should be done. The first section is basically a bit boring: it says which archive this control message is for, so that if anyone else runs dak, the message cannot be replayed there. Then an address where any response should go to.
So you can choose which mail address you use. Then there's basically a command field for which action to take; here it would say that the message is about Debian Maintainer permissions. Then you would give the fingerprint of the Debian Maintainer's key, and then you can basically say which packages to allow uploads for, and which packages to revoke the upload permission for again. Then there's a reason field, which I think is more useful when denying upload permissions again, so that other people have an idea why this was changed. I'm not sure if it should be mandatory or not, but we can decide that later. So now we have a nice remote control interface for dak. Just using it for Debian Maintainer permissions would be a bit boring, right? So once we have this, we can implement some other things that people have asked for. The first point here is to move packages from experimental to unstable without requiring an upload. This is useful again for things like KDE, where they first upload everything to experimental, see that everything builds, and then currently do a new upload to unstable which doesn't change anything but requires all binary packages to be rebuilt. Yes? Sorry, don't you need to do an unstable rebuild to be sure you've got the right set of libraries, because the set in experimental isn't necessarily up to date? So you really should do a new upload built against the current libraries. Well, when you build in experimental, it usually takes only packages from unstable anyway, except for those that specifically require a newer version. So people would have to take care of this. It would be a bit easier if you have a custom developer repository, because then you can ensure that there are no unrelated packages involved. And that's the second point: we would then also like to allow moving packages from developer repositories to unstable.
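Coming back to the command file sketched earlier: put together, such a file might look roughly like the following. The exact field names here are my guesses based on the description above, not a finalised format:

```
Archive: ftp.debian.org
Uploader: Jane Doe <jane@example.org>

Action: dm
Fingerprint: 0123456789ABCDEF0123456789ABCDEF01234567
Allow: hello libfoo
Deny: oldpackage
Reason: oldpackage is now team-maintained
```

The first stanza pins the message to one archive (so it cannot be replayed against another dak installation) and gives the reply address; the second stanza carries the actual permission change, keyed by the DM's key fingerprint rather than a user ID.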
And as developer repositories are managed by the people who operate them, we could also allow removing packages from there without having to file a bug against ftp.debian.org. So that's basically it. I have no other idea what we could do with this, but maybe other people have. So let's look if anything has happened on IRC. Staging transitions outside experimental, people seem happy with that. Did I understand correctly that they are rebuilt? If a package is moved, it is not rebuilt. So I'm not sure what we want to do there. We could basically also allow source-only movement between suites, but then the buildd network would somehow need to know that it needs to change the binary version. Because otherwise, if you upload a -1 to experimental, then move only the source package to unstable, and then the buildds build the binaries for that, they would have the same version as the binaries in experimental but be different, so the archive would reject that. For the migration, we could just apply the same sort of rules we use for whether things can migrate to testing: you check whether it's built against up-to-date libraries or not. If it isn't, then it can't migrate, and you do need to do a new upload. Well, currently we don't really care about dependencies in dak; that's only handled by britney for testing migration. So that would be an entirely new feature. Okay, pabs suggested that after a source-only movement it could be handled like an automatic binary non-maintainer upload (binNMU); that could probably be done, but it would be something for the buildd people to implement. And until they have decided whether that is reasonable or not, we could probably start with only allowing source-and-binary movements. Okay, this is already basically everything I wanted to say.
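The version clash described above is the one binNMUs already solve, by appending a suffix to the binary version so it differs from both the source version and any binaries in other suites. A tiny sketch of that convention, as my own illustration rather than buildd code:

```python
import re

def binnmu_version(source_version, n=1):
    """Append a binNMU suffix (+bN) so binaries rebuilt after a
    source-only move get a version distinct from the source version
    and from binaries already present in another suite."""
    return "%s+b%d" % (source_version, n)

def is_binnmu(binary_version):
    """True if the version carries a binNMU suffix."""
    return re.search(r"\+b\d+$", binary_version) is not None

# a source-only move of hello 1.2-1 from experimental to unstable
# would have the buildds produce binaries versioned 1.2-1+b1
rebuilt = binnmu_version("1.2-1")
```

Because "1.2-1+b1" sorts higher than "1.2-1" in Debian version ordering, the rebuilt binaries do not collide with the experimental ones.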
So, I was wondering how many people have actually used either dak or projectb, the database behind it. But there aren't that many people here to ask. Who of you has already used dak or projectb? Okay. Yeah, I've used it. One of the things that I'd really like to see, and maybe you guys have already started down that path, I've heard some noise that maybe you have, is making it much easier to install and set up dak on a specific instance. Currently I ripped things out of Git and then spent some time massaging the PostgreSQL files and learning to ignore some of the things in the configuration files in the Git repository that don't actually do anything. It would be really, really useful, to me anyway, to have either just a dak package that you can install that did the right thing, even if it was a bit behind what Debian itself was using, or at least extremely good instructions on what to do to get a running dak instance. I mean, I assume you do some sort of testing, so some sort of system that would set it up, run dak for a bit, and check that. When did you last try to set up dak? I set it up in March. So have you seen the README on how to set up dak in the setup directory? I looked at one of them, but I haven't looked at it recently. So I probably need to go back and check that out and see if it's up to date. It was written sometime last year and basically explains how to set up dak. It also includes all the necessary steps to generate a minimal configuration file, and basically just blindly following it should work to get a dak setup that has an unstable suite with two architectures. Yeah, I think I looked at that and started working with it, but I needed some of the other things, like a testing and a stable suite. So yeah, I think that helped me; I do remember this document.
I do remember reading it, and it helped me get started a lot, because otherwise I basically would have been harassing people on IRC all day long. But it would still be nice to have the configuration file more completely documented, so you have some sort of idea of what you're actually doing and, when it breaks, whether it's because you don't understand what the configuration file does or because it's a bug. Something like that would be useful. Sure. One other thing I was considering: one of the problems with the test suite is that there are some unit tests, but most of the functionality of dak relies on the database. So my idea was that for testing these, we would basically need to set up a clean database for every test to ensure that they don't influence each other, so we would need to somehow automate setting up dak for this. And I think this would certainly also help other people that want to try it out, or an eventual Debian package. So, I've always used reprepro whenever I need to do this kind of stuff, and I haven't really understood why I'd ever need dak rather than reprepro for doing the equivalent, right? Well, I'm not sure it has all the features we have, for example. Probably not. And I'm not sure how well it handles very large repositories. The problem with apt-ftparchive, for example, was that it didn't scale that well. That is crap, yeah. But I've been very impressed with reprepro: it does a pretty good job of stopping you doing stupid things which shouldn't be allowed, and it's relatively easy to configure and to arrange pools between different things, updates between different things, and processing incoming for different things. So I don't know, it does the job for me, so I've never really looked at this because I didn't need to. And it's quite well documented as well, which is also useful. Yes, that's one of our problems.
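The per-test clean database idea mentioned above could look something like this. dak uses PostgreSQL, where each test would get a throwaway database via createdb/dropdb; this sketch uses an in-memory sqlite3 database purely to illustrate the isolation pattern, and the schema and helper names are invented:

```python
import sqlite3
from contextlib import contextmanager

# toy schema standing in for dak's real (PostgreSQL) schema
SCHEMA = "CREATE TABLE suites (name TEXT PRIMARY KEY)"

@contextmanager
def fresh_database():
    """Give each test its own empty database so tests cannot
    influence each other.  With PostgreSQL this would be a
    createdb/dropdb pair instead of an in-memory connection."""
    db = sqlite3.connect(":memory:")
    db.execute(SCHEMA)
    try:
        yield db
    finally:
        db.close()

def count_suites(db):
    return db.execute("SELECT count(*) FROM suites").fetchone()[0]

# every test starts from a clean slate:
with fresh_database() as db:
    db.execute("INSERT INTO suites VALUES ('unstable')")
    first = count_suites(db)

with fresh_database() as db:
    second = count_suites(db)   # previous test left no state behind
```

A test-suite runner (pytest fixtures, for instance) would wrap each test case in exactly this kind of setup/teardown, which is also the same automation an installer or Debian package would need.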
The other thing is that we basically need integration into the bug tracking system and such things, and I don't think reprepro has these. Actually, it has a quite primitive interface to build this kind of thing. It does have one, but it's not very smart. But again, for a lot of people, if you need to set up a repository for a purpose, it doesn't have to be that complicated; those tools work quite well. I don't know, and I guess it would be nice if we didn't have to maintain two different repository management systems: one of which is really complicated and nobody understands, and another one which is not quite capable of doing the real job and everybody else uses. Yes, but one of the other things you have to keep in mind that differs between these systems is that reprepro, I think, has not many external dependencies, while for dak you need quite a lot of other things, like PostgreSQL with the debversion data type, and lots more. Yeah, in my case I'm running dak because I have a buildd running for two different architectures, building on three different suites, for all of CRAN, which is our archive network. So I have probably 5,000 packages for each suite, which is a little bit beyond what I thought reprepro could do. Yeah, exactly, a question of scale. So it would be interesting to know whether that would in fact work, or if there are important parts missing. What people want to know is: at what point is this tool not suitable anymore, so that I should look at the other tool? What are the limits? I don't know; I've never hit the limits of a reprepro archive, but then I guess my biggest one is the Emdebian stuff, which is 3,000 packages. I'm not sure either. I personally use reprepro as well for some small repositories. I've been impressed with the responsiveness when I complain about stuff as well, which is always nice in a package: stuff actually gets fixed.
So I don't want to say that this is not useful, but I just try to understand the range of application of these various tools, and I'm not sure many people know. I am not sure either. But you could try it out with the official Debian archive with all architectures and suites; I'm not sure how well reprepro would cope with that. Okay, then there was a question about source-only uploads, which Jörg was already not so happy about. Yeah, he's against them. So basically we don't want that as a policy decision. From the technical side, the problem is basically building the architecture-independent packages, because these are currently always taken from the maintainer uploads. Are there any further questions or comments? So, one thing we're looking at doing because of the cross-compiler work is having dependencies between architectures, which is something we've never had before; the dependency set was closed within an architecture. That was for the cross-built toolchains, right? Yeah, because toolchains depend on the libc from another architecture, or they could be made to. And I guess, actually, maybe that's a britney thing, isn't it? It's not actually something that dak has to worry about. Yes, that's mostly for britney, so testing migration. dak doesn't really care about dependencies, besides checking that python-apt can parse these fields. So they have to be syntactically valid; everything else we don't care about. And of course it's also a policy decision whether you want cross-architecture dependencies. Yeah, I just wanted to check that we didn't need to change anything here to make that work. And as you say, it just needs to be syntactically valid, which is already the case. So that's in wheezy, in fact; even though we're not planning to use this yet, we've made it possible in wheezy so we can try. Yeah, personally I think it would be okay, but I'm only an FTP assistant, so you can't rely on that.
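The "syntactically valid is enough" rule from above can be illustrated with a toy check. dak's real check goes through python-apt's parser; the regex below is only a rough stand-in of my own to make the point that only the shape of the field is verified, never whether the dependencies are satisfiable:

```python
import re

# Rough sketch of a dependency-field syntax check.  It approximates
# the "pkg", "pkg:arch", "pkg (op version)" shapes; cross-architecture
# qualifiers like "libc6:amd64" are syntactically fine, so dak needs
# no change for them.
ENTRY = re.compile(
    r"^[a-z0-9][a-z0-9+.-]+"              # package name
    r"(:[a-z0-9-]+)?"                     # optional architecture qualifier
    r"(\s*\(\s*(<<|<=|=|>=|>>)\s*[^)]+\))?$")  # optional version restriction

def syntactically_valid(depends_field):
    """Check the shape of a Depends-style field, nothing more."""
    for alt_group in depends_field.split(","):
        for alternative in alt_group.split("|"):
            if not ENTRY.match(alternative.strip()):
                return False
    return True

ok = syntactically_valid("libc6 (>= 2.13), gcc-cross:amd64 | gcc")
bad = syntactically_valid("libc6 (2.13)")   # missing relation operator
```

Whether the named packages exist, or whether the dependency graph closes, is deliberately not checked here; in Debian that is britney's job during testing migration.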
And, well, of course britney would need to be adjusted first. There was a question about any plans towards security.debian.org being able to use upstream tarballs from ftp.debian.org instead of its own. Well, Jörg said: merging the archives. But I'm not sure when we will be able to merge security.debian.org, because of the secrecy factor there. Basically, for embargoed packages, we don't want people to know that there is an embargoed package, but there's the read-only version of projectb on ries, which every Debian developer can access, and it has a changelog entry and lots of other interesting things, so that is a problem for this. Jörg says: in a decade. The other thing we could do is maybe in some way teach security.debian.org which upstream tarballs exist in the official archive, and only ensure that if an upload goes to security.debian.org, it matches the hashes. Are there any other questions? So, I missed the beginning of this: is this now packaged, so you can just apt-get install dak? The problem is that you have to set the database up, and we haven't automated that. And there's no package yet. There isn't, right? Are you planning to? Maybe. Okay, because I think one of the things we've done badly over the years is that, although all our infrastructure uses free software, it's been very hard to replicate. All the distros do this, and it shouldn't really be hard to set up an archive with a buildd. Everybody who makes a derivative distro or uses Debian in a commercial project has exactly the same problem: they usually want their own repo control, which means they need to set up all that infrastructure. And we've made it far too hard. It's getting better with this work, but I think packaging things and being able to initialize them is really important.
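The "matches the hashes" idea for security.debian.org mentioned above could be as simple as comparing a checksum of the uploaded orig tarball against the one recorded for the official archive. A sketch with hashlib; the function names and the way the known hashes are obtained are invented for illustration:

```python
import hashlib
import os
import tempfile

def file_sha256(path):
    """Stream a file through SHA-256 and return the hex digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def tarball_matches_main_archive(path, known_hashes):
    """known_hashes maps orig tarball name -> sha256 as recorded in
    the official archive; an upload reusing a tarball would only be
    accepted if the hashes agree."""
    return known_hashes.get(os.path.basename(path)) == file_sha256(path)

# demo: pretend the official archive recorded this tarball's hash
tmp = tempfile.NamedTemporaryFile(prefix="hello_1.0.orig.",
                                  suffix=".tar.gz", delete=False)
tmp.write(b"dummy tarball contents")
tmp.close()

known = {os.path.basename(tmp.name): file_sha256(tmp.name)}
same = tarball_matches_main_archive(tmp.name, known)
tampered = tarball_matches_main_archive(
    tmp.name, {os.path.basename(tmp.name): "0" * 64})
```

This keeps the two archives separate while still guaranteeing that a security upload cannot silently ship a different upstream tarball under the same name.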
I think it would also be nice to get other people to use dak, because if more people use it, maybe we get new contributors. But before we can ship a package, it should at least be easier to set up; basically, the package maintainer scripts should handle the largest part of the setup, at least for common installations. You're quite right. And the same problem exists with buildd: there isn't a method for initializing a new database from nothing, because we always have one, and it was done once ten years ago. The current version of dak, at least, ships the database schema; older versions didn't have that, apparently, so you basically needed a running dak setup to be able to even set up another installation. Exactly, that sort of problem we've dealt with very badly, and it makes everybody's lives difficult. One of the things I'm actually prepared to work on is making that easy. So I have slowly been working on making it easier to set up a buildd. At the moment I'm doing that with reprepro and rebuildd, because they're simple. But it will be better when the sbuild database work is finished, because I think that's a lot more tractable than wanna-build, but should ultimately have the same functionality. That's the plan. So it will be nice when all of that comes together and it gets to be really easy to just make a new repo that builds stuff. Sure. Let's see if there's anything new on IRC. Some comments about security.debian.org and the initial setup, but I think we covered all of that already. So I think that's it. Well, it was nice that at least some people were on IRC and that a few more people appeared here as well. Okay, thanks to everyone.