So, the FTP team is here to talk to you: Alexander Reichle-Schmehl and Mark Hymers.

Hello. Yeah, we're going to talk to you about a couple of things today. We're going to talk about the roles of the FTP team within Debian, then Alex is going to talk about the NEW queue, and then I'm going to talk about dak.

We'll start by introducing the FTP team as a whole. There are two groups of people, similar to the release team and various other teams. We have the FTP masters: that's myself, Mark Hymers, Joerg Jaspert and Torsten Werner. And as FTP assistants we have Alexander Reichle-Schmehl, Mike O'Connor, Ansgar (I'm sorry, Ansgar, I can't pronounce your surname) and Luca, whose surname I also can't pronounce. As you can see, they let the only English person pronounce all of the German names.

Alex is going to start by taking you through a lot of the things that we see in the NEW queue, but there are other roles for the FTP team too. Our main role is to make sure that the archive software keeps running: updates come in, they get processed, they get checked against the GPG keyring, they get sanity checked, and then they get placed in the right location. The other main role of the team is to keep the archive legal. That used to simply mean looking at copyright licences, making sure everything was notated properly and that we didn't have anything really dodgy in there. That is becoming slightly more difficult recently, with issues to do with trademarks and patents; there are going to be various discussions at DebConf about that, and obviously with Zack's work and the patent policy work that the SFLC did recently, that's all coming into the fold. The other things are supporting teams that depend on the archive (the release and security teams are the obvious ones) and, finally, updating the archive software as and when we need to.

What do we do? Well, as I said, we do the NEW queue; we decruft the archive, so some removals are automatic and some aren't; we deal with override changes; and we deal with archive maintenance. We don't (and this is sometimes a misunderstanding) control the contents of testing: the gentlemen sitting in front of me, the release team, control the contents of testing, so when it goes wrong, blame them. We don't deal with the build daemons, and we don't deal with binNMUs; that's the buildd team, and I can't see Philipp Kern. Where is he? He's missing, but that's all effectively his fault, so you need to complain to him about that. Although we do provide mechanisms which help with some of these areas. Now, for shameless self-praise, I'm going to hand over to Alex, who enjoys that sort of thing.

Well, thanks for the introduction. I'll start by showing you the current state of the NEW queue, which is nearly invisible.
I think, as we are speaking, we have 20 packages in the NEW queue, and that's just because I wasn't able to reject or accept some packages over the wireless LAN. As you can clearly see, here is a small line. When squeeze was frozen we had continuous uploads; I think at that point someone complained that the NEW queue was growing so huge, so we started to accept some packages. Until, I think, about here, when we accepted a PostgreSQL version which kind of broke the archive, and the release team asked us not to accept any new packages anymore. So again we have the usual queue growth during the freeze, and here, at that line, squeeze was released. You can see it took us two months to recover from the release and then just three weeks to clean the archive. If you sum up the two spikes, we had about 900 new packages filling up the NEW queue during the freeze, and we pretty much managed to accept everything, thanks also to the other people sitting there, especially Ansgar, who joined the team quite recently and processed a huge amount of packages, if I am not mistaken. So: a nearly invisible NEW queue.

But nonetheless, packages still end up in NEW and need to be processed. Oh, I forgot something: sometimes the release team even tells us we are processing packages too fast and they can't manage all their transitions because we keep adding new packages. That's pretty uncommon for us; we are not used to hearing that kind of compliment.

Now, after the shameless self-praise: what is the NEW queue about? Well, as we've mentioned already, one point of the NEW queue is to keep the Debian archive legal. As a side effect we also check whether packages meet the Debian Free Software Guidelines (some of you might be familiar with that document) and whether they comply with Debian Policy. Some of you might wonder if there's a difference between complying with the DFSG and being legally distributable, but those are in fact two different points, because a piece of software can be free software but still not be distributable. Think, for example, of something combining GPL-licensed code and OpenSSL-licensed code: you can distribute the source, but not the binary, if I'm not mistaken. And while we are at it, we also check a little whether it's actually sensible to have that package in the Debian archive. So in theory there shouldn't be any need for processing the NEW queue, because we all agree that these are sensible things to comply with, don't we?

Okay, so let me show you some examples from recent rejects, over the last two months or so. A common error: not all licences are mentioned in your copyright file. I think there's hardly any package in the archive which consists of only one licence; that's the minority of our packages. So if you list only one licence in your copyright file, you had better check it again. In the devscripts package there's a small helper called licensecheck, which can help you list the different licences and copyright holders of source code. I think that's the most common error, and the most common reason for a reject from the NEW queue.
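To make that concrete, here is a rough sketch (not part of dak or devscripts; the output parsing assumes licensecheck's usual "path: licence" line format) of wrapping licensecheck to summarise what it finds in a source tree:

    # A sketch, not dak code: run licensecheck (from devscripts) over a
    # source tree and count the distinct licences it reports, so that a
    # debian/copyright listing a single licence stands out as suspect.
    import subprocess
    from collections import Counter

    def licence_summary(srcdir):
        # licensecheck -r recurses; output lines look like "path: LICENCE"
        out = subprocess.run(["licensecheck", "-r", srcdir],
                             capture_output=True, text=True,
                             check=True).stdout
        counts = Counter()
        for line in out.splitlines():
            _path, _, licence = line.partition(": ")
            if licence:
                counts[licence.strip()] += 1
        return counts

    if __name__ == "__main__":
        for licence, n in licence_summary(".").most_common():
            print("%5d  %s" % (n, licence))

A tree whose summary shows five different licences but whose debian/copyright lists only one is exactly the kind of upload that gets rejected.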
So, next example: everything must meet the Debian Free Software Guidelines. I didn't invent this example; it's an actual package I found in the NEW queue, I think two weeks ago, and it contains something copyrighted by something called the Microsoft Corporation. I'm pretty sure that doesn't meet the Debian Free Software Guidelines.

Coming to our next example: stuff missing its source code. Often we find some kind of documentation, PDF files, which was not created directly by writing a PDF file but from some other source, usually TeX, sometimes OpenOffice or LibreOffice files. Sadly, we often don't find the source for it, and as the DFSG say, we need the source code, so that's usually a reject. With the introduction of the source format 3.0 you can also ship the source code of some documentation in the debian.tar.gz, well, the Debian tar file. I think we already had one case where there was a piece of PDF documentation in a package and the source was not in the orig tarball but in the Debian tarball. Everything was built from source, so actually it was fine, but some FTP team member didn't notice that the source code for the documentation was shipped outside the orig tarball and rejected the package. So if you do this kind of stunt, shipping source code outside the orig tarball, you should mention it in the copyright file.

Another common example is that you have some kind of "source", in quotation marks, which is actually not the preferred form of modification. For example, JavaScript libraries are often minimised, stripped of comments and everything else, to get a minimal version. If you then look at these files, you have just one single line containing all the code. In theory that is source code, yes, but it's not editable: you just can't go in there and change it, fix it, or whatever. It's simply not the preferred form of modification. Even if a JavaScript library is not licensed under the terms of the GNU General Public License, we still want to have the thing you can actually edit. That has always been the interpretation of the Debian Free Software Guidelines, and it always will be, because we still want to be able to modify things. With the increasing number of web applications being packaged for Debian, that's getting to be quite a common error too.

Then there are some corner cases where you find some kind of file and actually no one knows what kind of file it is. In this case, it's some kind of data: some kind of numbers. I have no idea what the origin of these numbers is, what their meaning is, or how they are usually edited. We see that kind of thing quite often, I would say; not always in that form, but also in other forms where we get data files, sometimes ones you can't even view in a text editor. What we usually do then is ask the maintainer: what the hell is that? So instead of rejecting it at once, we just ask: what is it, and how would I modify it? If we are satisfied with the answer, we accept the package; if not, well, we can still reject it. If you want to take a shortcut, mention it in your copyright file. You might not know it, but the FTP team actually reads your copyright file, and if you have a hint added there (what is this file, how do I edit it, how do I edit it with the software available in main) you can spare us a round of questions and we can accept your package right away.
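The "everything on one line" pattern is easy to spot mechanically. As a toy illustration (this is not the team's actual tooling, and the threshold is invented), here is a script that flags files whose longest line is suspiciously long, which is what minified JavaScript usually looks like:

    # A toy check in the spirit of what we eyeball by hand: flag files
    # whose longest line is huge, the usual shape of minified JavaScript.
    import os
    import sys

    LIMIT = 512  # bytes; an arbitrary "not the preferred form" threshold

    def suspicious(tree):
        for dirpath, _dirs, names in os.walk(tree):
            for name in names:
                path = os.path.join(dirpath, name)
                try:
                    with open(path, "rb") as f:
                        longest = max((len(line) for line in f), default=0)
                except OSError:
                    continue
                if longest > LIMIT:
                    yield path, longest

    for path, longest in suspicious(sys.argv[1] if len(sys.argv) > 1 else "."):
        print("%s: longest line is %d bytes" % (path, longest))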
We also have an FAQ document with frequently asked questions: why are some packages rejected, which licences you can mix with which and which you can't. It's all documented on our ftp-master site. With that, I hand over to Mark again, who will tell us something about dak.

Well, Alex was meant to talk about removals and decrufting, but I'll do that instead. Most packages get removed automatically through a mechanism called domination, which removes older versions in a suite once the maintainer uploads a new one, britney transitions one into testing, or so forth. There are corner cases where we can't do that automatically, and for those we have a tool called cruft-report. This gives us a hint as to what we should look at removing. The reason we don't automate it is that there are times when we want to leave old binaries around in certain cases. So: Ben Hutchings uploaded the 3.0 kernel yesterday; it came through NEW and it's gone into unstable. cruft-report is currently telling me that I really should be removing all of the 2.6.39 kernel binary images from unstable at this point, because it's looking at it and going: oh well, you've got a 3.0 source package now, so we don't need the 2.6 one anymore. This is why we don't automate it; I think people would get fairly unhappy if we threw out the kernel binary images at this point and broke all installs of unstable.

cruft-report also deals with a few slightly more difficult cases, such as source packages getting renamed and taking over binary packages. Here we often need to coordinate with the release team, because this can quite often have an impact on transitions between unstable and testing, and it may be that we need to hold on to older library versions slightly longer than we would otherwise like to. The final cases are binary packages built by multiple source packages (usually to do with a move, sometimes lack of coordination, sometimes other reasons) and newer versions in unstable, so for example cleaning out old experimental versions or testing and testing-proposed-updates versions. Although, as of earlier today, we've fixed it so that the release team can deal with the removals from testing-proposed-updates themselves and we no longer have to.
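For flavour, the core idea of domination is just "newest version wins" per package and architecture. A minimal sketch using python-apt's dpkg version comparison, with simplified tuples standing in for projectb rows:

    # A sketch of the domination idea, not dak's real code: within one
    # suite, only the newest version of each (package, architecture)
    # pair survives, by dpkg version ordering.
    import apt_pkg

    apt_pkg.init_system()

    def cruft(records):
        """records: iterable of (package, version, architecture) tuples
        describing the binaries currently in a suite."""
        newest = {}
        for pkg, ver, arch in records:
            key = (pkg, arch)
            if key not in newest or apt_pkg.version_compare(ver, newest[key]) > 0:
                newest[key] = ver
        # everything that is not the newest version of its pair is cruft
        return [(p, v, a) for p, v, a in records if newest[(p, a)] != v]

    print(cruft([("linux-image", "2.6.39-3", "amd64"),
                 ("linux-image", "3.0.0-1", "amd64")]))
    # -> [('linux-image', '2.6.39-3', 'amd64')]

Run on the kernel example above, this would happily report the 2.6.39 images as cruft, which is exactly why a human looks at cruft-report before anything gets removed.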
Explicit RM bug reports tend to be things such as: I no longer wish to maintain this, it's not interesting, it's been merged into another package. That's usually just filed through reportbug. Then there are binary package removals on certain architectures. Although we try to support all packages on all architectures, there are cases where, for example, the toolchain breaks; and when we then wish to migrate a package from unstable to testing, britney often won't consider it unless we remove the older binaries from unstable on the architectures where it no longer builds. Again, this is a partial removal which sometimes needs to be done manually.

Overrides. Thanks, Alex. The override mechanism is what actually writes the Section and Priority headers into the Packages and Sources files, rather than what's written in debian/control. When something comes in through NEW, dak copies the information from debian/control into projectb, the dak database, which I'll come to in a moment; but the information that's written into the Packages and Sources files is then taken from the database. To change it, therefore, we need bug reports against ftp.debian.org. This is partially to prevent things such as, I don't know, LibreOffice becoming Priority: required, which would break the bootstrap and break other things; you would start dragging in a large number of packages you really don't want. There probably are some questions about sections, in terms of whether they should be replaced with debtags and all those sorts of things in the longer term, but we keep adding new sections; people have been saying that about debtags since at least DebConf in Edinburgh, I think, and we're still using sections at this point. There is a list of override disparities available on the ftp-master website, although as the maintainer you'll get a really annoying email every time you upload if there's an override disparity, suggesting that you either correct the package or contact us to have the database updated.
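The principle is easy to show in miniature (the data here is made up; the real override table lives in projectb): what ends up in Packages is looked up from the override table, not copied from whatever the maintainer wrote in debian/control.

    # The override principle in miniature, with invented data.
    OVERRIDES = {
        # (package, component) -> (section, priority)
        ("dpkg", "main"): ("admin", "required"),
        ("libreoffice", "main"): ("editors", "optional"),
    }

    def packages_fields(pkg, component, control_fields):
        # whatever Section/Priority debian/control claimed is replaced
        # by the values from the override table
        section, priority = OVERRIDES[(pkg, component)]
        fields = dict(control_fields, Section=section, Priority=priority)
        return "".join("%s: %s\n" % kv for kv in fields.items())

    print(packages_fields("libreoffice", "main",
                          {"Package": "libreoffice",
                           "Version": "1:3.3.3-1",
                           # ignored, even if the upload says otherwise:
                           "Priority": "required"}))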
Okay, I'll get on to the bit that I was meant to be talking about: dak, the Debian Archive Kit. It was renamed from katie, which was the initial name of the Python version of dinstall written by James Troup and co many years ago, which in turn replaced the Perl dinstall script; that's where the terminology "dinstall" comes from. The terminology used by dak is pretty much the same as that used in Policy, except in one case, which does cause some confusion occasionally. A suite is something like oldstable, stable, testing. Policy queues are a fairly new invention in terms of being generic within dak: not just NEW, but things like p-u-new and o-p-u-new, things that the stable release managers or the testing release managers manage themselves as staging areas, so that packages don't land straight in a suite. That is usually used for preparing a stable release. The one area where, for historical reasons, the terminology differs is that we call main, contrib and non-free "components", and we forget that other people call them other things. Some documentation in Debian actually calls them sections, and that is really, really unhelpful, because we then get very confused with the actual sections such as admin, doc, debug, python and all the others. Priority everyone knows about. "Unchecked", which you'll hear referred to quite a lot, is simply a directory on franck.debian.org, our current ftp-master server, where we dump everything that hasn't been checked for sanity; it's where the queue daemon eventually puts all your files.

dak is very much something that has evolved. Well, it was designed originally, but you can actually see, if you look, that it was designed around the files that apt expects to see. If you look at the Release file format documentation (which is the apt source code), you will see that the database was designed to contain those fields, and the Python code was then written to generate a Release file from the database. It wasn't designed by asking: how do we want to represent a suite? How do we want to represent sections? How do we want to tie all this together? It was very much: here are the file formats, here's how we can quickly represent them in a database, here's how we can script it to bring it all together. Reverse engineering is sometimes necessary, especially in cases such as cruft-report. Torsten is the expert on cruft-report, domination and that sort of thing; we tend to leave those to him, because he has spent enough time looking at the code to roughly understand where it will break should he change it.

The test coverage is getting better, but it's still poor. There are probably about 20 test cases in dak; there probably need to be about 2,000 or 3,000, and this is one of the reasons we're quite cautious when patching it. The only major dak install is currently the Debian ftp-master one, and that is the one that matters, in many respects. We have a test machine called floater which we can give people access to; it has a copy of projectb, and you can play with your own branches and that sort of thing. We would very much welcome people who want to hack on dak and clean up the code base, and if anybody wants to write test harnesses for it, please see me immediately; in fact, stop the talk right now and I will be quite happy to come and talk to you about writing test harnesses for dak. There is going to be a BoF on Thursday at 10 o'clock, and I'm going to mention that at least three times between now and the end of the talk. We'd like to see anybody who has any Python and Postgres knowledge (or none) but who would be interested in coding, as I said, especially on tests. When Zack spoke earlier about test-driven development: this really is an area where dak's testing could do with improving.
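To illustrate "designed around the files apt expects": generating the top of a Release file is little more than printing database fields. A toy version, with an invented dictionary standing in for the (much more involved) projectb schema:

    # A toy Release header generator; the SUITE dict is invented.
    SUITE = {
        "Origin": "Debian",
        "Label": "Debian",
        "Suite": "unstable",
        "Codename": "sid",
        "Architectures": "amd64 armel i386",
        "Components": "main contrib non-free",
        "Description": "Debian Unstable - Not Released",
    }

    def release_header(fields):
        # apt's Release format is just "Field: value" lines
        return "".join("%s: %s\n" % kv for kv in fields.items())

    print(release_header(SUITE))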
This really is an area where test Debian testing could Could do with improving Yes, we're happy of everybody turned up the important parts of DAC that maintainers will see all the Scripts that I've listed here Process upload is what takes your upload it runs linty and on it if it contains source It checks your key is in the GPG key ring. It checks that it's not out of date. It Runs a whole number of sanity checks these sanity checks are not derived from first principles These sanity checks are things that have broken in the past Again, as I said that has evolved when something broke the archive Somebody added three lines of code in the middle of process upload usually in a random place to check for that particular error This means that process upload is quite inefficient quite complex But seems to do its job 99% of the time Process policy is what the release team and the stable release managers use to shift packages around between the pu new queues and all that sort of thing They write accept and reject files that picks them up and performs the appropriate action Process new is what we use to do a similar thing for the new queue It has extra little features such as extracting the Debian copyright files showing us it in a pager or that sort of thing Highlighting in bright red where it depends Packages dependent on which doesn't exist all those sort of useful things so that we don't have to do the work manually The override command is pretty self-explanatory control suite. However is less so This is how Brittany sets testing it says the DAC I would like these packages at these versions for these architectures where architecture may be source in Testing please and it just feeds it a however many tens of thousands line Lined long list once or twice a day That looks at it and goes that seems to be about right and Sets the entire of testing testing isn't updated from our point of view testing is reset every single time this resulted in one particularly poor instance in where we received a zero line testing file and Testing suddenly had no RC bugs Dominate is What I described earlier on dominates used to actually be tied in with actually exporting all the file lists But we split it out. This deals with all the cases to do with Getting all versions to go away and more importantly these days the reason that we have multiple arch all packages in a given suite Is because Torsten massively improved the algorithm to hold architecture all packages around for longer Those of you've been around a few years will remember that if you used to be on any architecture other than I386 and Somebody uploaded a fairly large package You would suddenly find it became uninstallable on your architecture because the architecture all package for the old version Had suddenly gone away These days we do our very best to hold on to the architecture all packages that are needed by those older versions Keep them in the packages files. 
generate-packages-sources and generate-releases are very generic. generate-packages-sources is the replacement for apt-ftparchive. It is quite a lot quicker and quite a lot more efficient, because it used to be that we had all the information in the database but thought it would be fun to scan all of the files again and dump them into a Berkeley DB cache using apt-ftparchive before writing the Packages file. Eventually we got round to writing the files out directly from the database, and apart from a few minor breakages to do with field orders, which Steve McIntyre can tell you about for the debian-cd software, it now seems to pretty much work. generate-releases generates the two types of Release file you'll find in there, with information about the origin, what the suite is, that sort of thing, and all the checksums.

The database is, for historical reasons, called projectb, and there is a DD-accessible live Postgres mirror, kindly set up by DSA on ries, which is a live streaming replica. It will sometimes be out of sync with the actual pool on ries by a little while; I think we now sync that every unchecked run (I'm looking at Joerg to tell me whether I'm right or not), so the files on disk should be just about in sync, but the database will always be the same as the one on franck. So if you want to do any work, that's the place to do it. The install is now really a shell script that runs the commands above in the right order most of the time, moves everything through from queue/unchecked, deals with the release team input, sends all the emails and then rsyncs everything out. This runs four times a day, at those times UTC; unchecked is processed every 15 minutes, except during the install.
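Going back to generate-releases for a second: the checksum part of a Release file boils down to hashing every index file under a dists/ directory. A sketch, not its real code:

    # Emit the MD5Sum/SHA1/SHA256 sections of a Release file for every
    # file under a dists/<suite> directory.
    import hashlib
    import os

    HEADERS = [("md5", "MD5Sum"), ("sha1", "SHA1"), ("sha256", "SHA256")]

    def checksum_sections(dist_dir):
        files = []
        for dirpath, _dirs, names in os.walk(dist_dir):
            for name in sorted(names):
                path = os.path.join(dirpath, name)
                with open(path, "rb") as f:
                    files.append((os.path.relpath(path, dist_dir), f.read()))
        lines = []
        for algo, header in HEADERS:
            lines.append(header + ":")
            for rel, data in files:
                digest = hashlib.new(algo, data).hexdigest()
                # Release lines are " <digest> <size> <path>"
                lines.append(" %s %8d %s" % (digest, len(data), rel))
        return "\n".join(lines)

    if __name__ == "__main__":
        print(checksum_sections("dists/unstable"))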
Recently in dak we've improved the internal code to try to reduce duplication. There's a lot more of that to do; anybody who is a real Python coder who looks at dak might wonder what we were smoking when we wrote it, but it is considerably better than it used to be. Torsten, as I said, did the improved arch:all version handling, and the removal of apt-ftparchive was handled by various people within the team. We also made the buildd queue generation considerably more flexible, which was useful because it allowed us to expose experimental, which we didn't use to expose. As part of that, incoming.debian.org is now actually a faked buildd queue; it doesn't really exist as a separate stage anymore. It also means that incoming.debian.org now actually satisfies all the source code requirements, because the orig tarball will always be there. Finally, the buildd auto-signing integration was done with Philipp Kern and the other buildd folks. That hadn't really come home to me until yesterday, when I had to go back and process NEW four times in one day to get the Linux 3.0 binary packages through, because they were coming straight off the buildds and straight into NEW because of the change in version number. I hadn't realised quite how much of an impact not having the buildd people in the middle, doing the tedious work of signing repeatedly, would have; it seems to have sped up the development cycle quite considerably.

As for current work: we're looking at xz support for binary packages. That is now written and waiting. Actually, it's no longer waiting for review, because I did that in the five minutes before this talk; I just didn't want to merge it, because I didn't want the first question in the talk to be "why is the archive no longer working?". So immediately after the talk we're going to stop the archive temporarily, merge the patch, run unchecked manually and hope that it still works. We'd also like to do something about multiple archive support. This would be helpful for things like a data archive, but it would also let us consider merging some of the multiple dak instances we have around the project, security and backports for example, into one place.
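On the xz work: a .deb is an ar archive whose members are debian-binary, control.tar.* and data.tar.*, so seeing which compression a package uses is just a matter of listing the members. A quick sketch:

    # List the data.tar member of a .deb to see its compression.
    import subprocess
    import sys

    def data_member(deb_path):
        # "ar t" prints the member names of an ar archive
        members = subprocess.run(["ar", "t", deb_path],
                                 capture_output=True, text=True,
                                 check=True).stdout.split()
        return next(m for m in members if m.startswith("data.tar"))

    if __name__ == "__main__":
        # prints e.g. "data.tar.gz" or "data.tar.xz"
        print(data_member(sys.argv[1]))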
It would also do away with a large amount of file duplication. We'd also like better queue tracking. Some of this is an internal team matter, but a lot of the cases where people say "I uploaded this and it didn't quite work" happen because we don't track the queues quite as well as we should, and it is possible for an orig.tar.gz, for instance, to disappear while something is waiting in a queue, at which point dak will get very upset and refuse to move it to the pool.

We've also been asked to throw away binaries, and Zack finally sent an email a couple of months back saying: yes, do it. The current plan is simply to throw away binaries that come in a changes file where source is also present. If you upload a set of binaries on their own, they will still go through fine, so a porter upload or one of the other corner-case uploads will work. But by default, if you're doing a source-plus-binary upload, we'll keep the arch:all binaries (throwing those away requires some buildd support, to work out how we build the arch:all packages), and any arch:any binary package we will just set aside after doing the lintian checks, and the buildds will build them from clean each time. Because of the buildd auto-signing, it's now quite easy to recognise which binary packages were built by buildds, so we've also had a bit of a chat about having reports on the website about which binary packages in the archive were built by non-buildd uploaders.
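The throw-away plan in miniature, assuming a parsed changes file as plain Python data (the field names here are invented for the sketch):

    # With source in the upload, only arch:all debs are kept for now;
    # binary-only (e.g. porter) uploads pass through untouched.
    def debs_to_keep(changes):
        """changes: {"has_source": bool,
                     "debs": [(filename, architecture), ...]}"""
        if not changes["has_source"]:
            return [name for name, _arch in changes["debs"]]
        return [name for name, arch in changes["debs"] if arch == "all"]

    upload = {"has_source": True,
              "debs": [("foo_1.0-1_amd64.deb", "amd64"),
                       ("foo-doc_1.0-1_all.deb", "all")]}
    print(debs_to_keep(upload))  # -> ['foo-doc_1.0-1_all.deb']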
On-demand experimental-type suites, and the PPA infrastructure: yes. One suggestion has been (and this came up before the PPA discussions, actually) that experimental is useful sometimes, and in fact we're about to make it more useful, because we've had a request from the release team that people be able to migrate packages from experimental into unstable without re-uploading. I've implemented the code for that, and again, just before this talk, I tested it on the test archive and it actually worked the second time. We're going to try rolling that out soon; if anyone has suggestions about how it should be interfaced with, I'd be glad to hear them. We're currently thinking of a mail gateway or something: send a GPG-signed mail saying "I'd like this to shift across". The mechanism doesn't really matter, but yes, a dcut-type commands file, be it via email or something else that's signed: I'd like to move the following source packages at these versions, with all their binaries, from experimental into unstable, without having to go through the buildd network again. You can imagine this being quite useful for staging transitions. But that got us thinking that having a single suite where all of this happens can be a bit difficult if multiple people are using it. So it might be an idea to let you create an experimental-glib-transition suite, do all of your work in there, get the buildds to build everything, and then shift the whole lot into unstable in one go. That might help immensely with complex transitions. Now, that requires a little bit of work, but if anybody's interested in working on it (and it seems to me to be related, in some generic way, to the PPA infrastructure), again, we'd be glad to talk about that at the BoF.

I'd really like to see dak in Debian again, mainly because I use dak at work to push out all of our local packages, and at the moment I'm using a really old version and upgrading it is going to be very annoying. So I would quite like dak to actually be in the archive and be installable and usable. It used to be there, but it wasn't really usable. Bug fixing: there are always bugs. Debug debs and the Debian smart upload server I'm going to skip over, because I'm going on a bit. Come to the BoF on Thursday; as I say, we'd really like to see people there. We'd like more contributors, be it people who want to work on the FTP team or people who just want to work on scripts, maybe not even in dak itself: things related to the archive, to try to make everything run more smoothly. And I'll finish off by asking if there are any questions.

Good, I've sent everyone to sleep. When's the BoF? Thursday at 10 o'clock, Neil; thank you for asking that. Oh, Zack's going to ask a question. Oh dear.

Actually, you did a very good job of destroying every single question I had with your last slide on future work, but I'm still curious. Sometimes we hit the problem that the dak machine is sort of a bottleneck; we have had that problem in the past. So I wonder whether you have considered, from an architectural point of view, whether dak must be a single big lock on the archive (the archive is a big lock in some sense), or whether it can be distributed in some way.

You say the dak lock: if we're talking about locking in terms of dealing with multiple things at once and people sitting around waiting, that actually doesn't occur very much anymore. The install itself is now usually in the region of 20 or 25 minutes, and since we've changed how we process things it's nowhere near as bad as it used to be, when the install would be three hours long and nothing could happen in that time.
We couldn't even process NEW during that time. We've made the locks a lot more granular, and we only take them for the short period we need to ensure database consistency. One suggestion that has been raised in the past is some form of dak daemon; the point of that would be that it would know what it was currently trying to do, and would only take locks for the few seconds it needed to make a consistent change to the archive. If you know of cases where dak is a bottleneck, rather than NEW being a bottleneck or something else being a bottleneck, let me know and we can certainly look at it. And Colin's going to ask me a question, I think.

So, in the past we've gone from dak, or dinstall, sorry, running once a day, down to twice a day, then four times a day. Do you have plans to increase the frequency further?

This gets asked occasionally. Technically we can do it, but there are a couple of things you've got to remember. First of all, the mirrors. Sledge is shaking his head, and he's one of the people who always shakes his head whenever we say maybe we could go to six times a day. Would somebody give Steve a microphone, because I'd really like him to explain why he always shakes his head when this is raised. I agree with him: the thing is, the mirror bandwidth is quite large, because don't forget you regenerate the dists tree for testing, unstable and experimental every time.

Yeah, absolutely. Sorry, Colin; just for anybody who couldn't hear, he was asking about what they do in Ubuntu. Steve?

Sure. The problem, of course, is exactly mirror bandwidth. I'm responsible for a couple of mirrors and it's painful. The other problem is the CD builds: we don't want to be generating too many. We're already halving it; we do a set of dailies twice a day instead of four times. We could cope with more, and again we just artificially limit it. The next question is: what do we gain from running any faster?

That was what I was going to ask. Do people think that the number of mirror pulses at the moment, with six hours between them, really affects them? Colin's got another comment.

Yes, thank you. They're obviously very different projects, but we did notice very clearly that having a mirror pulse every hour, as we do in Ubuntu, means that the development cycle tends to sync a little more to that. It means that you can try something out, upload it, have it built, and it will almost certainly be done and ready to upgrade to only about a quarter of the way into your work day. That kind of thing makes it possible to do a lot of things more quickly.

One of the things we have to remember (and I could be wrong; you'll correct me) is that I think Ubuntu has much more control of its mirror network than we do; we rely considerably on volunteer mirrors, don't we?

I don't know that that's true. It's true for our central mirrors, but perhaps not beyond those. I'd certainly be interested to see some numbers.

I think I'd be interested in some numbers too, and Ganneff would know better than I would how much of the sync is actually the dists tree and how much is the debs.
I suspect we're at the point where, at six hours, the influence of the dists tree is becoming quite high compared to that of the changes in the actual deb and source files, and that seems to me to be about the point at which you start saying that's probably enough of a trade-off.

Yeah, we have something like multiple hundreds of megabytes in the dists tree for every mirror run, and weasel could tell you exact numbers, but he's already complaining at us.

From the snapshot point of view, that's true. The other thing we have to remember is snapshot: we wouldn't want to upset them too badly, because they snapshot the dists tree every time we do an install (or they wouldn't have to, but they do), and obviously that imposes extra resources on them. Certainly, if we thought it was going to have a large effect on development, it might be worth an experiment of even just trying six times a day instead of four and seeing if it makes much difference.

And we do have a good number of mirrors that take a long time and are about finished when we are already starting the next run; they will really love us if we go more often.

There is one thing that has literally just sprung into my head; it's a very half-formed idea, and the fruit is going to come from over here to be thrown at me. We have traditionally made incoming not have a Packages file. What we could do would be to consider syncing it to one or two mirrors and generating a Packages file, even just using apt-ftparchive, so you could have an additional "this is what's coming in at this point" that developers could use. There are other workarounds, rather than going through the main mirror network, that could be used by people who wanted to see that, to help with development.

Let's discuss that outside. That sounds ominous. Does anyone else have any questions? Ah, Sledge.

Okay, fairly obvious question. I am, I guess, one of the consumers of your output, and we've been reporting bugs and whatever for the last few years. Are there any more, I guess, interface-changing changes coming that you can warn me about?

Define "interface". Well, things like the format of the Sources file or Packages file, that kind of thing. Not that I'm aware of; the big change was when we cut over to no longer using apt-ftparchive. I realise this isn't much comfort, but given these are RFC 822-style files, we hoped that none of the parsers would care what the field ordering was. That was probably a little bit stupid in retrospect; we tried to keep it as close as we could to the original, but we made mistakes, and I'll hold my hands up to that. So hopefully that's been fixed. In terms of interfaces, one thing we don't guarantee, if I can just say so, is the projectb schema. We don't want to guarantee that we will keep that the same, the reason being that it's bloody horrific and we would really quite like to clean it up considerably. I think that once we've cleaned it up, we should look at possibly providing some stability.
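On the field-ordering point: Packages and Sources stanzas are RFC 822-style, and a well-behaved consumer looks fields up by name rather than by position, which python-debian makes trivial:

    # Field order in the stanza is irrelevant to lookups by name.
    from debian import deb822

    lines = ["Package: hello",
             "Priority: optional",
             "Version: 2.7-1",
             "Section: devel"]

    pkg = deb822.Deb822(lines)
    print(pkg["Version"], pkg["Section"])  # -> 2.7-1 devel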
Sorry, Ganneff. Oh, sorry; Joerg just asked me to repeat that, about Contents. We are also thinking about changes to the Contents files, but probably those will be done by adding new files and slowly removing the old ones: getting them into some more usable format, and also adding source contents and such. I think that is a case where we would keep generating the old files but produce new files with different names, and then gradually, hopefully over a couple of release cycles, cycle out the old file types.

Hello. I wonder if you have considered support for partial architectures; we see architectures going out and in.

Have you and Andy teamed up? Because he grabbed me about that immediately before the talk. Partial architectures: yes, Andy has asked to talk to me about that. I can see the release team staring at me about it, because britney would have to cope with it. And I'm hoping Colin's about to tell me there's no need for it... all right, okay. Partial architectures are things like, in the past, ppc64 and sparc64 have been examples. I've never been entirely convinced it was worth not just building those as full architectures, but Andy did come up with an example beforehand that might make it worthwhile, and I've forgotten what it was. The idea is that you could build a limited set of packages, optimised in that particular way, because you don't need the whole of the archive. I've been told five minutes, so I'll probably just take Colin's question and then finish.

Can I just say one other thing about that? I think it is going to become an issue, especially with multi-arch now becoming production-ready: things like i386 or i686 or i386-with-MMX or whatever, where we might want certain libraries to be available in optimised forms. That might be the way we end up doing it.

It can also be used for getting rid of the current sparc port and getting sparc64, for example, transitioned in without having to add a completely new architecture and remove the old one.

Yeah. But certainly, yes, partial architectures we have considered. dak actually cares very little about the completeness of a component; in fact, when I say very little, I mean not at all. britney, on the other hand, cares very much about the set closure over main, but we don't.
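What "set closure over main" means, as a toy checker (nothing to do with britney's real code, which also handles versions, alternatives and Provides): every dependency of a package in main must itself be satisfiable from main.

    # Report (package, dependency) pairs that escape main.
    def closure_violations(main_pkgs, depends):
        """main_pkgs: set of package names in main;
        depends: {package: set of depended-on package names}."""
        return {(pkg, dep)
                for pkg in main_pkgs
                for dep in depends.get(pkg, set())
                if dep not in main_pkgs}

    print(closure_violations({"foo", "libbar"},
                             {"foo": {"libbar", "libbaz-nonfree"}}))
    # -> {('foo', 'libbaz-nonfree')}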
Colin, we'll finish with you.

So, you may not have a great deal of sympathy for this, but Sledge asking about interface changes reminded me, in conjunction with your comment about support for data.tar.xz: I'm one of the poor sods running a downstream distribution that occasionally needs to keep up with new facilities that Debian introduces; we're about two weeks behind you, I think, on data.tar.xz. Is there anywhere something that resembles a changelog of new interface features that dak supports?

No, and the git log is not particularly helpful because of the noisiness. This has bothered me for some time, and I'm wondering if we should coordinate slightly better with the derivatives on it, and also within the project as a whole, because data.tar.xz has been around for a while now, but there were bits missing in the stable toolchain that had to be hacked around, or introduced in a point release, so that the feature could actually be used. I wonder if this is something we should look at coordinating slightly better, not just for the derivatives but for Debian as a whole.

Right, Debian as a whole. And I honestly don't know what derivatives other than Ubuntu do. I know that many of them use dak in some way, but I don't know whether they're keeping up with your git tree or whether they're doing all their own stuff. I'm sure we can't be the only people with this.

Yeah, I think it's another example where cross-distribution coordination, but also coordination internally within Debian, would help. An example that was brought to my attention the other day is that debootstrap, I believe, will only cope with data.tar.gz, so we have to be careful not to end up with data.tar.bz2 or data.tar.xz in packages that debootstrap needs to deal with, or we need to change that assumption.

Absolutely. But again, that's possibly an example of something we've missed because we haven't coordinated properly. Ganneff wants to say something, and I think I've got to finish. No, he's finished. Oh, Sledge is back again.

Yeah, I'll finish with a couple of last things. In terms of mirroring, and this is a really silly question, I could check anyway: are you generating Packages.gz and so on as rsyncable?

Yes. Okay, lovely.

And finally: are you ready for us to start doing armhf, like, very soon, within the next month?

I sent an email about armhf when we met, what was it, three or four months ago. Yes: as soon as you're ready to bootstrap it, give me a shout. We'll add the architecture to unstable, dump in the initial packages, and go through the usual architecture bootstrapping procedure, which is even written down somewhere; just don't ask me where.

Awesome, thanks.

That's it, I think. Thank you.