All right then. So, Anthony — AJ, the FTP master, whatever. The idea of this BoF is not too much lecturing, but more, ideally, to get people to the point where they can say, oh, I want this dak feature, I have some vague idea how to implement it — and get it started and send it back to you as something that actually has a chance of being accepted. So the first thing I thought we might do is just see what sort of features people are actually most interested in adding to dak, or having dak be able to do — whether that be separate archives alongside ftp-master that do similar things, or extensions to dak to support new suites, or other sorts of things. Everyone here is familiar with what dak is. Is anyone here not a developer? Not a DD? Yes! So I imagine you're probably all aware that you can use "dak ls" and a couple of similar commands on merkel if you're a DD. And I presume everyone here has uploaded a package. What's that? I'm not going to explain it to you. Okay. Are there any basic questions we should get out of the way right at the start — how dak works, how the queues work, what the general structure is? So, what features do people want? Automatic processing of byhand uploads — d-i images, that sort of thing. There's actually code for that, which Colin Watson I think wrote for Ubuntu, but it isn't active here. Every time I leave the room, my chair's taken. You didn't get a chair last time. Well, I want you on one chair. Too bad — it was an unoccupied chair and it just came to the table. We know where to get more, Steve. They're just about to install them, I'm sure. So the last thing is: we have the issue that if a package goes through NEW, it won't go through p-u NEW afterwards. Right. And there's a similar problem with the security updates.
Yeah, I mean, all this queue sorting — let's call it queue sorting. Let's not call it quicksort. Do you have to bug somebody to process it after you're done? Yeah — and I've been working on that code, because it's not pretty. Well, not you anymore, but you know what I mean. Somebody has to poke somebody. There's also a patch somewhere which could go in, which handles getting packages into NEW or some other queue that needs review when library symbols change — basically, if the library's symbols change, it needs another look. Unfortunately, that's still pending. What's the patch? Yes, there is a patch, and I wrote it, but it's not published yet — it's only on my laptop. Well, not my laptop; it's on my own machine, but still not published. This is getting less and less published. Okay. Can we file this as a bug report? That's a reasonable place to keep it until it gets onto the machine. So — I don't know if we want to sidetrack into this too much, but does that particular patch include some method for a developer to override the dak check if they want to? If they say: yes, these symbols changed, but it's because upstream didn't export them as part of the API, so let it go anyway — is there an option to override that? No, but that can be added. I would like to see that in place before it gets deployed. I would like to see it in place before it gets deployed too. The last version of the patch I've seen just looked at the old and the new symbols and decided based on that. What does it consider the old symbols? Does it just look at the .deb in the archive, or is it stored somewhere else? In the archive.
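The check being discussed — comparing the symbols exported by the uploaded library against those recorded for the version in the archive, with a maintainer override — might look something like this minimal sketch. The function names and the idea of passing symbol lists directly are assumptions for illustration, not dak's actual code:

```python
# Sketch: flag a library upload whose exported symbols changed,
# unless the maintainer supplied an explicit override.
# Function names and data shapes are hypothetical, not dak's API.

def compare_symbols(old_symbols, new_symbols):
    """Return (added, removed) symbol sets between two versions."""
    old, new = set(old_symbols), set(new_symbols)
    return new - old, old - new

def needs_review(old_symbols, new_symbols, override=False):
    """Hold the package for manual review if symbols were added or
    removed, unless the uploader explicitly overrode the check."""
    if override:
        return False
    added, removed = compare_symbols(old_symbols, new_symbols)
    return bool(added or removed)
```

In practice the symbol lists would come from something like `objdump -T` on the shared objects of the old .deb in the archive and the new one in the upload.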
In the archive, yes. And is it only activated for unstable uploads? It should be. It doesn't just put the package into NEW — someone can then decide what to do. So I think we could just activate it for all suites, why not? Is there also a patch for examine-package, to report that information — an explanation of why it's in NEW? We had a patch for the NEW queue check that shows what is going on. Another patch I would like to see applied to dak is the mail whitelist patch. What? Mail whitelist — mail is only sent to people on a whitelist. Every archive running dak except the Debian one wants some mail whitelist patch, so that if someone uploads a package, people who aren't involved don't get mail about it — so you can't spam them. Is that a configuration option? Yes — if the configuration option is not set, it just behaves like now. It would just be nice to have that patch in the original dak. Is that the default in, say, the unofficial Debian AMD64 archive? Sorry — we're not saying it should be enabled in the Debian archive; we're just saying the patch should be in the dak source. It shouldn't be activated, but the patch should be there. This new-symbols check could also become a tool today — something installable by developers, which would report on new symbols. I think that would be very useful, so that people can actually check before they upload. It would be a very small script; it could go in devscripts, for example. I still think it should be part of debhelper. But I think you should be allowed to build a .deb that has a broken ABI — and upload it, even. Well, that would be nice. Actually, we currently have these proposals for tighter library dependencies, which would include tracking ABI changes, so I think we have a good path to that. You don't necessarily know what the old list of symbols was at build time. Sorry, I didn't catch that.
You don't necessarily know the list of symbols from the previous package at build time. My notion is that it would be a manifest included as part of the Debian metadata in the package, and if there's a discrepancy with this manifest... This is getting off-topic for this BoF; let's talk about it later. What about libraries with just a new SONAME — letting them pass faster, not necessarily through NEW? I'd be happy with that. If we automatically also inject the empty transitional packages... I think the check only sees the package names in the control file and forgets all the other files — we don't have the file lists in projectb. But a check on the contents of packages would be a good option. And what about the changed files? Related, but perhaps slightly simpler, is the idea of automatically letting through the slightly-different kernel updates. We've got the -5 to -6 ABI change, which I think was going to be suggested in the wacky-ideas BoF. So I guess the question there would be working out how to set a policy for which packages are similar enough, and under what circumstances they're similar enough, to override something like that. We don't really have a feel for what that would be. Maybe a regular expression. For shared libraries in general, I mean, you're changing the number at the end, but you also want to check that the library mostly stays the same otherwise. One interim solution might be modifying the sorting that's used for NEW processing — sort source packages whose name already exists in the archive above all the others. I actually submitted a patch for that. That would mean that if I have a lot of packages in NEW, I go through all of them and just accept those that are only SONAME changes or renames — it doesn't matter much whether they're sorted above or not.
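The queue-sorting tweak just mentioned — floating NEW entries whose source package already exists in the archive above the genuinely new ones — is a one-liner in spirit. The data shapes here are hypothetical; dak's real queue objects differ:

```python
# Sketch: sort the NEW queue so that uploads of source packages that
# already exist in the archive (likely SONAME changes or renames)
# come first. Names and tuple layout are hypothetical.

def sort_new_queue(entries, known_sources):
    """entries: list of (source, version) tuples awaiting NEW review.
    known_sources: set of source package names already in the archive."""
    return sorted(entries,
                  key=lambda e: (e[0] not in known_sources, e[0]))
```

Known sources sort first (False < True), alphabetically within each group.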
I don't know what you're trying to prevent here, but if it's unexpected SONAME changes, you could only accept the name change if the changelog says "changed the SONAME" — put a magic phrase in the changelog which passes it through NEW. That would prevent a maintainer from unexpectedly rebuilding a library with a SONAME change and having it accepted: those would be blocked, but maintainers who actually chose the SONAME change would go ahead. The point of the proposal is that you shouldn't get a package accepted just by accidentally building it with a new SONAME. I think keying on the contents of changelogs is too fragile, though; there are bad cases. And you're talking about changing the package name, but I think it's much more common to change the SONAME without changing the package name. Yes, and that should be caught automatically. Yeah — but as an addendum, can we reject packages that change their package names for no reason? Don't throw those into NEW, just throw them out. How about making it a reject? One feature I talked about in the release team meeting, but didn't receive much feedback on, is the possibility for the release team to block certain packages from being uploaded to unstable — for example, during a big transition going to testing, to stop some library, or any package, from breaking it: block them. Yeah — you were saying that you want to implement a library freeze for lenny, a freeze of all libraries, and you said that should be implemented at the dak level. Have you talked to the ftpmasters about that? No, we haven't. Those are two different questions, though. In theory, for the release stuff, the release managers only need to touch testing; I think we're pretty much limited to that so far. You don't actually have any control over what goes into unstable.
You can suggest people not upload, but you only actually have fairly complete control over what goes into testing. So blocking uploads to unstable would be something fairly new, which I would consider just a power grab if it only came from the release team. Actually, people say that's a sensible sort of thing to do, because unstable is kind of the playground where people get to upload whatever they want and screw stuff up. I can imagine it — I follow those discussions fairly regularly, and sometimes you see: oh my god, this upload just set us back a week again; we were almost there. And I think it makes sense to say, okay, let's freeze unstable now for two days so we can finalize the last two things and push this huge transition through without being bothered by unexpected stuff. But there's an argument to be made for just using experimental for things like that in the last month or two before the release. The question is: are the maintainers doing the uploads paying attention to what people are telling them about what should or shouldn't be uploaded? If they're not paying attention and they get blocked, then suddenly they start paying attention — and they can upload to experimental, prove it works, and then, okay, we'll unblock it. Whenever there are such blocks, the message should always say: please use experimental for now; the block will be removed in a few days or so. So, are we only talking about the very end of the freeze, or about small transitions during the cycle? We have two different topics. One is the big transitions, where we just say: oh please, for two days, none of these packages, please, please. That's the one thing.
That's basically the big toolchain transitions. And what do you mean by "these packages" — ones that depend on others? Ones that are tied to a significant library transition, usually. Well, how do you want to specify them? I'd say we'd extract the list based on britney and... oh, whatever. Okay, that gives a list. And what's the second part? The other thing — though I'm not so sure it's really about blocks — is that at the end of the release cycle we would like to, or are going to, do a library freeze. Even if you say you do a library freeze, my current idea is to just do it in the freeze file. If a maintainer uploads a SONAME change anyway, I hope to be informed by the FTP team in time — and otherwise, too bad for that maintainer. Well, if they don't update the shlibs, it's not going to go into testing with that problem anyway. But a SONAME change could be even worse. I currently expect it's enough to announce when we freeze, because maintainers — even maintainers who don't follow debian-devel-announce — usually notice somehow if the package is frozen. Yes, as Steve knows, it's not always so; I know some special exceptions to that. So as of now I wouldn't start asking for dak changes for that. But it might happen that we notice, oh, this and that package — it would be better to have it checked in dak as well. Though actually that's also a sign that this maintainer should perhaps go through NM again. Yeah, basically. I remember one NEW discussion we had about one of the GTK libraries or something like that, which they threatened to upload for a long time and we thought was going to break a lot of stuff — they all come together at the same time.
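The freeze-file idea above could be sketched as a simple check at upload-processing time. The file format, function names, and reject wording here are invented for illustration — dak has no such mechanism as described in this session:

```python
# Sketch: reject unstable uploads of packages listed in a
# release-team block file. The one-line-per-package format
# ("source reason...", '#' for comments) is hypothetical.

def load_blocks(text):
    """Parse a block file into {source: reason}."""
    blocks = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        name, _, reason = line.partition(" ")
        blocks[name] = reason or "blocked by the release team"
    return blocks

def check_upload(source, suite, blocks):
    """Return a rejection message, or None if the upload may proceed."""
    if suite == "unstable" and source in blocks:
        return ("REJECT: %s is blocked: %s. "
                "Please use experimental for now." % (source, blocks[source]))
    return None
```

Note the rejection text carries the "please use experimental, this is temporary" advice the discussion asked for.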
Sorry — can someone read out the IRC question when it comes in? It's just an IRC comment: this question is just about the patch for the library symbols stuff. Right. Oh, okay, excellent. And I think some of the points just raised are very familiar from the discussion around the introduction of testing — the possibility to block certain updates from happening is basically shifting a part of testing into unstable and telling people to use experimental as the new unstable. There's also a question here on IRC that's similar: making experimental a full suite, and not only an extension to unstable. I have other suites where I'd consider that more important. If we start making full suites, I would first consider proposed-updates, where it could really be useful — if p-u were a full suite, we could even do some removals before a point release. But even that I don't think has priority. What I'm thinking about is whether it's possible to have staging areas in experimental, where you can upload something that breaks some ABI, or something like that, to experimental. Okay, staging areas are a different thing — let's finish up with the blocking of stuff first. Okay. So, who would be offended at the idea of having their uploads to unstable blocked by the release managers? I suppose at least when asked face-to-face like this, nobody will say anything. Once you're not face-to-face and you've just got an email in front of you, people will be offended. For some reason, some IRC nicks come to my mind. Yes — I know a few people who would be offended about it. I don't think they're going to mind much.
But presumably we want to keep the blocks limited — three days, not more than a week or so. Yeah, just for very, very short times, for finishing touches. Except the pre-release one, for libraries, which could last longer in theory. Right, but that's fine as advisory rather than enforced. It's planned to start out advisory. But of course, if we create such a mechanism, it would be nice to be able to use it, with the agreement of the ftpmasters, to say: this and that library is now definitely blocked. For example, I actually remember one library last time. It ended well, because we told the maintainer what they should upload, and they did exactly what we said. But if that maintainer had decided to make a fuss about it, it might have been a case where I would have liked such a check. Now we're talking about conflicts between developers — that's really something that has to be handled case by case. Yes, of course, a case-by-case decision, nothing else. One thing I will mention — and this doesn't detract from the benefits of actually doing this — is that there have been cases where we were doing a transition, trying to get all the packages together, and even if we identify all the packages we think are involved and put blocks on new sourceful uploads of those, some of the missing builds may still get caught out, because some other library they build against gets re-uploaded with a new SONAME and approved through NEW at that stage. We've been blindsided by that before, and it's very hard to... You cannot prevent everything.
Yeah, I mean, it is possible to go: okay, these are all the things we're missing, this is what the libraries they build against are, and tell all those maintainers not to upload, or freeze all those libraries too — but at that point it gets to be a bit much. Well, it's the same near release: you're close to releasing, and then suddenly, for no reason at all, some package... I mean, it's difficult. You don't think about these things; you're never expecting people to upload these libraries. That's a hard problem. So, another thing: everyone knows that dinstall now happens every 12 hours. Okay. But... why? I hoped it was videotaped. Well, as long as it wasn't facing towards the audience. So, one of the other things Joey wants is to have that happen more often. I have one idea for that — I'm not sure it's a good idea, but we could discuss it: run dinstall itself very often, every 3 hours or whatever, but don't do more mirror pushes — keep the 2 mirror pushes per day we have now, and have something like an archive that also carries the newly uploaded stuff of the last 36 hours, mirrored as well, so that ftp-master itself isn't hit so hard. I must say I see only very limited benefits from multiple runs in practice, mostly because they're only useful for the architecture that gets uploaded directly, not for the architectures that get autobuilt — the buildd administrators have the habit of doing their signing once a day or so, and they're not about to change that. They're not going to be signing every 2 hours, because that would just break their other work during the daytime. And with uploads spreading more and more across amd64 and i386, I doubt you're going to get any real benefit beyond the 2 times a day we have now.
Twice a day is nice — twice a day is nice, but I would be very much against additional mirror pushes, because mirror pushes do break things while they're happening. I think we should stick to 2 mirror pushes and just have the new packages somewhere available for people who can't wait another 2 hours. Selective pushes could be an option. But we're really talking about a Packages file for incoming. A lot of people are then going to actually put it in their sources.list and hammer incoming instead of the mirrors. You just need to push it somewhere. Yes — not incoming itself, but somewhere that gets a push every 15 minutes or whatever. Does it make sense to increase the frequency only during specific periods of time, like during DebConf? I think the main problem is still the same: you don't get buildd uploads frequently enough. People create things at DebConf and want other people to test them the next day; for those on the same architecture it could help, of course. I think in the future we should have a local archive for people to upload to — a dak running here — so they can do stuff and collaborate. As usual, the main argument has been: I want to get fixes in quicker. When something's broken in the installer or whatever, he just wants to be able to get fixes uploaded and have them available for the next test runs.
How long has britney been running these days, with imagemagick slowing everything down? It's been taking several hours — three to four hours, I think. That kind of puts a limit on how often we can cycle things, if britney has to be part of it. It used to be much faster, by the way; it's slow because something is uninstallable and far too complicated to resolve — imagemagick is one of them. It's still only running for... it's been running for an hour and 50 minutes. Okay, two hours now. It was ten minutes once — it was very quick before. Okay, so britney's two hours now. So how often could we run? Four hours, maybe, technically. You have a question from IRC? Oh yeah: does the FTP master team keep a bzr repository updated? Committing stuff to the repo they've deployed, without giving a changelog or anything for it — you can just find changelog entries like "adjusting stuff with ftp-master", "catching up with ftp-master", something like that. Yeah — the local repository is up to date, but still not checked in. Features? I have one: I need to get the tags into the Packages file. Without bothering ftp-master too much, I would be able to generate an update of the tag overrides once a week. So — before you were here, someone, Frans or Andi, was mentioning automatic byhand processing, for d-i and the tags. But isn't the tags one a bit different from d-i? Yeah, it's still byhand. But you need scripts for every type of byhand — several scripts.
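Dispatching among several per-type byhand handlers could be keyed on the uploaded filename, as the discussion goes on to suggest. The "byhand-<type>" naming convention and the handler names below are assumptions for illustration, not an existing dak mechanism:

```python
# Sketch: pick a handler script for a byhand file based on a
# "byhand-<type>" prefix in its name. The convention and the
# handler script names are hypothetical.

import re

HANDLERS = {
    "di": "byhand-di",    # debian-installer images
    "tag": "byhand-tag",  # debtags tag overrides
}

def dispatch_byhand(filename):
    """Return the handler script for a byhand file, or None to
    leave it for manual processing."""
    m = re.match(r"byhand-([a-z]+)[-_.]", filename)
    if m:
        return HANDLERS.get(m.group(1))
    return None
```

Anything that doesn't match the convention falls back to the current manual handling.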
So you need a way to identify which one should be run, and I guess that would have to be checked by hand otherwise. You could make it something like: the tag files are named "byhand-something", so you have the type in the .changes file instead of just byhand. And there are several more — a few of the documentation packages have byhand too. Developer's reference? No, debian-policy, I think. The release notes don't, at the moment — no, we make the tarballs, but we don't upload them. But perhaps, if the auto-processing works really well, we could go back to making all the tarballs as part of the build. Just as an idea, I don't know. It could happen, but you'd also have to have a need to publish them. I think there are a lot of packages that could gain from such auto-processing. The thing with the release notes, of course, is that they're not uploaded as a package at all. So for the release notes, if you wanted to process them that way, you would need a special upload type that can be accepted without an actual package. Well, several of these are purely byhand — you can upload just about anything byhand. I have uploaded one of the tarballs; it just hasn't been processed by hand yet. So the release notes aren't in the distribution at all? No, they're only published on the website and included on the CD, based on the website build. But it might be nice to publish the tarball when it's done. The reason it's not a package is basically that it's mostly prepared too late, and you want to keep it up to date — it's no use having a stale release-notes package in the archive; people should go to the website. So: staging areas. Did anyone actually see Robert Collins' talk? Pardon? Anyone?
Robert Collins did a talk on "release early, release often" — there's always the next one for you — which was mostly about staging areas: you create a staging area to do all the work on getting the new GNOME version done there, so that it doesn't interfere with separate work on KDE or PostgreSQL or MySQL or whatever else. Whichever one gets ready first gets pushed into unstable, and then all the others need to re-sync with unstable. So if KDE suddenly has some packages that depend on libraries coming with the new GNOME, they need to get updated, otherwise they're obviously not going to have matching dependencies for unstable. The aim is to desynchronize transitions so that they can go into testing more smoothly. That's the general idea. Does everyone know what I mean when I say suites and components? Does anyone not? I think they know, but possibly wrongly. So what's a suite? Can someone give me an example of a suite — and not him or him. Ah, I didn't mean that. No, don't get it wrong, or we will laugh at you. No!
Oh come on, anyone. So: suites are collections of packages, and in a suite you can only have one version of any package. That's something I wanted to change, by the way. I really don't think that's going to change — and it offends my mathematical sensibilities. Components are main, contrib and non-free — just different ways of separating a suite, which we have always done by license. Having staging areas means you want different versions of perhaps the same package, compared to unstable and compared to different staging areas: if you're updating some application to work with the new GNOME, you do that in one staging area, and maybe you want to update it separately to work with the new PostgreSQL in a different staging area. So that basically means dynamic creation of suites. Creation of suites is currently a completely manual thing: we edit dak's configuration by hand, edit the projectb database by hand, hope that it all makes sense, try to remember to commit it to the repository, and make sure to create the right directories on the archive — and then you've got your new suite. So that doesn't happen often, but it's kind of what's required for the general idea of staging areas. If you make it less generic, it could work without so much effort. Less generic? Say, if you could just upload to a staging area you name yourself. That works as long as you don't actually have two different transitions for the same package — which you often do. No — as long as you can say "we can rebuild everything again in unstable", it works: you just publish a package in experimental, then build it in unstable. If that's enough, it works easily; if it's not enough, it's hard — it's a bit more involved. He basically wants different development teams to be able to work separately without being bothered by the others. So you would upload to a specific staging area. And this is specifically for things that aren't really ready to be released yet — the unreleased GNOME development stuff, for example. The other thing it lets you do: if you've got a separate Packages file for each of these staging areas, you can put it in your apt sources.list and pull from it automatically if you're working on that particular task, without having to worry about whatever else is in experimental polluting things. In a staging area, the only sources that are different are those for the packages you actually want to build against — the new GNOME or something like that. You also want binary packages for everything that depends on them, but you don't want another version of the source. But that's something dak currently can't really do: have one source with multiple non-linear versions. Yes, you can. What do you mean? I think you need different suites for them. But then the problem is that the "newer than" relation, as we currently understand it, doesn't really mean anything anymore — which package is newer, the one for GNOME or the one for KDE? No, that doesn't matter. Yes, but what version are you using for the binaries?
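For illustration, consuming such a per-staging-area Packages file via apt would just mean an extra sources.list line. The `staging-gnome` suite name here is invented — no such suite exists:

```text
# Regular unstable, plus one hypothetical staging area:
deb http://ftp.debian.org/debian unstable main
deb http://ftp.debian.org/debian staging-gnome main
```

apt would then prefer whichever archive carries the higher version, so the staging area only overrides the packages it actually stages.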
You can have one transition at a time. Yes — essentially each staging area builds all dependent packages for the transition it is responsible for, and when you say "I want to merge this staging area", you move all the binaries from it, and all the other staging areas have to adapt to that. But if you have a package in multiple staging areas at one time, because it needs to be rebuilt against two different libraries that are being staged, each of those binary packages would need a unique binary version number, because the archive doesn't allow two packages with the same filename. So probably the staging area name would have to be in the version for packages that are just rebuilt for that reason and don't have a source change — that's what you're talking about. Yes — it starts simple, but it's actually not. We'd need naming schemes — we'd need to extend our binNMU naming scheme a bit for this, because +b1, +b2 isn't enough anymore. We'd actually want the experimental staging areas to have binary version numbers that sort earlier than +b1, because we'd want to be able to do the regular binNMU when that staging area finally goes to unstable. You could do something that makes the real +b numbers sort higher than the staging-area ones. Somebody said the staging area name should be part of the binary revision — that would be good, if only so you can't mix them up. You could make it plus-something that sorts less than anything, so the next +b number is still higher. Would the buildds cope with it?
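The version-sorting point above is worth pinning down: Debian version comparison gives `~` lower precedence than anything, including the end of the string, so a hypothetical `~stagingN` suffix would sort below both the plain revision and any later `+bN` binNMU. A simplified reimplementation of dpkg's comparison loop (a sketch, not dpkg's code; the upstream/revision and epoch splitting is omitted) illustrates it:

```python
# Sketch: simplified dpkg-style version comparison, enough to show
# how '~' suffixes sort relative to '+bN' binNMU suffixes.

def _order(c):
    # '~' sorts before everything, even the end of the string.
    if c == '~':
        return -1
    if c.isdigit():
        return 0
    if c.isalpha():
        return ord(c)
    return ord(c) + 256  # other characters sort after letters

def verrevcmp(a, b):
    """Return <0, 0 or >0, comparing a and b like dpkg compares
    version strings (without epoch/revision splitting)."""
    i = j = 0
    while i < len(a) or j < len(b):
        # Compare the non-digit parts character by character.
        while ((i < len(a) and not a[i].isdigit()) or
               (j < len(b) and not b[j].isdigit())):
            ac = _order(a[i]) if i < len(a) else 0
            bc = _order(b[j]) if j < len(b) else 0
            if ac != bc:
                return ac - bc
            i += 1
            j += 1
        # Compare the numeric parts as numbers (skip leading zeros).
        while i < len(a) and a[i] == '0':
            i += 1
        while j < len(b) and b[j] == '0':
            j += 1
        diff = 0
        while (i < len(a) and a[i].isdigit() and
               j < len(b) and b[j].isdigit()):
            if not diff:
                diff = ord(a[i]) - ord(b[j])
            i += 1
            j += 1
        if i < len(a) and a[i].isdigit():
            return 1
        if j < len(b) and b[j].isdigit():
            return -1
        if diff:
            return diff
    return 0
```

So a staging rebuild could be versioned like `1.2-3~stage1`, and the eventual `1.2-3+b1` in unstable would still supersede it.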
No The buildings don't do anything with experimental today anyway So the official ones would be inexperimental Okay and so not built for many architectures but from that there would need to be separate building stuff done particularly because you need to make sure that you're getting some of the libraries from unstable and some of the libraries from the staging area Yeah and you basically have separate one build from the staging area I wonder what it would for each staging area That's not too much You can do the less lived You don't have to do the whole archive so it's you want to be able to apply special hex and you're staging But I see we need a lot of momentum No but at the moment you would because you need extra momentum to do all the build these all the one build databases and the archive because there's no scripts for any of it so you need to go through all of those at the new staging area delete any staging areas that are not necessarily and make sure it's all correct by hand which is why we don't do it Basically if you do such a concept the concept would in my opinion include some way to outigrate the one build suites because otherwise you don't do it otherwise and it's not so hard to date the one build suite if you know what to do by the way So this is not going to happen tomorrow Basically Yeah It's a very cool idea and it would make experimental a lot more usable Yeah and there have been various ideas that were tossed around particularly on IRC about how you go about moving these things into unstable once you're done staging Yeah I mean if you upload a new source if you move the source across and rebuild or if you just move the source and binaries which I think moving is really not what you want to do I think you want to do a clean rebuild against unstable Just to prevent any accidental mistakes Well you may have other transitions that have already started I mean reverting a transition that's in place that's in progress and you just if you're moving 
stuff around, it's better to do the rebuild against whatever is in unstable. Yeah, but one advantage of such staging... two advantages of such staging. One is that you can just see whether the source code changes are benign, that we know it works; that you could mostly do even on the basis of the current experimental, because if it just groups the packages correctly, you could probably do it with a few extra indices. The other thing would be to say: when we do a complete staging and move the stuff over, it's instantly ready for the testing transition, which would be a cool thing. It would be really cool if it works. Like I said, though, you run into the problem of having to manage... if you are doing this with multiple staging areas running at the same time, you have to rebuild the whole staging area anyway before you can push it to unstable if the two are related. If this one goes in first, you can't just push the other one to unstable, because you're going to revert a library transition that the first one already committed. Actually, I think in a complex version there are a lot of things to be sorted out. I fully agree that there are a lot of things to be sorted out, but still, I think it would be great if one could keep the binaries as they are, because it would reduce the turnaround time needed in unstable; otherwise, if you rebuild everything, you could just as well upload the source and let it build over there. The alternative approach would be to do it in the staging area: just before syncing, rebuild within the staging area. But that means you have to have a really solid buildd network for the staging areas. Right. What's your upload target in those cases? Is anything going to complain about that? Everything would, because the suite doesn't exist yet, but it's no worse than uploading stuff for unstable and having it end up in experimental or stable. And you can... Yeah, it's doable with upload targets as they are today. It is like
experimental today. For a big package on which a team of people cooperate, it wouldn't really work, because some of the team are on one architecture, some on amd64. Right now nobody really uses experimental like that, so it's not an issue that a package doesn't exist on some architecture, because you wouldn't want to use it there anyway. But if a group of people actually has to cooperate, there should actually be some proper buildd network support. You could say that staging areas only cover a subset of the architectures, though; those could go without ARM, say. I mean, you might have some staging areas that do support ARM because they're doing embedded stuff they really care about, and I don't see a problem with that, except if you really want to move stuff from experimental straight into unstable without rebuilding. If you want to support that, you will need full building. Although then it's trouble to set the upload target; you'd need to change that. I'm not really concerned. If you make such a proposal, it goes in the same direction as the procedure of uploading, staging, and synchronizing between different developers; and once you are done, you know that everything is fine and you push the packages to unstable. You have a procedure; the last step is to take the packages out of the staging area and put them into unstable, but just before that, you build for all architectures and check once more before merging from the staging area. And I really agree with him, because I think that, for example, we could have only amd64 and PPC, because the developers are just working on those; you just define a subset, and once you are done, you add the remaining architectures before you merge the staging area, which is really to check that everything is fine, and in particular that it builds across the whole archive; it's just a way of phasing the solution. So basically what you are saying is: you should have the option of saying "please activate these architectures for this staging area" at some point, and then, before you push it into unstable,
you say, okay, now is the point where we activate all architectures, and maybe even use a different set of build daemons for that final phase. Next to that, I was thinking it could be interfaced with Lucas Nussbaum's rebuild network: in parallel, for staging areas, it could tell you "if this staging area were merged into unstable today, this would be the outcome"; as a parallel thing you could have reality checks for staging areas. Well, actually, we did something like that already: the last time we had a completely new glibc, the new version of the glibc was used for test-building all the build-dependencies, right; you could do that more systematically. It was a quite helpful feature before the release, to unbreak stuff. The big problem is, if you have multiple transitions going on and you want to merge one of the transitions back into unstable, you don't want to rebuild all of it at that point, because you want this merging period to be as short as possible. Yes, you'd need to rebuild it beforehand, and basically at that moment you can't merge anything else into unstable any more; you have to do the transitions in a sequence, at least the merging, you need to do the merging in a sequence. So who is going to handle that? Is that going to be a release management job? I presume so. I was going to ask: how does this interact with testing migration? You move one staging area into unstable, and then do you wait until everything migrates to testing before merging the next staging area? I think it would still need the 10 days in unstable, normally, because you're going to get a much broader testing base there. You may well know, if you've got a staging area, that you're not going to have users on all architectures; in unstable you're much more likely to have a larger user base across all architectures. We're talking about the fact that we need to choose which staging area will be merged into unstable and in which order; maybe, in fact, you can just submit a proposition to merge your
staging area once everything is ready: okay, you know that you're ready to do it, but in fact you give it 10 days, for example, and within those 10 days everyone can submit their own staging-area merge proposition, and you just choose... you compute, in fact, the dependencies between the different staging areas and which one will impact all the others. It's not so complicated, I think, because, for example, if you've got a staging area with something like objective-caml, it doesn't impact a lot of things, it's really small; but with Python, for example, you've got something much bigger. If both the objective-caml and the Python staging areas ask to merge into unstable, I think Python should go before objective-caml, and in this way we can choose the correct order so as to make the least effort for all the people working in the different staging areas. I think there's really no need to fully automate this, because you can apply some common sense... definitely. In parallel, I'd say step one is doing it by hand, then step two is automatic merging; we'll probably discuss this further; we're going to need a lot of implementation work. I have another topic I would like to see in dak. Basically, what happens today if someone uploads an architecture:all package is that it just appears immediately on all architectures; and if you have a mix of architecture:all and architecture:any packages from one source, the all packages appear at once but the any packages only after the build has been done, and that breaks the architectures in between. The idea would be to keep the old version of the all packages in the not-yet-built suites, so that it stays consistent; that makes sense, and it's been on the dak to-do list for God knows how long, but I still would like to see it. The problem with doing it is getting the behaviour correct, so that... I can't even remember, it's been that long... getting the behaviour correct so that it will update the all package when the any package is built. And actually that's the easy part; it's just that it needs dependency calculation of some
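The merge-ordering idea described here is essentially a topological sort over the staging areas. A minimal illustration (the area names and the dependency edge are made-up examples for this sketch, not a proposed interface):

```python
from graphlib import TopologicalSorter  # stdlib, Python 3.9+

# Hypothetical staging areas. An entry "ocaml": {"python"} means the
# ocaml area was (in this invented scenario) built on top of the staged
# python, so the python area must be merged into unstable first.
merge_deps = {
    "python": set(),
    "gcc": set(),
    "ocaml": {"python"},
}

# static_order() yields areas with no outstanding dependencies first,
# giving a valid order in which to merge the staging areas.
order = list(TopologicalSorter(merge_deps).static_order())
```

Independent areas (here `python` and `gcc`) can be merged in either order, or even in parallel; only the edges constrain the sequence, which matches the "Python goes before objective-caml" reasoning above.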
description in dak, which isn't there at all at the moment. The challenge is, if you've got an architecture:all package that depends on an architecture-specific package which isn't available on, say, some architecture, then in that case you'd kind of like the architecture:all package not to appear there at all. Does that make sense to everyone? Okay, good, excellent. Getting that to work right is a lot more difficult. If I forget that for a second... actually one of the preconditions for that was the handling of the source package, mostly the source version... maybe, okay. If you need dependency calculations in dak, doesn't it make sense to use the advanced French technology for dependency calculations? The what? The very advanced technology for dependency calculations that the French have; they do all sorts of things. Is it a separate tool that you can call, or is it something you'd need to have control over? That's quite important; otherwise, it's very fast. If you need something that you can control, as in source code, then it should be written in a language that's known to more people than just the French. But actually, it's the EDOS stuff we just discussed on IRC; that's quite fast. And the problem might be: if you're updating an arch:all package that depends on another package that's not built on the architecture you're looking at, but might be after it's built, then how do you know whether to update it or not? I don't think that you really need to make a full dependency check; it's probably even enough if you just say: well, if there are any binary packages that are older than the source, you just keep the old all package as long as that's the case. I had worked on patches for this once before, but I had problems with the component that keeps the old versions around in unstable, whatever it's called these days; that's the one where I had some issues, probably not too hard, but you need to change that component as well to achieve it. This only solves the problem where a single source package has
architecture:any and architecture:all packages; it doesn't solve the problem where a certain architecture:all package depends on an architecture:any package from another source package. I care much less about that case, because it's a much less common case. The question is whether it would make sense to have a sort of dependency declared in the source package that says... I mean, at that point you're trying to get unstable completely clean. I think it's okay if it breaks at times; if it just breaks less often, we should take that, but it will still break sometimes — it's unstable. The problem will appear more and more as people switch to amd64, because currently only people on the slower architectures see it; now that amd64 is more and more common alongside i386, people will notice the problem and start to complain. It has already started. It's kind of the basic idea behind unstable that if you are using unstable, you are able to handle such temporary breakage, and I must say I've had no problems at all dealing with it using aptitude: you just put the package on hold for a while if too much gets removed — it does it automatically in this case, but you should use aptitude. You have one part of the infrastructure, the archive, depending on another part of the infrastructure working properly, and we could avoid this dependency by hacking dak a bit. You always need discipline to make this work... but it's not much work, because the breakage will be crushed out within a few days. I'm not so sure. Of course it can be improved, and it would be nice if it could be improved, but if the cost is a huge increase in the complexity of your archive maintenance tools, then you have to wonder whether it's worth it. Should I say something? I don't agree with you, because... well, maybe I'm a bit of an optimist, but when I was reading the thread, I think that you want to have unstable as clean and stable as possible when you are working in a team; so, for example, if you hold a package, and the people in the same team as you have their packages built against
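The "just keep the old arch:all package until the arch:any siblings are built" heuristic mentioned above can be stated in a few lines. This is a sketch of the idea only, with invented names and data shapes, not dak's actual logic:

```python
def publishable_all_version(source, arch, new_version, old_version, built):
    """Decide which version of a source's architecture:all packages to
    expose on one architecture: the new one only once that architecture's
    buildd has finished the arch:any siblings from the same upload;
    until then, keep publishing the old version so dependencies stay
    consistent. 'built' maps (source, arch) -> last version built there."""
    if built.get((source, arch)) == new_version:
        return new_version
    return old_version
```

With a hypothetical source `foo` at version `2.0-1` whose amd64 build is done but whose arm build is not, amd64 would see the new arch:all packages while arm keeps the `1.9-1` ones, which is exactly the behaviour being asked for.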
different libraries, it's a problem. If we don't have things as stable as possible... it's not "as stable as possible", it's stable enough that a whole team can cooperate on the basis of unstable. If we break unstable enough that people choose to hold packages... I don't upgrade just before building my own package. I must... I must upgrade. I try to stay upgraded as much as possible, so I only hold things that are actually broken. How do you know that the package you keep on hold doesn't break something else? I know, because I watch what I'm doing, but that takes time. I agree that it's a good step towards building things as soon as possible, but basically what you're suggesting is to rebuild the world for all packages... I don't understand this... no, I don't want that. The basic question is: is unstable meant to be a tool for team cooperation? I'm not entirely convinced that's the case; it's meant to be the point where the entire project operates. Sometimes teams will need a place that's a little bit more private, so they can do more damage, or whatever. Also, my idea is: maybe inside a team, if you really depend on having the latest and greatest of everything, you basically want to have a repository just for your team, which also has shorter update cycles and so on, because with the current archive pulse of 12 hours you still have the issue of coordinating a team. I think the other thing, to get back to the original topic: how is this solved for stable at the moment? I can see that we may not want to solve it for unstable, because it's simply not worth the effort. For stable it's solved, first of all, by testing not letting the broken packages in, and for stable itself by manually not allowing packages in until they've been built on all the architectures, and basically not letting in dependency changes that would break other packages. If I have a binary:all package depending on a binary:any package, and it's not built for a certain
architecture, then it doesn't get accepted into the archive until it is, or it comes in only partially. We don't actually do that quite well... think of obscure packages. The dependency check is the simplest approach, and it's somewhat hard to implement, but if someone wants to write it because they think it's really easy, go for it. I would actually say, though, that it's code, so before it's committed there also needs to be someone who will take care of it and maintain it; it's new code, so "it finally works by now" isn't enough. One thought about staging areas, because I was thinking: GNOME would affect maybe 2,000 packages, KDE maybe similar, a Python staging area 5,000, a GCC staging area the whole archive. So how big does this go? Depends on how much disk space we have, how fast we can do the builds, and how good the infrastructure is; I mean, at the moment, zero. The idea would be to go as big as a staging area for a new version of the toolchain, which means the entire archive, okay. So another thing which, I don't know about anyone else, but I quite like, is the idea of some accept-time checks that call out to a piuparts machine, or maybe to the grid thing that does rebuild testing, or whatever sort of test we want. It would be mostly for new packages... I mean, particularly: if a package is uploaded and there's an automated test we can run that it will fail, then we want to reject immediately rather than let it into unstable — to give an immediate response back to the maintainer, or as immediate as we can, that it's broken, and a fairly clear response that this has to be fixed before we're going to accept the package. At the moment, what we do is put the packages in unstable, and a few weeks later, when the freeze is coming up or whatever, we have people file lots of release-critical bugs on them, which works; but having it not get into the archive in the first place would presumably make the release management job a little bit easier and would make the problems a little bit
more obvious, so I think that would be really cool. It could be interesting to do it before the release into testing rather than before acceptance into unstable. Like, the rebuild of the whole archive takes about 8 hours, if I remember, because you have to wait for OpenOffice to be built on one of the nodes. You'd only test the one package that's being uploaded at that moment, or you might test everything that build-depends on the uploaded package, as long as there are fewer than 50 of them and openoffice.org isn't one of them. But I was thinking that you could do that while the package is in unstable; then you'd have the 10 days for it to run. But if you do that, you're straight back to the release-critical bug, and the maintainer possibly not worrying about the release-critical bug for a while, and so probably having disrupted the transition. We're already looking at something like that for NEW packages, because we have a lot of packages coming in that immediately fail to build. The problem with doing it for NEW packages would be that a lot of these machines are based outside the US, and we aren't able to export NEW packages. But actually it would be enough... you could even say it would only run on i386 in most cases, which should be a machine that can be in the US. And by the way, you could export it if it went: first send out the notification, let it build on this buildd, and then accept it into the archive. It needs to be accepted into the archive before we can export it? That's the way we're doing the export: you can send the announcement immediately, but it doesn't help, because we don't do that until it's available in the archive, so we must do both at the same time. I mean, we could change that, but it would be a lot of effort, wouldn't it? I don't think we'd want to do full builds... I mean, I agree that it would be fast enough, but it would be more efficient... and I was also thinking: such automated tests — as far as I know, piuparts and also the rebuild-testing
project — all need manual review at the end. And if we do this before acceptance, it needs to be fully automated. There would need to be a manual selection of the particular checks that warrant rejection, as opposed to the warning cases where we're not quite sure; they'd all need to be automatically testable, and possibly need to be overridable via a line in the .dsc or the sources or control or whatever. So, was the udeb migration already discussed? Have you seen the proposal I wrote to the release list in February? Actually it's a dak issue... of course I agree that we should fix it; it is part of the dak suite, isn't it? No, it really depends on what the proposal actually is, as to where it belongs... it does interest me. So what's the proposal? Can you summarise it? The proposal is to have basically all udebs blocked by default — source packages that build udebs blocked by default — and to have some packages in a hints file that migrate automatically, so basically a generic unblock for any version that's uploaded; all others would need an unblock by the d-i release manager before any other hints are applied. Okay, when you say the d-i release manager, you mean the release manager? No, I mean the d-i release manager. I think actually we could add one or two persons to the release team... I think the release managers would need to confirm that. The problem is that the only person who can judge whether a udeb can be unblocked is the d-i release manager, because there are undeclared dependencies between udebs. It would still need to be the d-i release manager... that's fine with me; it's still the d-i release manager, but that's just automatic addition of a freeze line that's regularly updated for all packages containing udebs. But it would also require full dependency checking for udebs, which currently doesn't happen. Oh, right, yes, I see; so you also want dependency checking, so that if you want to migrate one package that
depends on another one that doesn't go in, it waits, rather than someone checking manually. So basically this full dependency checking would be good for a lot of cases where he currently unblocks udebs by hand. Please read the mail; I'll send you a link on IRC. I have identified four categories of udebs, based on why they can or cannot automatically migrate and why a manual process is needed in addition to purely the dependency checking. Put that on IRC, by the way. Yeah, you've put it up already, okay, good. By the way, there are a few details I would like to see done a bit differently, but generally I support this proposal. I already replied. You did? When? I haven't seen anything in the mail... you've got to read it... no, I haven't seen a reply... I haven't typed it yet, it's only in my mail drafts... now I'll send it, I will bounce it from my mail. Okay. So, a couple of the other things I had down were about doing CUT releases — everyone knows about CUT, constantly usable testing? — doing stronger security support for it, doing installable releases of it on a more frequent and regular basis. My presumption, I guess, is we'd just need to add an extra suite to the archive that we push testing into, much as we would for an actual release. I don't think the technical issue is the biggest one; technically I think it's easy, but I really see the issue as being that we need to make sure we can upgrade from one testing version to the next testing version, and that's a non-trivial task. And there's also the issue that, especially for some installation-related bugs, you only see them after packages migrate to testing, because basically nobody installs unstable directly, so you would need to do installation testing before any release. Well, I mean, that depends on how literally you want the "constantly" part; constant means "at the same state", not "at a really high state"... it would need to be a high one. I must say that I'm fairly happy with the beta archive releases we have done for the Debian installer during the etch timeline; I think
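The proposal as summarised — udeb sources blocked by default, with a hints file carrying either a generic unblock or a per-version unblock from the d-i release manager — could be sketched like this. The hint representation here is invented for illustration and is not britney's real hint syntax:

```python
def udeb_migration_allowed(source, version, unblock_hints):
    """Udeb-producing sources are blocked by default; one migrates only
    with a matching hint from the d-i release manager. A generic hint
    ('*') allows any version of that source to migrate automatically;
    a versioned hint allows exactly that version; no hint means blocked.
    (Sketch of the proposal under discussion, not real hint handling.)"""
    wanted = unblock_hints.get(source)
    return wanted == "*" or wanted == version
```

So a source with a `"*"` hint behaves like today's unblocked packages, while everything else waits for an explicit per-version unblock — the dependency checking between udebs would then sit on top of this as a separate pass.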
they have been usable, and they have been sustainable as well. The only thing is the period of a month or so before the next beta would come out, when the old beta was usually unusable. Yes, but you're going to keep that anyway, because... well, only for some installation methods; the CDs would still be fine. Yes, and to be honest I don't care much about netbooting during such times. The way I've been managing the releases, I've tried to get those periods as short as possible, by basically making sure that everything was in place and there were no serious blocking issues before starting the transition and before starting the migration to testing; while Joey had a different approach — he would migrate udebs much more frequently from unstable to testing. So there are also release-management style differences involved; we'll have to keep that in mind. Yes, absolutely. So, okay, another one: has everyone seen the morgue on merkel? /srv/ftp.debian.org/morgue/pool has at least all the old sources, and for the last 6 to 12 months the old debs. So if you've uploaded something to the archive and it has expired, and it was one of the times when snapshot.debian.net had lost things or wasn't available, it is in the morgue. And one of the things we could, and probably should, do — and have been talking about for a little while — is making a public source morgue, so that any source that's been uploaded to Debian in the last five years is available underneath one URL. So that... sorry, that lets us get the GPLv3 thing — sources being available for the next three years — handled at this one site, which also means that the derivatives then don't have to worry about redistributing all the source for all the Debian stuff they're based on, just the source for the stuff they've changed, which at least some smaller distributions would quite like. The other question: would it also reduce the problem we've had with d-i, that some packages were updated after the last RC
release? It would solve... I don't know about the GPL sort of way... it would solve it in a technical sense, but would the dependencies and such get checked properly? I'm not worried about that, because they are mostly minor updates at that point; the main concern has been to have the exact source version for those udebs available. Yeah, we've also got a patch for that — have we put that patch in place? The one that's still not quite right... so I should put that in... let's do that first... no, there's your mistake. So we've got a patch that adds all the udeb packages used to build the debian-installer images — the byhand parts of d-i — into a spare suite, which will keep the packages in the archive; it will keep them from being removed, because of how it works, and the sources too. At the moment we don't have that patch active, because we actually had it in for a d-i upload but then took it out... wasn't the last d-i upload done byhand directly? And it uses a suite for it; that's what this patch does, so this way it's automatically no longer byhand. It's also a separate such suite per release for d-i, but it's a hidden suite, and we don't ever see it, so it's all right... exactly, it works. Is it ever cleaned out? No, only if everything falls out of use... so we never clean it up. That's okay, you don't see the ugly parts... I see them... close your eyes. Does anyone want to change the .changes files to use a secure hash instead of only MD5? You said that's a good one. There was talk about this on debian-security... no, no, no, that's for the announcement list, where we'll do the verification using SHA-1 already; this is for the changes file, which only includes an MD5 sum of the files we're uploading. Isn't it more a sanity check than a security measure anyway? It's the only security measure we have: if the MD5 sum can be broken, then you can take a valid PGP signature... you can take somebody's valid PGP signature on
something they've signed, and replace the package that they actually uploaded. I think it would make sense to optionally allow a new method, let it run alongside the old method for a while, and then somehow let the old one die; but I don't think it's too urgent. Is it possible to add extra hashes in a way that existing tools ignore? The problem is: we've already added the extra SHA-1 and SHA-256 to the Packages files, because there we just had an MD5Sum line, so now we've got MD5Sum, SHA1 and SHA256 fields; but in .changes we've just got the Files section, which doesn't really give us room to add the SHA-1s in any kind of obvious way. But you could... if you add an extra column, that breaks the format, and some other tools will break. No, but you could replace the md5sum with something like "sha1:" followed by the SHA-1 sum, like it's done in password fields, for example. Can we update the tools first and then allow the use? No, no, I was actually thinking that we block uploads in dak until someone decides to update the tools... that means we don't get any uploads for a while... that means we don't get any bugs! Oh, that's an interesting way to look at it... I guess that could be a problem. What I mean is: to support the tools, we define an extended checksum field that includes the checksum name, so that it's identifiable, and at that point a tool ignores every checksum name it doesn't understand, and we update the tools accordingly; you never change an existing checksum, you just add new checksums, and unknown new ones will be ignored, so we don't break the tools for the next versions. You could do a hack and have a new field like Sha1 that duplicates the file information with SHA-1 instead of MD5 — a multiline field, yes, just a field that names the hash. But that still means: if it's ever going to be the only hash in the file, you will still have to update the tools that read the
file before you can allow it to be the single, authoritative one. Yes, right. And I think your proposal was to add an extra Files-like section with a dash... yes... it would be enough to just replace md5 with sha1... yes, almost... sha1, then we'd have some practice in replacing them. Yes. I think the smart thing is to update the tools that process the files first, and only then allow the tools that write the files to use the alternative. The idea would be to have the .changes files, and ideally the sources too, generated with SHA-1 and SHA-256 at the same time while MD5 is still present; yes, and then, once the tools are updated at least in testing and maybe even in stable, only then start blocking... as soon as the tools are updated in all of testing, we could start allowing SHA-1 or SHA-256 uploads. Actually, if the source tools start generating SHA-256 file sections as well, that just gets ignored at first; but then there's a pile of uploads that have it, which lets us test the archive side. But actually the bad thing is that if you keep adding sections, it makes the parsing a little bit hard, and it looks a bit ugly to have additional headers that just repeat all the checksums; it's nicer to just say "the hash type is X" and use that. I agree, but then you have no transition: you can have tools that parse the file and support both, but at that moment you can't generate a file that says "use something else", because you'd have to update everything first. Yes, but that's it; you have two advantages: one, you can already start generating packages with the new fields, which you use to test the servers; and the second advantage is that you can test multiple hashes. No, but you don't need multiple... sorry... and then you have a transition period: you can just continue using the current format, which means there's no hash-type in the file, which means it's MD5; and from the moment you start saying in the file "this package now has another hash type", only then does it take effect. And I think it's also the .dsc as well as .changes, so it's consistent in the way
how changes files are written. So, I really like that idea very much. Now, how many places are there with tools that process the changes file? I mean, the .dsc... yeah, the .dsc in the source is more of a problem, I think; the main issue is always the changes file, because the .dsc is handled rather better than changes — changes is what's signed anyway. How many tools are there that actually do those checks, and where do they live? When I sponsor a package, I get the .dsc in a signed email, and I download the rest and use the .dsc to establish the trust, so I do want to have good hashes in there. So I hope that you check, of course, all the content that you sponsor into the archive anyway? Yes, I do check all the contents, but if someone slips malicious code in, I can't check every C file... I do read the diff... you don't check every C file... well, your call. But if you count: there's dak, where the database does some checks, so that's easy — once it's something like a check-signature routine, even if it's spread out, it's easy to fix; then we have dpkg-source and the similar source handling, then the changes handling in the dpkg tools, and there will be things like dput and dupload, so we have maybe ten places. Is there any use in checking multiple hashes for added security? That is, if we have SHA-1 and SHA-256, and someone breaks SHA-256 and not SHA-1, then we can fall back on SHA-1, for example, or whatever. I'm not sure that's actually beneficial in security terms. I was wondering about that. If we have multiple ones and one gets broken, we don't fall back on the others — we check all of them at the same time, so an attacker would have to break all of them at once; but it depends which one can be broken at any given time. So if someone breaks SHA-1, maybe MD5 is "more secure" than broken SHA-1? No, not more secure... it depends on the particular case. Actually, if someone manages to break both MD5 and SHA-1, they have far more interesting targets than the Debian archive; at least, I could name some of them. I wouldn't be so sure, because you could earn a lot of money if you
could break MD5 and SHA-1, and not just against the archive. Yeah, but never mind; if we have one that can be broken... of course, once the break is made public, you can tamper with anything you want, but no one would make it public, because that would just be a waste of your investment; we're speaking about billions here, it's really a vast amount of money. Maybe you're not interested in making money from the break, but simply in the havoc and the disruption you'd cause. Yeah, but I don't think so. The current state is: collisions exist for MD5, not for SHA-1; there are a few concerns because the bit sizes are getting too small, which just means: take a larger algorithm, and the larger algorithms would be SHA-256 and friends; that's the current state. And if somebody were able to slip in another change at some point, say ten years from now, it's easy to fix, because we can just refuse all changes files that are more than a week old; that shouldn't be an issue for this. Right... next point, one more, I'll just read it out for the rest of us: it's the request on the list for dpkg-sig support, which signs binary packages... but I have two more... well, by the way, we should stop soon. So, I have no idea about dpkg-sig support — does anyone use it? — and I personally don't see the point of it, so I'd need to know more. Is there a way to publish changes files in a public place without risking replay attacks or something like that? Um, I'm not even sure whether the changes files are published anywhere at the moment... but just on the definition: you can find the source files, you can find them publicly in the BTS and such, because they go via changes. How would you replay a changes file? Well, one thing you can't reconstruct is testing, just from changes files:
you need to have some idea which packages are meant to go in there. The only thing I think they've been used for is to verify that the actual debs on a recovered hard drive from the archive are the ones that were uploaded, and as long as the file is signed by a developer, there's not really a replay attack available there: all you're checking is that it was one that was uploaded, so a replay only tells you that it was one that was uploaded, which is what you're trying to find out anyway. Okay, well, I was thinking of other uses, but so nothing is really preventing us from publishing the changes files?

Well, they are published, on debian-devel-changes. Well, not only for source uploads; there was a mail address that got everything, but I don't subscribe to it. So I think there's nothing stopping us except bandwidth, human or machine or whatever. We've definitely got all the changes files ever, archived on ftp-master; I'm not sure if we sync them anywhere. The idea is to get access to them. Sure, but I just don't know if we do, mostly because the directory layout is rather unpleasant, and we don't really have them anywhere publicly available; but that's more that no one's done it than any real reason. And I mean, they're less than thoroughly useful, because they're signed by a thousand different people and you only know five of them. Those are things we should be fixing, or making less problematic, anyway. In that case you have five billion more things to check. Yes, of course, and I'm not too sure if it's really a good idea then.

But I would suggest that it would be helpful if we start using codenames for oldstable and stable, maybe even testing, for, well, reasons you can remember: we had problems with things pointing at oldstable. I think it would be a really good idea, because it would stop us having to roll over the security stuff as well. I try not to care about the naming of the suites, but it
would really help. And we do it with the other wanna-build suites; you just don't notice it in wanna-build. So how do you change this? In the suite table? You change that and you change the output. We could perhaps try to do that when we release lenny. We could do that right now; it's just a matter of changing it. We have two different sets: one is in dak and the other one is in wanna-build, which is independent, but changing both, I think, would be really helpful. I disagree; we should just do it. I'm not doing that without discussing it with James, of course. I didn't suggest otherwise. James, I have another one, but we will discuss it later; I just want it on the topic list, and I hope to remember it before we release. After this one, though, because I want to have something to eat soon. Anything else from anyone?

How does dak handle build logs and byhand? It doesn't; it doesn't see a build log at all. If you uploaded one, you would have to have it be a byhand section and get it processed that way. Would that be interesting to have? I think so, but the problem is that all the buildd stuff is done by wanna-build, and all the wanna-build stuff is completely separate from dak, so you'd have to have somewhere to hand it over, and I am not familiar enough with the buildd side to know how that would work.

So one of the things I'm going to work on is getting the build log out of wanna-build, and then adding another interface for people who do manual builds to also sign those somehow and get them into the same system. Just to go back on that: the idea was that maintainers who are manually uploading send their build log for the architecture they built, so that it appears with the other build logs. Actually, they have said you can just send a build log to the logs address and it will simply appear on buildd.debian.org. I just don't do it. Why shouldn't we? Shouldn't there be someone authenticating it? Oh, I don't mind, it
just works currently, so why should we change it? That's another question. Sorry, let's put an address in the topic on IRC; I am not going to make that topic change. We can also drop all the addresses from the changes; the address is even in the package, the changes file tells you where to send the mails to, so even I know it. Just ask a random developer.

Have you seen the last question on IRC? Yeah. Could we somehow add a preview to that? It's not yet complete, but it would help. I have no idea what you'd need to apply; just take a look. It's basically a shortened version of lisa. Where does the name lisa come from? I don't remember; that's okay. It just goes over the packages in the p-u-new queue, and you can add commands: accept it, whatever. Why does that need to be in dak, rather than just in, well, a home directory or whatever, the release managers' home directory? Well, I think it fits better in dak. But, I mean, if it's just something the release managers want to use regularly and it doesn't need any special permissions, then it seems like the release managers should have control over it, because if it's in dak then you don't have permission to change it, which seems stupid. No, I don't, because I don't have the permissions to give permissions. Oh, damn. But that was the plan.

Actually, have source-only uploads already been discussed in this session? Didn't we mention that? Okay, next topic. I see there's a problem there: architecture-independent packages are all built on developer machines, and you can't do a binary NMU for them, because if you don't change the source then the architecture-independent package can't really change. But, I mean, buildds are ultimately developer machines too. Yes, but I don't think you can really compare the environments; the buildd gives you the advantage that you have
a stable environment. Yeah, you only think that because you haven't really looked into buildds. A pbuilder run on a development machine that's updated regularly is probably a better, cleaner environment than the buildds. Right, the buildds have to take some shortcuts, like not removing packages between builds, just so that they don't get slowed down too much, because they have so many packages to build, especially on the slower architectures. So, I mean, they're relatively clean and relatively up to date, but a pbuilder will give you an absolutely minimal and absolutely up-to-date environment. And buildds will also give you some other checks, like checking that your build doesn't fail when some random other packages you didn't expect happen to be installed, which is kind of useful sometimes too.

Yeah, so a buildd would be better than the out-of-date testing or stable environment that you might happen to have on your laptop, but a pbuilder is probably better than either in turn. So my personal view is that we should make it very easy for developers to set up pbuilders and have those behave much like a buildd: you upload, you make the source available to your pbuilder, it grabs it, makes it easy for you to sign the result, and then it automatically uploads for you. I think that would be an equivalent and somewhat better solution. Actually, some people already have such an environment; probably the best thing is just that somebody writes it up, puts it together, and we keep pointing people to it.

Yeah, and if someone wants to, they can make the actual machine available to other people; then doing the equivalent of a buildd is just a matter of mailing the build log and the changes file back to the person who asked for it to be built, waiting for a response that includes the changes file signed by a DD's key, and then uploading it onward, and that is pretty much literally
a buildd environment. No dak changes are needed, because the checks are already in place from when the maintainer uploaded that source in the first place. So it's kind of off topic. Anything else? Food? I say food. Okay then, excellent.
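The round trip described at the end, mailing a changes file out and only uploading once it comes back signed by a DD, needs one check on the way back: that the signed document carries exactly the content that was sent. A minimal Python sketch, assuming a clearsigned changes file; it ignores OpenPGP dash-escaping and does not verify the signature itself, which real tooling would do via gpg against the Debian keyring:

```python
def clearsigned_body(text):
    """Extract the message body from an OpenPGP clearsigned document.

    Minimal armor parsing only: strips the BEGIN line, the hash-header
    block, and everything from the signature onward.
    """
    lines = text.splitlines()
    try:
        start = lines.index("-----BEGIN PGP SIGNED MESSAGE-----")
        sig = lines.index("-----BEGIN PGP SIGNATURE-----")
    except ValueError:
        return None  # not a clearsigned document
    body = lines[start + 1:sig]
    # Skip armor headers (e.g. "Hash: SHA256") up to the blank separator.
    while body and body[0].strip():
        body.pop(0)
    if body:
        body.pop(0)  # drop the blank separator line itself
    return "\n".join(body)


def same_changes(original, returned_signed):
    """True if the returned clearsigned changes file carries the same
    content that was mailed out, i.e. only a signature was added."""
    return clearsigned_body(returned_signed) == original.strip("\n")
```

The comparison only guards against the content being altered in transit; the signature check and keyring lookup are the part that actually establishes who signed it.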