Yeah, so this is about stable, volatile, backports, and security, actually. So first I want to have a look at the current policies for each of the suites or archives. You have here, for instance, the one for stable. So if you want to do an update in stable, we want it to be a fix for a security issue or a fix for a critical bug, a bug that's critical for the users or for the maintainers. It can also be, of course, a fix for installability or buildability, or to get the architectures back in sync. And then, of course, how it works is that you send a mail to the release list with a patch for review. And the patch should actually already be applied in unstable, so it's already a bit tested. Then, if you look at the one for volatile, I don't know if you can read it, but there is a whole list. The thing is, volatile was not official, and one of the remaining items that reminds us of that is that the package should be prepared in coordination with the maintainer. So currently we actually expect the maintainer to do it himself. But of course there can still be uploads done by non-maintainers; normally it's the maintainer. And as you can see, it even mentions that volatile is not backports. So things that arrive in volatile are actually not backports, and vice versa. And one of the most important things for volatile is that it's for data that is volatile, data that changes a lot. The only things accepted in volatile are that data itself and packages that make sure it keeps working with the volatile data. Then if you look at the policy for backports, it's actually quite an easy policy. You have to make sure you're aware of all the issues you have with the backports, so you should subscribe to a user list. You should make sure that what you want to have in backports is already in testing. And so backports are usually new versions of software that you want users to be able to install on their stable system. 
But of course these new versions can also introduce new regressions. So it's important that you follow up and fix the issues, especially the security issues, yourself. And the main policy is that a package should first enter testing before it can enter backports. And one of the things the FTP masters of backports don't like is when you upload just a rebuild of the package in testing into backports. There are quite some issues with not allowing that, but that's the current policy. So users are actually expected to use pinning for packages they want to use from testing directly, next to the backports archive. So for stable, it's only really small fixes for security issues and really critical bugs. For volatile, it's all about data that moves fast, like, for instance, definitions for antivirus and things like that. Or, for instance, time zones: things that change a lot, but data. And of course also the related packages, to make sure it all still works. And backports is actually anything with a newer version. But of course it already has to be in testing, and if it's exactly the same as in testing, it's not allowed. And then for completeness I also included the one for security. For security it's a bit different, in the sense that if an issue is not public yet, you want to make sure that it doesn't get leaked. So the first thing you do for any security issue is normally contact the security team, and they tell you what you should do: if you should make it public, if it's already public, if you should send a patch to the bug report, if you should prepare an upload or someone is already preparing one, things like that. So as you can see, the most important thing is to first talk to the security team and only then upload, when they agree, of course. Sometimes the security team thinks some security issues are not fit for the security archive. That can have many reasons. 
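The pinning setup the speaker refers to could look roughly like this; the mirror URLs, suite names, and priorities below are illustrative assumptions, not an exact configuration from the talk:

```
# /etc/apt/sources.list (illustrative): stable, backports, and testing side by side
deb http://ftp.debian.org/debian stable main
deb http://www.backports.org/debian lenny-backports main
deb http://ftp.debian.org/debian testing main

# /etc/apt/preferences (illustrative): keep stable as the default;
# testing only supplies packages the user explicitly asks for
Package: *
Pin: release a=stable
Pin-Priority: 900

Package: *
Pin: release a=testing
Pin-Priority: 200
```

With testing pinned below 500, apt stays on the stable version of everything, but a user can still pull a single package from testing with `apt-get -t testing install foo`.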
But one of the reasons is sometimes that they lack manpower and they think the issue is not critical enough. In that case they advise to contact the release team to see if it's still fit for a point release. So the thing I really want to see is that, instead of four different policies, we get a joint policy for all the suites and archives, so it's really clear for any user and any developer in what suite or archive a package should reside and where the user should find the updated software. So my idea was to ask the audience, maybe even some people on IRC, if they have ideas how to make that happen. So if you have suggestions, please shoot. No one? I don't know. Sorry if this is a silly question. I'm reading on your screen that it is recommended to send only the .diff.gz and the .dsc files. How do you do that? No debs? You mean on the screen? On that screen, yes. That line. Is that very new? That's for the security archive. The thing with the security archive is that the security team wants to be really sure that the binary packages are built by machines of Debian itself or by themselves, and not by a maintainer, because it can be that the environment is not clean enough, things like that. They want to be really sure that the source corresponds with the binaries. You would provide it by putting the files somewhere on people.debian.org, for example, not by actually uploading them the normal way. The normal procedure is that you contact the security team and then they'll tell you how they would like to proceed. In the normal case it's indeed that you send the patch or the source package to them. But sometimes they also ask you to upload. As there don't seem to be any suggestions yet, I'll show you a bit more about one of the issues. So in volatile you have, for instance, clamav and clamav-data. But, for instance, also pidgin. So why is pidgin in volatile at the moment? Pidgin doesn't have any volatile data. 
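Producing just the .dsc and the .diff.gz to hand over, as described above, is an ordinary source-only build; a sketch using the standard dpkg-buildpackage options (the package directory name is made up):

```
cd mypackage-1.2.3          # unpacked, patched source tree (hypothetical name)
dpkg-buildpackage -S -sa    # -S: build the source package only, no binaries
                            # -sa: also include the original upstream tarball
# Result: mypackage_1.2.3-1.dsc, mypackage_1.2.3-1.diff.gz, and a
# source-only .changes file, ready to pass to the security team.
```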
Well, not as far as I know. But the thing is, some packages rely on APIs that are outside of Debian. There are APIs for protocols or APIs for web applications. Or they scrape some web pages and interpret themselves what the content should be. Or you have something like flashplugin-nonfree that downloads the binaries; so at install time it downloads the binaries to install them. These kinds of packages, of course, if something changes in the API or the binary provided by upstream, are not installable anymore in stable, or they don't work anymore and lose functionality. And then the question is, of course, where do we fix it? Where do we want the users and developers to look for the fixed package to be able to continue using it? And for pidgin, if I'm not mistaken, it was the Yahoo API that changed, and we decided to fix it in volatile. One of the things we want to try, when we move volatile to FTP master proper, so merging the archives but still having a separate suite on FTP master, is to make sure that you can first upload to volatile, and when everything looks okay in volatile, everything works, everything is stable, we consider moving it to some point release. So that's already one of the things we are considering. So, for instance, flashplugin-nonfree was not included in the lenny release, because we thought the upstream binary changed too often to make it sensible to include it in stable; probably half of the time or more it would just not be installable. If we had a mechanism where packages like that could end up in volatile or something like that, and made sure that we sync from time to time to stable proper, that would probably be a better solution. But sometimes that's not enough, because volatile is only meant for volatile data. You can interpret an API as data, but sometimes you need so many changes to comply with the API that it's not really sensible to even include it in volatile. 
There was meant to be a volatile-sloppy for that, but volatile-sloppy at the moment is not used at all. So maybe if we can make a joint policy that is way more clear for developers, maybe we can solve that too: that all packages with fixes for stable can end up in volatile, volatile-sloppy, or a point release. And then of course we still have backports. I think it's not a good idea to exclude packages that are the same as in testing from entering backports, and the reason is that it's even more complicated for the user, because he has to use apt pinning with backports, volatile, volatile-sloppy, security, and stable all in one sources.list to be able to use it all. So then he even has to use testing as well, with pinning. What do you think can be an improvement in that regard? Is this all? Okay. One of the things I always thought, because I usually run stable and I do local backports for myself, which most users can't do, was that an official way to do backports is probably one of the main things we should focus on. Some way of unifying them, probably unifying backports and volatile and these kinds of things, and actually saying, okay, we officially back this up. I know it will be soon, but I know when I first started using Debian quite some time ago, I was really reluctant to use backports because it was clearly stated that this is not an official resource. And it turned out, after Debian 7 it was clear to me that, well, it's not official, but it's backed by Debian developers. So in a sort of way it's official, but it's not clearly stated that Debian actually supports this. Is this done now? Okay. It will soon be backports.debian.org and as such official; we already had the talks, it's just a matter of doing the work and getting it over. There is another thing that is missing, and that is the bug tracking system. It currently doesn't know anything at all about backports or even volatile versions. 
So if a bug reporter has a volatile version, the BTS doesn't know where to fit it in. That's basically due to the way the BTS gets the version information: the other archives, which aren't under FTP master, don't send the version information over. As soon as volatile is in the main archive, volatile version information will be known. And I think we can manage to get the BTS to also know backports.debian.org versions; we just have to push a set of files we are generating with every upload. From my point of view, security is not an issue for a user. It's something that is very well controlled, and I don't think it should be visible for users that they use security. I mean, it's there, and security fixes will get pushed to it automatically; from my point of view as a user, security is not an issue. If backports becomes official and will be officially supported, then I think keeping volatile for data and backports for programs that need lots of changes due to changed APIs or something like that should be good enough, and we'd have only two categories of extra things on top of stable. You said that you were thinking about some kind of, what do you mean by a joint policy? Applying the same rules, or a document which explains everything in only one document, or what do you mean? I want to come to one document that explains all the suites you can upload to, so that it's clear for a developer, and then also one document that explains to the user where he can find the software he's looking for. I want to come back to the issue with respect to security support. We managed to get backports supported in the security tracker, so it's currently also possible to follow which issues are still outstanding. It's still lacking a bit of manpower to get it done. I usually go over the list and ping the maintainers, but it's just me currently following up on that, and that is possibly one of the things that really needs to get done. 
I'm trying to bring up a service that will help with that, but if you want to help me get security support for backports into a better shape, please come to me. Also I want to point out that my feeling, my personal feeling and that of some of the users I've talked to, is that for stable, although it's officially supported, I always had the feeling that there aren't that many people actually supporting it. So Debian developers mostly start to work when some package of their own isn't working in unstable, but once stable is out the door, most of them just don't care about stable anymore. And I think we should probably take that into account and try to figure out a way how we can manage to motivate developers to at least try to be more supportive of stable and add more manpower to that. The reason why I'm doing local backports is that there aren't any official backports, and the reason why some applications end up being backported by myself in my own archive is just because I want a newer version. Or maybe, I think you get the point, maybe you should just try to motivate people to add more support to stable, actually, just motivate it. I personally use stable, as I said, mostly because I think it's what the users usually use, even on desktops, but after six months I found it really hard to actually still stick with stable, because we don't have, let's say, desktop applications and things which evolve really fast in stable. It's not that developers should support stable more in that case: what you want is basically new packages, new versions in stable, and that's by policy not done, except for a few exceptions at the point-and-a-half release. What developers should support more is backports, and we try to get that supported more by making it backports.debian.org, making it more known to developers, and probably easier to upload to. 
It will always be an archive that has its own policy and rules to follow, but maybe people like you who are building their own archive should not do that, but go to backports and do it there. It's not that hard to upload to backports instead. I don't have official Debian developer status currently, so. Well, there's an NM process for that; that can be solved. Yeah, I know, but that's the current situation. Well, even for backports you can find sponsors. He was complaining that the actual policy of backports has a rule that you shouldn't upload packages which can just be installed from testing. And for etch backports there was a situation where we got packages uploaded which didn't have the need for that. I see that it's easy for users to just throw the apt line in and install it. But there ended up being packages which were uploaded just once and then nobody took care of them. So if you lower that entrance rule and make it official, maybe you have much more work for the people doing security work. If that is the security team, from, I don't know, testing or stable, there will be much more work, and you won't end up with better quality of the packages. So I think if it's official, people guess it's supported by some security team, and that may be a wrong illusion for the users, maybe. Well, if it were easier to migrate packages from testing to backports, that wouldn't be an issue, because we have testing security support. Rafael, who isn't here right now, mentioned on IRC that the DDPO mails, since the last two runs, do support and mention RC bugs in stable, so the maintainers are also aware about that right now. I just wanted to say that I think it's the wrong impression that the security team supports official packages in stable or potentially in backports. It is more of a collaboration between the package maintainer and the security team. And if the package maintainer is not participating in that security work, then unfortunately the security team is stuck with the work. 
So it's more the package maintainer that needs to be supporting the security rather than the security team being stuck with it. And I think developers need to make sure they are aware of that before they put a package into stable or potentially into backports. Yeah, I fully agree. We should make sure that maintainers really take care of their package, and not only in unstable but in all suites where it enters. The problem is that the packages in backports are not in every case uploaded by the package maintainer. Sometimes it's uploaded by someone else, and so the maintainer may not be aware that his package ended up in backports, so he doesn't take care of it. And if the guy who uploaded it to backports doesn't care about any security stuff or something like that, it's in a state of being clearly unmaintained. Also the changelog doesn't get back to the original package, so it's kind of lost when the packages are uploaded to backports. I don't get that. The changelog of a backported package doesn't get back to the original package. Which is logical, because not a single upload in the other suites is based on the backports one. But the thing of course is, when you do a backport, you should contact the maintainer first and get his agreement before you upload. One solution potentially to that problem is, if backports were an official, officially supported archive, and you were as a package maintainer doing your job and supporting the security of that package in stable as well as uploading to unstable, maybe developers would also be responsible to also do a backport and maintain it. That might be more than what people would like to take on. Oh no, if you're the pidgin package maintainer, you are responsible to maintain the security through the life of stable as well as uploading to unstable, but then in this case also doing and uploading a backport to backports. 
There are often enough people who upload packages to backports and don't care about updating the package there anymore. This is most of the problem we have with security and backports. If the people uploading packages to backports would track the package better and not just upload it once, so they have their pet feature in backports and don't care for it after that anymore, that would definitely enhance and improve the situation. Making it official and having better integration with the BTS will probably already solve a part of it. So, hi, I'm a little confused. I'd like to take a step back here and sort of understand the goal behind having a consistent policy for all of the above. What I'm really hearing consistently across all these messages is that these various distributions serve different purposes with different levels of support. I mean, I think one of the things people like about backports is that you can get something backported even if the maintainer doesn't want to do it and isn't interested in supporting it. That's something that some of the users of backports want. One of the things that people like about volatile is that, you know, it has less review than stable updates. One of the things that people like about security is that, you know, it happens in private; I mean, basically, there are some things that go into security that are confidential and you don't know about them until they get published. And one of the things that people like about stable is that it's really conservative. I mean, I guess it might be nice to have all these policies written down in a single place. But I'm having a hard time understanding how you could have a consistent policy, because you don't have consistent goals. Well, I was not talking about a consistent policy, but rather about having a joint document specifying what the rules are, so that people don't keep coming to the release team asking about volatile and about backports. 
And so that they know where they should go and how they should proceed. Does anyone disagree with that? I don't know. I mean, I guess my question is, if no one disagrees with that, what makes it harder than getting the right people, you know, the maintainers of backports, security, stable, and volatile, in a room with a wiki? True. But the thing is also that some of the policies that are there now are a bit outdated. They are not always complete. For instance, the rule about not uploading packages unchanged from testing to backports is not mentioned. So I think the policies should be complete, they should be up to date, and they should be in one place, so everyone can find them easily, and everyone can also far more easily see what's outdated and what needs to change. How do I decide where to upload, or where should I find a package? From what I've heard up to now, I'm still confused whether we all agree on the goals of the different distributions and the support levels. And I think this is the first thing we should agree on, and then we should have a policy which is written down somewhere. So I don't know whether we have reached this already, that we have agreed on what we want. And yes, maybe some others can say what really is the state, whether we have an agreement overall in the room, maybe. Well, I do think that all the suites have a different target, and so in that regard they are all different. But I think the support you expect in all of them is almost the same. You expect, in any suite, that the packages are maintained, that when there is a security issue it gets fixed, maybe not as fast as in the usual archives, but you expect that a package is maintained, that it doesn't break, that it keeps working, and that it doesn't have any security issues. And one of the things I also want to mention is that currently there are some package maintainers considering setting up yet another archive. 
And the reason being that many of the web applications are architecture-independent packages; they wouldn't need any change to end up in backports. But as that is currently not allowed, and they really want to make it really easy for their user base to install them, they are thinking about setting up another archive. As if it's easier to run a whole archive, with all the crap that comes with it, instead of just telling the users to add two lines to their config file instead of one. Well, the thing is, if you need to explain pinning, you can keep on explaining pinning all the time. There's also an explanation of pinning that is ready to use for backports.org. For backports, yes, but if you want to use testing because you want the backport at once, you do want the testing version then. You just have a pinning configuration for all users that are using testing; I don't see that much of a problem in doing it. Also, maybe just talk to the backports maintainers about getting the policy adapted for such kinds of web applications, and then show some activity in maintaining those packages too, because that's one reason why they currently aren't allowed into the archive. I might propose then a somewhat radical solution to this, which would be to create a buildd for backports that would just automatically backport any package uploaded into unstable. And if it failed to build the backport from source, a mail would be sent to the package uploader, who could then try to fix the failed backporting. As someone who has uploaded a lot of backports to backports.org, I very rarely have to actually make any source changes to backport, even for architecture-dependent rather than architecture-all packages. You just make a changelog entry saying I backported it from testing, and you build it. And the only problem with such a buildd would be that the backport would not be tested by somebody. 
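The manual procedure just described, a changelog entry plus a rebuild, is essentially what `dch --bpo` from devscripts automates; a rough sketch, with the package name and version made up:

```
apt-get source mypackage/testing    # fetch the source as it currently is in testing
cd mypackage-1.2.3
dch --bpo                           # add a changelog entry with a "~bpo" version
                                    # suffix targeting the backports suite
dpkg-buildpackage -us -uc           # rebuild in a stable build environment
```

The `~bpo` suffix matters because in Debian version ordering `1.2.3-1~bpo50+1` sorts below `1.2.3-1`, so the real package automatically supersedes the backport once it reaches stable.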
But this might be a way to get backports for quite a lot of packages very easily, and for backports that failed, the proper email could be sent to the maintainer, who could then work on fixing it. I only have one amendment to what Micah said. He said when somebody uploads to unstable, but the policy for backports is to backport packages from testing. So when some package transitions from unstable to testing is probably the best time to actually try to do the automatic backport. About automatic backports, it's still good to do it right when you upload to unstable, because then you have the gap time before the testing transition if the automatic build fails. So the automatic check should be done with the upload to unstable directly, so you have time to work on it before it hits testing. The problem with this web app stuff is that the FTP master of backports doesn't like to support the PHP stuff, and I can completely understand that. But there's also the problem that when users use pinning, they often have a problem with it, and maybe they end up with a system they didn't want. And so the goal to get it into backports competes with the problem of really huge crappy stuff ending up in the backports archive. So if we can solve this problem, we can maybe drop the rule to not upload packages which can be installed from testing without any changes. Maybe that would be a solution, rather than setting up another suite. I like the idea of having packages automatically built for backports, but what happens with changes to those backported packages, like changed build dependencies, for backports also? If a new version gets into testing or unstable, this must be handled somehow, I don't know. Well, if it has changed dependencies or build dependencies, it's just a matter of making sure that these dependencies or build dependencies are also in backports, if it makes sense, of course. Yes, of course, if it makes sense. 
If we try to upload to backports every package which needs some dependencies which are only available in unstable, we probably end up duplicating unstable to some degree. I don't think that's what we actually want. And on the other hand, the automatic backports would probably work better, although, as was said before, it is a good idea to test early. On the other hand, it's probably some wasted power, because many of the versions uploaded to unstable never transition to testing. So I don't think it's a problem for people using backports that the version which landed in testing arrives, let's say, one, two, or three days later than it landed in testing itself. Usually it currently takes longer for backports to actually provide that version. So if it's a week because it's automatically done, it's not such an issue. Right now, some of the time you don't have the version at all, so it's actually even better than what we have today. You don't have to have an ideal situation from the get-go. That's my opinion, at least. I'm not sure that having the automatic ones end up in the proper backports archive is actually a good idea, because many of the backports are meant to be installed on their own, not together with all the other packages. So if you want to use a backport, normally you use one package from backports. If you are going to just have them all automatically built, it doesn't always make sense, because not all of the features really need to be backported. You still want a stable system where you use one newer version of one package, or maybe a couple, but not many. So I for myself dislike automatic backports too, but in the end that's a decision for the backports maintainers to make. For the specific web apps case you mentioned, I would say please go to FTP master first and talk about it. 
We currently plan to develop a feature to have multiple suites with various policies, maintained by different people, where FTP master is just supporting the archive and the policy of what goes in and what goes out is done by other people; and web apps might be just a suite within that framework. It wouldn't be exported to the ftp.debian.org main mirrors, because you don't want to have much more load on those main mirrors. But as we integrate volatile, we need a different policy for the volatile archive; as we integrate backports somehow, we need to export it; and in some case we will get data.debian.org and have to export that. And it wouldn't be too hard to just run another suite, keep the whole work of running a complete archive away from other people, and just let them do the policy on what gets in and what gets out. I would like to also point out that, since I think most of us probably agree that the usual use case for backports is that you just pick up one or two packages for which you are interested in having newer versions, or which have, let's say, features you are interested in, and you mostly run stable, we should try to establish, if we want it, having apt modified so that by default stable always has priority over backports packages, even though you would have, let's say, backports sources and stable sources and the version in backports is higher. I don't know which way is better: do you want to have this built into apt, or do you want to have pinning? You don't need to build that into apt, because that's set up in the backports archive: it's NotAutomatic: yes. So it will never select the backports version unless you tell it to. It's one line in the Release file that says NotAutomatic: yes, and with that apt knows: I don't take any version from that archive unless the user tells me to. So, any other comments or questions or suggestions? 
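The NotAutomatic mechanism described above lives in the archive's Release file; apt then assigns those versions pin priority 1, so they are never chosen over an installed or stable version unless explicitly requested. An illustrative excerpt (field values are assumptions about the backports archive):

```
# Release file of the backports suite (excerpt, illustrative):
Origin: Backports.org archive
Suite: lenny-backports
NotAutomatic: yes
```

With that in place, a plain `apt-get install foo` stays on stable, while `apt-get -t lenny-backports install foo` pulls the backport explicitly.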
I think the one thing that people keep bringing up is what it is that a package maintainer should be responsible for when they upload something into the archive, and we need to come to some agreement and make that agreement published in some way. I would propose that if you are uploading a package into the archive and it passes NEW, you are committing to supporting that package, both in bugs and new versions in unstable, and through the lifetime of stable for security support, which is how it is now, and I hope everybody understands that when they are uploading new packages. But then the amendment would be to also support a backported version of your package, and if we could get agreement on that, then we could move forward in a much nicer, integrated way. Hi, there is no way I am willing to support backported versions of my package that I didn't backport; that just doesn't make sense to me as a maintainer. There are some versions of my packages that I know aren't stable, that are works in progress, and I am very careful to make sure they would never get into a stable release. And if someone were to go stick those in backports, I would be, well, I mean, that's up to them, but don't expect me to help clean up the pieces. If I stick it in backports, that's an entirely different situation, and yes, of course I am willing to support it. But the thing is, if you make sure that a version doesn't hit a stable release, you probably also make sure that it doesn't hit testing, because that's the staging area for the next stable release, and that's the area from which backports are taken. And it probably makes more sense to think of it as supporting the next stable release: if you don't think a package is appropriate for backports, it's probably not appropriate for the next stable release either, so if you keep it out of testing in the first place, that's a good thing. But sometimes, even if your package is not really ready for a stable 
release, you want to migrate it to testing for transition reasons, to not block a hundred other packages, so in that case it may not be a good idea to backport it. I think there are just simple, trivial technical ways around this problem, like filing a bug against your package to keep it out of backports, or tagging it with something. I mean, there are easy ways to make sure that, if we were going to go down this route, your package didn't automatically get put into backports if you thought it was inappropriate; there are multiple ways that could be implemented, I think. But you maybe have the same problem with backports: if you keep your package out of backports, you're also blocking packages which need to be updated in backports, so there's the same problem. Okay, I think it's time now to conclude, because, well, they're telling me that it's time. So I think we can further discuss these things on the mailing lists of backports, of volatile, or even release. So thank you all for your cooperation, and let's hope that we get better support for all of these suites and archives.