However, we have Martin Zobel-Helas, Peter Palfrader, and Stephen Gregg. So, hello everyone, and welcome to our first ever DSA talk, or communication attempt of this kind. We have almost the entire DSA team here; only a little is missing, probably somewhere nearby. So, starting up another session. First, a few formal matters: we would like to thank some people very much for their years of service. There's Phil Hands, Ryan Murray, George Holtze, and James Troup. They've all done a great job over the years, but now the new team has taken over.

I'm guessing most of you know what DSA does, but here it is very quickly. We look after the infrastructure for the various teams in Debian. Part of this is making sure they have the software they need on the machines, and, where necessary, making sure they have dedicated machines and resources. We run various services for them, things like email and file transfer. We look after the infrastructure around the buildd network and all that stuff. We look after a pretty reasonable amount of hardware, something like a hundred machines spread across 30-odd locations. And we do all the usual routine work of security upgrades.

We use a few tools to make our job easier. At the moment we're using Puppet for configuration management, although suggestions for something better wouldn't go amiss. Most of our configuration and tooling is kept in version control, mostly in git, and the bits that aren't there are usually mirrored, so you can clone whatever you want and play with it. We're trying to keep everything that isn't essentially secret public, so that people don't have to ask for it. We use ud-ldap, which is our very own special user management tool; it drives all the account handling.
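ud-ldap itself is a purpose-built Debian tool, but the general pattern behind it, looking up accounts in an LDAP directory, can be sketched roughly like this. The object class, attribute name, and example uid here are generic illustrations, not Debian's actual schema.

```python
# Hypothetical sketch of an LDAP account lookup filter, the kind of
# query a ud-ldap-style tool issues. RFC 4515 reserves a few characters
# in filter values, so they must be escaped before interpolation.

def escape_ldap_value(value: str) -> str:
    """Escape the characters RFC 4515 reserves in filter values."""
    specials = {'\\': r'\5c', '*': r'\2a', '(': r'\28', ')': r'\29', '\0': r'\00'}
    return ''.join(specials.get(ch, ch) for ch in value)

def user_filter(uid: str) -> str:
    """Build a filter matching a single account entry by uid."""
    return f"(&(objectClass=inetOrgPerson)(uid={escape_ldap_value(uid)}))"

print(user_filter("jrandom"))      # (&(objectClass=inetOrgPerson)(uid=jrandom))
print(user_filter("evil)(uid=*"))  # parens and wildcard come out escaped
```

The escaping step matters: without it, a crafted uid could change the meaning of the filter (the LDAP analogue of SQL injection).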
We use Munin for monitoring and statistical data, and we use an RT instance to track requests from the community and that sort of thing; that's where you can contact us. We also just want to talk to you people today and hopefully get some idea of what you want. Just very simply, where is the RT? rt.debian.org. So, this is the open part of the session: are there any questions you want to ask DSA? What can we do for you?

I was wondering, there used to be a lot of statistics for the websites that were published. It appears that the web statistics graphs don't seem to be published anymore. Is that by design, or did they just stop being published?

I honestly don't know what happened to the website statistics, but if you think that kind of thing is useful, we can probably set the web stats up again. It might be more difficult now, since the website is distributed over four machines or so and does GeoDNS, so depending on where you are you get a different server. But if people want statistics, we can help set them up.

What's Puppet and how does it work? It's a distributed configuration management tool written in Ruby. Yeah, Ruby is not great. It does things like transferring files over XMLRPC over HTTPS, so it's a bit cheesy, it's slow, and in the case of large files it goes wrong.

We have our very own special DSA backup tool, da-backup, which was written a few years back. It's basically a wrapper around rsync doing various magic things and hopefully also backing up our data. We are considering looking for something else... Yeah, it's not all that horrible, and I'm not sure there are any other backup tools that would work for our kind of environment, because we have systems all over the world and we can't really do a full backup of all of them every week or every month. So using rsync is probably the sane way to do it. But if anybody of you knows a tool that is...
No, maybe not. I don't know of one either. We are trying to get away from home-grown software, so if anybody has suggestions, please come and talk to us and tell us why your software is the thing we should use.

Can you look again at the rsync-based products? Not anything vendor-specific or a very expensive commercial product; it's the rsync-based ones I'm talking about, the kind of thing that is very similar to da-backup. We did look; functionality-wise they're basically similar, and ours already takes care of the packaging.

My question would be about coordination with the teams in Debian that have their own infrastructure. In our project we have something like five teams, and we have been able to coordinate things with the different teams in Debian. For instance, we are using sitesummary, a tool I wrote at the university, to keep track of machines, and it's been used to generate Nagios and Munin configuration automatically. Have you had a look at similar systems?

Sort of. Next on our list, now that we have at least some sort of centralized management with Puppet, is to move towards auto-generating Nagios configuration: configuration standards for the services we know the machines are running, and that sort of thing. But if there are already tools out there that do it better, we're happy to look at them.

Has DSA considered moving away from password-based logins to key-only logins? Or possibly using lots of Debian funds to buy every developer a shiny token or a smartcard or something? I've written the Puppet class to turn off password-based logins, and I was sort of waiting for this talk to announce it before I pushed it. So, yeah, probably in the next day or so password-based logins are most likely going to go away on debian.org machines.
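A minimal sketch of the sort of check a configuration-management run could apply when enforcing key-only logins: verify that an sshd_config actually disables password authentication. This is an illustration, not DSA's real Puppet class; the parsing is simplified (first occurrence wins, as in OpenSSH, and Match blocks are ignored).

```python
# Toy audit: does this sshd_config allow password logins?
# OpenSSH takes the first occurrence of a keyword and defaults
# PasswordAuthentication to "yes" when the directive is absent.

def password_auth_enabled(sshd_config_text: str) -> bool:
    for line in sshd_config_text.splitlines():
        line = line.strip()
        if not line or line.startswith('#'):
            continue                       # skip blanks and comments
        parts = line.split(None, 1)
        if len(parts) == 2 and parts[0].lower() == 'passwordauthentication':
            return parts[1].strip().lower() == 'yes'
    return True  # directive absent: OpenSSH default applies

good = "PermitRootLogin no\nPasswordAuthentication no\n"
bad = "#PasswordAuthentication no\n"      # commented out, so default wins
print(password_auth_enabled(good))  # False
print(password_auth_enabled(bad))   # True
```

In practice a Puppet class would simply manage the file and restart sshd, but a read-only audit like this is handy for checking the fleet before flipping the switch.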
And have you thought about how you will cope with people who need to move one set of data from one Debian machine to another? Yes, we actually already have a very ad-hoc system right now: if you need to move files from ravel to merkel, we can set up on merkel a secondary authorized_keys file for ravel. It works. We're looking at a patch to ud-ldap so you can set that sort of thing up yourself and have it deployed to merkel. That would be really helpful. Of course, if you just want to do something interactively, that won't help you. So maybe the answer there is to forward your agent, but have it configured so that it always asks for confirmation whenever something uses a key.

So, now that passwords are not going to be used for logins, are there plans for removing them from other services? Not on my part. Unfortunately, you can still use your LDAP password to log into the LDAP web interface. If I can work out a way around that, I'd like to stop the LDAP password being usable for that too. Of course, if you have a service that needs password-based authentication for something else, we could probably add an attribute to LDAP so that people can set an additional password per service. We already have that for some services, and extending it to other things is possible.

As far as web services go, do we have an OpenID provider available? No. Will we? Not from me. Would you mind if someone set one up? I'm not sure I like the security practices, but maybe, so... If you could somehow ensure that it's never used for anything really important, then maybe. But these things tend to creep into ever more important things, so... hopefully not. What we did consider at one point was setting up a Kerberos realm for debian.org, and maybe we could use Kerberos authentication to log into web services, but I have recently seen how much that sucks, and it's not workable today.
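The "secondary authorized_keys file" idea above relies on the restriction options OpenSSH supports in authorized_keys entries. Here is an illustrative sketch of composing such a line; the hostnames, path, and key material are invented, and the rsync server arguments are only an example of a forced command.

```python
# Compose a restricted authorized_keys line: a key that is accepted
# only from one source host and can only run one forced command.
# These option names (from=, command=, no-pty, ...) are standard
# OpenSSH authorized_keys options.

def restricted_key_line(source_host: str, command: str, pubkey: str) -> str:
    options = [
        f'from="{source_host}"',     # only accept this key from that host
        f'command="{command}"',      # force this command, ignore the client's
        'no-agent-forwarding',
        'no-port-forwarding',
        'no-pty',
    ]
    return ','.join(options) + ' ' + pubkey

line = restricted_key_line(
    'ravel.example.org',
    '/usr/bin/rsync --server --sender . /srv/data/',
    'ssh-rsa AAAAB3Nza...fake... sync@ravel',
)
print(line)
```

The forced command means that even if the key leaks, it can only trigger the one transfer it was set up for, which is what makes per-purpose keys safer than forwarding a general-purpose agent.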
How is that supposed to work with LDAP? Well, if people really want to set up passwords for other services, we could route that through LDAP, but having something that doesn't require that amount of work for every service would actually be very nice, if there is something that can give us that. Maybe somebody else can answer that one.

How complex are the dependencies between the systems you run? What's the most complex dependency in the deployment? I remember the other day you were saying something about mail routing. That's more annoying than complicated; we have a lot of things like that. Probably the thing that has the biggest potential to go wrong is all the mirror stuff on merkel, because so much depends on it working. Yeah, the archive mirror, the database mirrors.

Okay, for somebody new to this, do you have a diagram of your network so we can actually see what the hell is going on? Well, we have what is in Nagios. Nagios has the dependencies: this host is a parent for that host, so that gives you at least a coarse network layout. Everybody can get to nagios.debian.org, and if it asks for a password, use dsa-guest (with a dash) as the username and any password of your choice. You could also use no password at all, but then Firefox will not remember it.

So this is... well, it doesn't fit completely on the screen, but that's what we have. At the center is spohr, our Nagios machine, and then we have the various routers that each represent one location. For instance, at the bottom right you see gw-manda, which is the gateway at a site in Germany, and then we have several machines behind that. It's as good as the information we have in graphical form, but it doesn't really show any service dependencies or that kind of thing. Yeah, service dependencies we usually find out about when something breaks.
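The parent relationships described above are enough to answer simple topology questions. A toy version of what the Nagios "parents" data gives you, with invented stand-in host names (only spohr is borrowed from the talk):

```python
# From host -> parent links, list everything that sits behind a
# given gateway, the same information the Nagios map draws.
from collections import defaultdict

parents = {            # host: its Nagios parent
    'gw-site-a': 'spohr',
    'host1': 'gw-site-a',
    'host2': 'gw-site-a',
    'gw-site-b': 'spohr',
    'host3': 'gw-site-b',
}

children = defaultdict(list)
for host, parent in parents.items():
    children[parent].append(host)

def behind(gateway):
    """All hosts reachable only through this gateway, depth-first."""
    result = []
    for child in sorted(children[gateway]):
        result.append(child)
        result.extend(behind(child))
    return result

print(behind('gw-site-a'))  # ['host1', 'host2']
print(behind('spohr'))      # every host in the map
```

This is also exactly why Nagios wants the parent data: when gw-site-a stops responding, hosts behind it can be flagged as unreachable rather than down.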
Would, therefore, mapping the system be a good idea? Would mapping out what's running where be a good start? Well, we do have some information about what is running where, because Nagios keeps track of various services. It's just that we don't have any service dependency information anywhere, and I'm not sure mapping that is worth the effort. It might be. If somebody wants to do it, please feel free; the Nagios config is in git, so just pull it and work it out.

Wait for the mic. Well, actually, what is being asked for is more than the Nagios configuration. It would be documenting our applications: saying, we operate this application, say the FTP service, which has the following dependencies, the following interlinks, and offers the following interfaces. If someone wants to do that, it's very tedious work, I know that. Yes. It would be very valuable for Debian, of course. The big problem is that doing it once is interesting; doing it continuously is what makes it useful. Doing it once is hard; keeping it up continuously is very, very hard unless it's well automated. And most of the services are actually run by various teams, not by us ourselves. Good luck with that.

Does that guest login work elsewhere too? It works; the same authentication works for Munin as well.

What we would be interested in is: what would you like to have changed about the current mail setup for the debian.org domain? We had some internal discussions recently about whether we might restrict sending mail to only certain hosts. When we moved gluck.debian.org to other hosting recently, we had the problem that we were essentially running a service for four or five Debian developers who were hard-depending on it, which produced more work for the DSA team than it was probably worth. Stephen is doing most of the mail stuff; perhaps he can say a bit more about that.
The particular thing we ran across with gluck was that we provide a method of spooling your mail on a Debian machine and picking it up later, called BSMTP, where exim will write out a file that you can then pipe into your MTA later. There are exactly four users of this service, and they all had it pointed at gluck, so when we were getting ready to turn gluck off, we asked what seemed like a sensible question: does anybody actually use this anymore? Could we just stop offering it? And it was a warm discussion, so we're still offering the service.

But we have quite a lot of other bits in our mail system that feel like we're doing an awful lot of processing of mail, and a reasonable amount of admin overhead maintaining these services, for fewer than ten users each. We have some fairly gross hackery around qmail-style .forward files and various other small things like this. One of the things we want to do at some point is decouple @debian.org and @master.debian.org, because master may not always be there. It might be simpler to move to a front-end MX, but we can't do that sort of thing with the current mail features we support. So I'd like to have some discussion, today or later by email, whatever, about how much people rely on this. Could we move that information into LDAP? Could we do something else for you? I don't know. I'd just like to put that in people's minds.

So, just a wishlist feature request from me: I would like to make it so that external boxes, i.e. boxes that aren't .debian.org, cannot send mail to certain role addresses at debian.org, because the only people that should be sending there are, to be frank, FTP master. And I'd say about 25% of the spam that I receive has come through Debian machines. Can you open an RT ticket?

Actually, you have spoken about a lot of, how should I say, legacy services on debian.org machines. I would really think it's fine if you say it's too much work, say which services those are, and tell people:
we will continue to provide such services for the next year or whatever, and after that, be prepared to lose them. This qmail stuff was already deprecated when I joined Debian, and that is a few years ago now. So I really think we can just shut it off, with an appropriate warning period. The same goes for a lot of other things. You would think that. Yes, I will. Some buildd maintainers are currently using it, and about half the services that do whatever.debian.org mail use these features. So for the services it should be easy, because you can just say, well, we're going to fix it slowly. I know that you have a very good record of fixing things slowly but then really doing them right. So that could be done for the services at least; there are only a few individual users left beyond them. I think MIA makes heavy use of this feature, and MIA is not just us.

Also, about the mail setup and incoming spam: the ftp-master box, which was heavily spammed in the past, in the last month had only about 5% of spam getting through. So that setup is a really big advantage over what we had before. We can set up mostly everything we want to have there, and the only wish I came away with from going through the mail setup is to make the RBL handling easier; it should not be as split up as it currently is. I had to set every RBL on my own, and every Debian developer has to do the same. It should basically be a checkbox, yes or no.

Do you have some statistics? This is something I'd actually like to bring up. Right now, every time we turn on some spam filtering feature, you know, RBLs, or reverse DNS checks, or ClamAV, or whatever, somebody complains that we've eaten a mail that looked like spam but that he really wanted. So we add some new knob in ud-ldap for you to say yes, no, maybe so, but only with these RBLs, and so on. Is that actually...
This room of 30, 40 people isn't going to give me a definitive answer on whether that's a useful thing, but I don't think it's a very useful or very scalable way to maintain our anti-spam infrastructure. We have several dozen people with RBLs listed in LDAP that we query every time they get an email, RBLs that went away several years ago. When I notice this, I try to remove them from LDAP manually, but this sort of thing doesn't scale very well, and I'd prefer to go to a simple: yes, I'd like Debian to scan my mail, no, I wouldn't; yes, I'd like RBLs, no, I wouldn't. I'd like to simplify it, but again, this is probably something people feel strongly about.

So the thing that I feel strongly about is that you guys ought to be able to make those sorts of decisions, and to constrain the amount of labor that DSA has to put into things that you think are of marginal utility. Part of this, I think, is that we get wound around the axle sometimes over the notion that part of being a Debian developer is that you get a debian.org email address that works, and I think somehow people translate that from a useful additional identity on their list of email identities into somehow being the thing that matters to them most in the world. And, frankly, at the end of the day, if it's directly related to their process of doing work in Debian that they're asking for some specific feature, I think that's something we ought to give a fair and open hearing, and consider whether there's a good or better way to accomplish that particular need. But I personally have very little tolerance for the notion that people push their personal life needs into this expectation that somehow Debian and its volunteer administration team are responsible for making their lives work perfectly.
I realize I'm something of an exception in the world because I have never used my debian.org email address in the maintainer field of any of my packages and things like this, so I'm probably about as far toward the other end of the continuum of concern about these issues as anyone could be. But I certainly would encourage you guys to look at things like this. There's clearly a distinction between existing cruft in the system that can stay there statically and doesn't really make more work for you, and things that really cause you to do more work on a regular basis. And for that latter category, if nothing else, maybe this is where you identify a list of potential projects that you would encourage other people to work on, maybe as a way of working their way into being able to help your team on a longer-term basis. The qmail-format .forward dependency stuff is an example where calling for volunteers to investigate whether there is a different, better, more maintainable way to accomplish the same functionality for the future would be a worthwhile project to identify. It doesn't necessarily have to be something that you guys do all the work on. So I would encourage you to think in that sort of way, and to not be afraid to tell us when something is just too much of a pain for volunteer admins to have to deal with. Some polite peer pressure within the project would help various folks decide that, yes, now's a good time to figure out a different way to work. Personally, I'm certainly happy to help with that.

Maybe you have seen the recent mails Peter sent to the debian-project list with the open task lists of what we are currently working on. If someone finds the time to say, hey, I would like to help here, and just picks one of the items on those task lists, that would really, really be appreciated by the current DSA team.
A small comment in answer to your spam filtering question: I don't think you need to be as fine-grained as you are currently. I would like to have three options: yes, I trust Debian to throw away my spam; yes, I trust Debian to flag my spam; and, I don't want Debian to touch my email at all. If I have those three, I'll go with the flagging one, I expect some people will go with the throw-away one, and I expect some people won't want any filtering at all. I don't really care about the details if you do flagging; just do whatever you want and flag it. If you want to reject emails, I think that's a better way to do it than to accept them and throw them away. So if you can get rid of spam by not accepting it at all, I'm fine with that. If you actually accept it, then you should send it to me. Yeah.

What we also see quite a lot is that the Debian mail servers accept email for various people in the project and forward it to their private mail accounts, and then that machine rejects the mail coming from master.debian.org, which makes quite a mess. One thing that could help is to tell all Debian developers to just whitelist all debian.org machines. We already tried that. I know most do, but there are still some left that have issues with whitelisting. The other thing that might help is to have SSL certificates on all the mail servers used for outgoing mail. Not a common certificate, but certificates from a common authority, so I can say: I trust this authority to send only good mail. We currently have two different authorities across the outgoing mail servers. Or we could move to a hub structure for email and have only one or two outgoing mail servers; that's something we discussed recently at our last meeting.

If we moved to that, will I receive an email every time I bounce a bit of spam, saying I've bounced one out of 270 messages? That's a listmaster setting.
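The RBL (DNS blocklist) checks discussed a little earlier are mechanically simple: the mail server reverses the connecting client's IPv4 octets and looks that name up inside the list's DNS zone. A sketch, assuming a well-known public zone purely as an example:

```python
# Sketch of a DNSBL lookup: reverse the client IP's octets, append
# the blocklist zone, and resolve. Any A record means "listed";
# NXDOMAIN means "not listed".
import socket

def dnsbl_query_name(ip: str, zone: str = 'zen.spamhaus.org') -> str:
    octets = ip.split('.')
    return '.'.join(reversed(octets)) + '.' + zone

def is_listed(ip: str, zone: str = 'zen.spamhaus.org') -> bool:
    try:
        socket.gethostbyname(dnsbl_query_name(ip, zone))
        return True
    except socket.gaierror:      # NXDOMAIN or lookup failure
        return False

print(dnsbl_query_name('192.0.2.99'))  # 99.2.0.192.zen.spamhaus.org
```

The per-user configuration being debated in the talk amounts to deciding, per recipient, which zones to consult and whether a hit rejects the mail at SMTP time or merely tags it.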
That's not a DSA thing; it's the listmasters' territory. It's just a warning that you might get kicked off the list if you bounce more spam back to lists.debian.org. Thank you, Bdale.

Sorry, it may seem quite surprising, but some of us might be in a situation where our primary mail host is one that we can't, for example, stop from doing header validity checks on mails coming from lists.debian.org, because it's part of the general system setup, or we're not necessarily in control of it. I personally had a lot of difficulty with the Debian lists in that, for a while (I don't know if they do now), they weren't doing syntax checking on incoming headers, and they weren't doing sender callout verification, which meant that lists then tried to pass mail on to the server that was receiving my mail, and my server went: I'm sorry, I'm not even going to think about sending you a 354, give me some content. Well, I think there's some overlap between the teams here, and this is an interesting conversation, but it's not precisely DSA's problem, so I'm going to say: can we have a chat after? But I think it still has to be said: even if you notice something wrong in mail you get from a debian.org host, you don't make things better by bouncing checks back at the debian.org hosts, under no circumstances.

About forwarding: I mean, I understand how email works, and about queuing and all of that, but why couldn't we... we're not hosting mail, are we? No, we are not. Or are there some people we do host mail for? For four people.
Okay, okay, so we aren't, or maybe we shouldn't be; maybe we should be a forwarding-only provider. I don't know if exim supports it, but why couldn't the mail be forwarded with the connection to the sending host kept open, so that if there was a rejection, you reject the mail on the receiving side of the Debian machine? I'm not sure I'm expressing myself clearly, but... No, I know what you're talking about. I'm not sure exim supports it without a little hacking around, but there are time limits on these sorts of things, and multiple recipients make it kind of a mess, so I'd like to not go that way.

Okay, as a non-Debian mail user, this strikes me strongly as: it's time for a policy document on what you want to do. Put it out to everybody, and I'd say just do it.

One small suggestion: the University of Oslo has a mechanism I haven't seen anyone else using for rejecting email. Basically, the connecting side will not get a reply within the first few seconds; the specification says you have to wait for the greeting message, and if you don't, they disconnect. And every time the sending party makes a typo in the communication protocol, there is a time delay, and the delay is doubled with each protocol error. This actually got rid of 90% of all spam delivered to the University of Oslo, yet all well-behaved mail servers get their email delivered properly.

Yeah, what you're talking about is something like teergrubing, based on some variety of checks, and we already do quite a lot of these checks, particularly for master. Delaying doesn't make sense for us, because there are a lot more bad guys out there than there are legitimate senders. master normally has about 100 open SMTP connections at any given moment, and we don't do any delaying at all right now; if we added delays, master would conceivably have several thousand open SMTP connections all the time, and the botnet wins, unfortunately. On the subject of rough mail statistics across debian.org the other day: I
just did the last 10 days of mail logs, and we accepted about a million and a half mails, forwarded on two and a half million, rejected about four million across all the machines, and temp-failed, I can't remember exactly, another three or four million. Those statistics are really, really skewed, because things like the buildds never actually reject any mail; they barely do any mail to speak of. But something like master.debian.org will do 30 to 40 thousand legitimate (by legitimate I mean accepted) emails a day and reject a million and a half. master is pretty much running flat out saying no thank you.

I suspect your assumption is incorrect. I don't think the number of connections would increase, because most spammers actually disconnect within the three seconds; they don't take the time to wait for a long connection to finish the conversation. Let's carry this on later; I don't know that we need to talk about exact specifics right now, but it's an interesting conversation, so please let's have it.

I have a problem with the mail I receive from the lists. On my debian.org address, from time to time I receive a mail from the list server saying: we have received bounces from you. Apparently there is some kind of content filtering at master, right? Is there a way to opt out of such content filtering on a per-user basis? We have a couple of things here. The first is: we are not an ISP; if you want to receive mail... I would like to add something. I think you should maybe coordinate with the listmaster team. We are already doing that, because I am a member of the listmaster team as well. What we are currently doing is getting the forward addresses that are set in ud-ldap exported to the list server, and not sending the mail back to master and then on to the actual recipient, but instead mapping all your debian.org accounts to the real forwarding address and sending the mail from lists.debian.org directly to your forwarded
account. The solution would be for my mail not to be received on master at all; that would be the solution. No: if you get a mail from lists, it won't go through master at all, but directly to you. Well, it will if his forwarding address is username@master, because he insists on using procmail on master. I am using procmail on master. So maybe the answer is either for lists to not actually forward the spam (we can easily filter it out on master, so lists could do it as well), or: don't do procmail on master; please just get your email sent to you directly from lists. Well, yeah, I appreciate that people have complicated mail setups that have taken them 20 years to perfect, but at the end of the day we are not an ISP. We make no guarantee about not getting unsubscribed from a mailing list. My primary motivation is to keep the load on master below 50.

So, apart from email (and email is a huge part of what we, or what Stephen, is doing), we also provide lots of things for various teams. How would people prefer to get services from us? Currently what people do is send an email saying "please can we have..." and we say yes, but maybe several people aren't aware of that. Should we do anything to make setting up a new whatever.debian.org web service easier? What would people like there?

I am not answering your question, but: what is the status of snapshot.debian.org? Waiting for one machine to get shipped to an institute in the UK. After that, well, the code I currently have is running on stabile.debian.org, and every developer has an account there, so you can actually go and look at what's there right now. At the moment the only means of accessing the snapshot stuff is through a FUSE filesystem, so you can just cd into it and see whatever is there. There currently is no web interface to it, but maybe we'll have time to write one here at DebConf, or otherwise at some other point. There is no particular
issue; it's just that there isn't one yet, and we need to import some data from snapshot.debian.net as soon as we receive it. Currently the snapshot on stabile has all of the debian, debian-security, and debian-volatile archives, backports.org, and debian-archive, since January this year. It would be really, really nice to import all the data from snapshot.debian.net, which I think runs from 2004 until some point last year, but I've sent several mails to the snapshot.debian.net person and never got a reply. If somebody knows him, please give him a... so we can get the data.

How can you import stuff that is no longer on ftp-master from ftp-master, from the morgue? Well, we can get the packages; what we can't always know is when they were added. We can, because we have the database snapshots. We don't; we do expire database snapshots. We cannot get everything back from ftp-master. We do have the sources going back to 2000 or something, but we will have a problem getting them into the right date and suite and whatever. Importing all those packages is somewhere on the list, as soon as ftp-master sets up the morgue properly.

What's the name of the host it's currently running on, again? stabile.

One issue that I found: when I'm trying to work on a particular port, it takes a while for the build dependencies to get installed on the porter boxes. Is there any way that could be sped up, or could maintainers install the build dependencies themselves, or something like that? Well, actually, it usually takes less than 24 hours to get the build dependencies installed. There were some talks within the DSA team about having the porters do that work. Unfortunately for me, 24 hours later I'm in a meeting or I'm doing something else, which is the issue. I'm certainly not trying to apportion any blame onto DSA or anyone for this; it just may be a useful service if this could be streamlined. It might be useful to give everybody sudo for apt in the chroot. I'd have to think about the implications of that. Yeah, maybe only
install. So, indeed, maybe you have already thought about security reasons which inhibit this, but is it possible to have something like cowbuilder or some such? Maybe just building a .dsc, not logging in or anything like that? Yeah, that might be. We'll have to have a chat and see if we can think of anything that's wrong with it, but in principle something like that seems plausible. I'm not really sure I like the security properties of that, because you basically get root on the machine, and I can't think of any way to make it have less impact.

We've had other discussions in the past, though, about the possibility of doing something like developer-initiated package builds through the autobuilding system: if you're working on a porting problem, being able to have an interface that says, I'd like this .dsc and the associated stuff to be sent to this architecture's autobuilder, with the results sent back to me instead of to the normal autobuilder maintainer, since I don't want to bother him; I just need some autobuilder to try building this in the normal build environment, let me see what happens, and get the log and so forth. Ideally you'd like to get back the live build tree, so that you can go triage temp files and all those sorts of things, but even just being able to get the verbose log back from an autobuilder directly, without bothering the buildd admin, would be a really interesting way to think about finessing this.

What we discussed recently is that we might set up some sort of batch server you can upload a package to, and it then starts rebuilding the whole archive against that library. We've got the machine, or, I think we have enough capacity on the machines; well, we have enough CPU, we don't have enough power to power the CPUs.

Okay, different question: earlier you mentioned you've got 30 different sites, 34, 35. Have you got any plans to consolidate,
because that sounds like one or two too many?

Well, actually there are too many, but we also want them to be a little bit distributed, so that if one data center gets switched off one day, not all of the Debian infrastructure is down. So maybe three locations would be perfect, but 35 certainly is way too many. If anybody wants to host a couple of racks of Debian machines, come find us. So, um, yes, we do need power and network, please.

To answer your question, Peter: I think maybe a mail to debian-devel, maybe even periodically sending mails like "bits from the DSA team": we have these new services, we made these changes, and if you want anything new, please contact us.

We're doing that to debian-project at the moment, which seems to be the more appropriate address for that, at least from my point of view.

Yeah, I really like that; I already said that. But that was more of an internal work list, something like an announcement of changes that we've made.

Whether you want to make an internal work list or an announcement email, please do. This whole communication thing is kind of new for DSA.

Well, you can simply add a few lines to Developer News on the wiki; everybody can edit it, and just putting in the link to the debian-project mail is, I think, enough. And, well, I just want to use the opportunity to really thank you, because I've been following what you did since elmo added me to the request tracker, and you have done an awesome job over the years. Really big kudos to you.

Thank you. Also, as for communication channels, Tobu recently set up a DSA blog, which currently has one or even two items. It's basically just short notices that we are working on stuff; one of them is the geo-domains setup, and what was the other one? Was there another one? Yes? No? Yes. And it's even syndicated to Planet Debian or so, as you probably already know.

Anyway, what we also were discussing is how to get new members into our team. It's quite hard, because as soon as a person is a real DSA member, he has root access on all machines, so taking on new team members is also a matter of trust within Debian. There's no real trainee process for that, so you either need to trust, or you can't.

Maybe, if anybody has any good sysadmin mentoring programs that you're working with, that work really well for you, come have a chat, because it's very difficult to go from accepting patches for the Apache config to giving out sudo on 90 machines, and I have a little mental block on how to do that in a good way.

One thing we did at the University of Tromsø was to provide basically a menu for junior admins to run specific sudo commands. They would be able to list and kill user processes, they would be able to fix print queues, that kind of thing: day-to-day work, where they could only do those things we had specified in the menu and in the sudoers file. But still, they were able to prove themselves that way.

That was not really my main question, though. We use RT at the university quite a lot, and we have extended it to accept commands in the emails, so we can actually handle all the requests by email. Have you considered, or do you plan, to do the same?

Not really, but if you could send us a patch, please do, or help us set it up.

It's an RT module from Best Practical; actually, we paid them to take our patch and make it an official one, so it's an official extension to RT.

And it's accepting commands as signed email, or how does it work?

You put the commands as the first section, with "Command: value"; you can do "Status: resolved", for example.

Is there any kind of authentication?
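To make the "Command: value" convention concrete, here is a small sketch of what such a command-carrying reply might look like, in the spirit of the official command-by-mail RT extension just described. The addresses and ticket number are invented for illustration; the exact set of accepted commands depends on the extension's configuration.

```python
from email.message import EmailMessage

# Sketch: a reply to an RT ticket whose body starts with a command
# block.  Command-by-mail style extensions read "Command: value"
# lines at the top of the body; the normal reply text follows
# after a blank line.  All names here are made up for illustration.
msg = EmailMessage()
msg["From"] = "admin@example.org"
msg["To"] = "rt@example.org"
msg["Subject"] = "[example.org #1234] porterbox build-deps"

msg.set_content(
    "Status: resolved\n"
    "Owner: admin\n"
    "\n"
    "Build dependencies installed, closing the ticket.\n"
)

print(msg.as_string())
```

Combined with the GPG-signature check mentioned below, only mails signed by a known key would have their command block acted upon.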
We have never had people who actually wanted to do ticket handling on their own without authorization. At the university we have 5,000 employees and 20,000 students, so I don't think that's a real problem. But you can configure RT to only accept commands from GPG-signed emails, for example. I think that's a waste of time, personally, but if you really, really want that, sure, go ahead.

What we have also established recently is that certain groups may, for example, restart Apache with their new configuration. We just check that the Apache config is valid and meets certain constraints, for example that it doesn't add a new virtual host, and so on.

On this build-dependency problem that was brought up earlier: I think 24 hours is an honorable amount of time to install build dependencies, and I appreciate all the buildd maintainers who have managed that kind of turnaround. But I do understand that when you're sitting down to work on a problem and then have to stop because you don't have a package installed, it is very frustrating. So has there been any thought about something like a developer-initiated, temporary, time-delimited virtual environment that someone could have?

Do you know of any good virtualization things that work on all of our architectures?

What are your criteria for the word "good"?

Having root access in it does not mean having root on the system itself.

Isn't that the case for most of the containerized, virtualized environments?

If you think a chroot is virtualization, then no; otherwise, maybe. I'm not aware of anything that works across all of our architectures except chroots.

Yeah, I guess the Linux-VServer project is the only one that works on most architectures.

Which one?

The Linux-VServer project.

Also, we probably don't have the disk space on all of our porting machines.

Well, a minimal chroot is pretty small.

We are running out of space with the existing chroots as soon as people don't delete their gcc builds.

Yeah, that's why I was saying time-dependent: like four days, and then it gets deleted automatically, or something.

Maybe, to just go back slightly, because time warps are fun: you said on one of your slides that Puppet wasn't really doing it for you, that it was functioning, but that you perhaps felt slightly dirty every time you had to touch it. Do you actually have a functional spec that someone could take and go and evaluate other things with, to perhaps find something else for you to use?

Pretty nearly, yeah: conditional inclusion of various categories of configuration management depending on some host-specific data; templating of files that need to be on every machine but must differ depending on host-dependent data; and the ability to run a post-update command for files that were updated. That's really all we need. That's exactly what Puppet does; it just doesn't do it that well.

I just wanted to add, on the build-dependency issue, that I think 24 hours is quite good, and I think this is only solvable in some automatic way, because there will never be a turnaround time that is really useful for people currently working on a problem and needing it right then.

You're absolutely right. This is something that I've been thinking about, frankly, for a very long time. I helped set up, I guess, the autobuilding stuff for either the second or the third architecture that we added support for, and the challenge has always been that trying to apply technological solutions like user-initiated virtualized clients and so forth is generally only possible on the architectures for which we need it the least. The architectures that have broad support for these sorts of new technologies are also the architectures for which many of us have access to a physical machine. And the challenge has always been how you chase down a porting problem on a machine that's sort of in the margins of, you know, whether it
should still be part of our stable release process or not. In those sorts of situations — this is why I've personally been driven over time, as I mentioned earlier, to think more in terms of how we can do something like developer-initiated packages passing through our autobuilding system, which is something we already have, you know, sort of a project dependency on; it's something that DSA and the porting teams already have to take care of and maintain. As for the hack, if you will, of having physical machines of a given architecture available for developers to log into and do interactive work on: there are certainly cases for which that's the only way to solve the problem, and in those cases I would agree that if the admin team can routinely meet a 24-hour response time for helping to set up the right environment for those sorts of weird, last-ditch-effort activities, that's great. But, on the other side, for the more routine business of tracking problems down, 24 hours is utterly unacceptable, because very few of us have bursts of availability that are that predictable in advance, and if you ask for something and then have to wait a day before you can take any further action, chances are good that you're off on some entirely different problem, perhaps one that your wife suggested was more important. So this is why I think there has to be a balance, and why I think it would be very interesting to pursue the notion of other paths through the autobuilding system as a way around this.

But there could also be a way to do it securely: to have some script, runnable via sudo, which is very limited, where you can only specify packages from trusted archives, which it then installs into some specified chroot, or something like that.

Can we discuss that after the talk?
We are running a little bit out of time, so I just want to make one final remark on contacting us. There are two lists where the Debian admin team can be reached, and it's always a bit confusing where to send which email. I wonder why? debian-admin at lists.debian.org also includes all the local admins; some of them are porters, so if you need a package installed on one of the porter machines, send your mail there. The only reason to contact DSA via debian-admin at debian.org is if you have confidential data which should only go to a very closed group; other than that, just send it to debian-admin at lists.debian.org. There are, I think, some 18 or 20 people on that list, which is much more than just the DSA team. And there is #debian-admin on IRC; if you want to contact us, do it there, but don't just hang around in that IRC channel, because if it gets too crowded, we can't discuss DSA-internal matters there.

Thank you all for coming, thank you for being a friendly audience, and thank you for your suggestions. We're here all week; let's have a chat if you have something you want to talk about.