It's good. Okay. Hello. I'm Tollef. I'm part of the DSA team. With me today we have Zobel, who's also a DSA member. And we're here to talk a little bit about what DSA does. If you have questions about anything, we'll be happy to try to answer them. We'll try to keep this as some sort of round table, because we want a discussion with you; this is not going to be a lecture. So whenever you have questions, just ask and we will try to answer. Okay. The DSA team currently consists of seven people. I think? Seven? Yeah. Most of us are in Europe, but we also have Luca, who's in Canada. Apart from that, paravoid is on holiday, and yeah, various other people. The duty we have as the Debian System Administrators is basically to build and maintain the infrastructure you are all using for running our distribution. It's the general sysadmin stuff: installing security updates, keeping machines up to date, keeping the hardware running, creating accounts for you, running DNS and mail. One thing we actually don't do: we generally just provide the base, the OS support. We don't run most of the services. So for lists.debian.org, we're not the people you want to talk to if there is some problem with spam handling; then you want to talk to this guy, who's also part of the listmaster team. Similarly bugs.debian.org, or www.debian.org, the web pages: we make sure that Apache is running, but if you find typos on the web pages, don't blame us. I don't know how many machines we run these days; I think it's something around 150 to 160 machines in total. No, if you count the VMs, it's more like 250. We run those machines currently at about 30 locations worldwide. Also part of our duty is to deal with the hosters and the local admins.
So if they have firewalls running in front of our machines, we try to convince them to disable the firewalling for our machines, so we can manage that stuff ourselves. This is often interesting. We have some locations where the machines are NATed, and this breaks, for example, secure NTP. So there are various places where we have to make accommodations, because it's hard to get the hardware to another place; maybe it's boards for an architecture which is being bootstrapped. So in some cases we kind of have to endure a little bit of pain. But most hosters and most local admins are really nice people, really easy to deal with and very, very accommodating. And I mean, we don't pay for any of this; it's all sponsored and given to us free of charge, so we're quite lucky. It differs from location to location: we currently have locations where we have a full rack which we can populate with hardware, and there are other locations where we just have one or two machines sitting there doing their jobs for us. Keep in mind that none of us seven are paid to do the sysadmin job; we are all doing this in our volunteer time. So if you speak up on IRC, sometimes you will not get a reaction within five minutes. But I think that's mostly clear to all of you. Since we have so many machines, we like automation. We run Puppet everywhere. It was chosen some time ago, and it generally does the right thing and generally works okay. This often makes for some interesting problems when bootstrapping, because apparently Ruby is really, really awesome to bootstrap, right Steve? Especially on ARM. We also like Git. We have the entire public repositories in Git, our domains are in Git, our wiki is in Git, everything. Basically, if something can be put into Git — you probably don't want to do it to a database, but anything else — put it in Git. We have some sort of account management tool which we are currently rewriting.
It's called userdir-ldap, or ud-ldap. Luca has done quite a lot of work on the rewrite; I think it already handles the generating part, which is rolled out to the debian.org machines. All the other parts of ud-ldap are still using the old code base, which is, well, ugly to read — it reads like Bash written in Python. So if you have spare time and knowledge of Python, help us finish the rewrite. The new ud is a Django project, so it's fairly nice and well written. What ud-ldap actually does is: it has a local LDAP server which runs on the machine called draghi, which is db.debian.org, and from there it generates static files which are synced out to all machines. So even though we're using LDAP for account information, we don't have a single point of failure. If that machine goes down, it means you can't update your password or your SSH keys, but you can still log in in various places. And it also works around network issues between machines: as long as ssh between draghi and, say, a porting machine works, you can log into machines. We monitor our machines using Munin and, nowadays, Icinga. We had some performance issues with Munin, but with the wheezy version, I think it was, and some other fixes, Munin works quite well for us. Generally, if web pages like Icinga or Munin ask for a password, the user is just dsa-guest with either no password or just a well-known password. It's just to protect our services, so that script kiddies, or whoever wants to see what effect their script is having on the Debian services, can't see the results directly — but everyone who knows how the Debian systems work can get access. It's also so we don't end up with spiders walking around, because Munin's web interface generates the graphs on the fly using RRDtool, and that can consume great amounts of CPU power, and web spiders are really, really good at wasting CPU power for us.
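The "generate static files from LDAP" design described above can be sketched roughly like this. This is a toy illustration only: the entries, attribute names, and host name below are made up and do not reflect the real userdir-ldap schema or output format.

```python
# Toy sketch of the userdir-ldap idea: account data lives in one central
# directory, and static passwd-style files are generated per host and
# synced out, so machines keep working even when the LDAP server is down.
# Entries and attribute names here are invented for illustration.

def generate_passwd(entries, host):
    """Render passwd-style lines for accounts allowed on `host`."""
    lines = []
    for e in sorted(entries, key=lambda e: e["uidNumber"]):
        if host not in e.get("allowedHost", []):
            continue  # this account is not exported to this machine
        lines.append(
            "{uid}:x:{uidNumber}:{gidNumber}:{gecos}:{home}:{shell}".format(**e)
        )
    return "\n".join(lines) + "\n"

entries = [
    {"uid": "alice", "uidNumber": 1000, "gidNumber": 1000,
     "gecos": "Alice", "home": "/home/alice", "shell": "/bin/bash",
     "allowedHost": ["draghi.debian.org"]},
    {"uid": "bob", "uidNumber": 1001, "gidNumber": 1001,
     "gecos": "Bob", "home": "/home/bob", "shell": "/bin/zsh",
     "allowedHost": []},
]

print(generate_passwd(entries, "draghi.debian.org"))
```

The point of the design is visible even in the toy: the generated files are plain data, so syncing them out decouples login from LDAP availability.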
So we kind of want to keep the spiders off those pages. To track the issues we currently have — hardware failures, accounts we need to create, and so on — we use Request Tracker on rt.debian.org, which some other teams use as well. You can either mail it or use the web interface; for Debian developers, I think the web interface is read-only. Well, you can interact with Request Tracker by mail, of course. If you need to send something there, send it to rt.debian.org and make sure to include "Debian RT" in the subject, because otherwise it will just be thrown away as spam. It's a really efficient spam filter. It's slightly annoying when you submit your first ticket. The last talk we gave about the DSA team was, I think, two years ago, so we tried to summarize what we've done in the last two years. Since — when was that meeting in Oslo, three years ago? Three years ago — we decided that we want at least the infrastructure hardware, not the porting hardware, on machines that are under warranty, so we can open a ticket at HP, IBM or whoever and ask them to send replacement parts when hardware breaks. We use server-grade hardware; currently most of the machines are HP machines — DL380, DL360, DL580. They work quite well, and I think we're mostly done with that transition. And it turns out that having actual servers, rather than something somebody threw together, put under a desk and then forgot about, actually makes for less pain and more uptime. We've tried to consolidate the number of data centers we have core services running in. So currently we have three to five data centers where we have quite a lot of services running: Man-da, Bytemark, GRNET still a little bit, OSUOSL, UBC. We also have some other places with fewer machines, but since it's often painful to have a single machine in a location, we try to avoid that. So it's kind of a trade-off.
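The RT spam gate just mentioned boils down to a one-line filter. A toy sketch; the real filter's exact matching rules (token form, case handling) may well differ from the case-insensitive check assumed here:

```python
def accept_rt_mail(subject):
    """Toy version of the rt.debian.org spam gate: drop everything
    whose subject lacks the magic "Debian RT" token.
    Case-insensitivity here is an assumption, not the documented rule."""
    return "debian rt" in subject.lower()

print(accept_rt_mail("[Debian RT] please create an account"))  # True
print(accept_rt_mail("Re: buy cheap watches"))                 # False
```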
You want to have enough locations that you have redundancy, but you don't want to have so many that you basically have two machines everywhere, and each time there is a problem you have to deal with somebody you haven't spoken to in two years, because that was when the last problem occurred. For the core services we are currently using Ganeti for virtualization, which is a KVM-based virtualization framework. It's a cluster manager which came out of Google and which works really well. It's targeted at clusters from 1 to 50 machines, and it's free software, of course. It works very well for us. A target I've been working on in the last few months is a single sign-on framework for web applications, thankfully together with Enrico, who helped quite a lot with that. We rewrote the ugly code I had written into a Python Django project, so we hope to be able to provide single sign-on also for non-debian.org web services, which, with the current software we use for debian.org, didn't work out so well for security reasons. Let's see where we stand in two years with single sign-on on the web stuff. We had a problem earlier this year where the backup server we had would die, and then die, and then die, with various problems: it claimed to have hard drive errors, but it looked more like controller errors, and so on. Obviously running without backups isn't a terribly good idea, so we bootstrapped another backup server, but it was running in the Bytemark data center, and because we have many other services hosted there, that's not a very good situation — if something happens at that data center and it burns down, then we've suddenly lost both the backups and the services being backed up. So two months ago we got a new machine; it's hosted at DG-i in Düsseldorf, and it's happily chugging along making backups. We're currently using Bacula for the backups, and it's working okay.
We're having some interesting problems with the scheduling of backups, so we're probably going to need to do some fixes there. Luca is doing the userdir-ldap rewrite, as mentioned earlier — we could use helping hands there. I think Paul and Peter are working on the snapshot infrastructure, especially the QA integration for snapshot. We had a donation from Leaseweb earlier this year. Similar to the backup service, it turns out that with file systems, or servers, when they grow big enough you end up with lots of disks dying, and Linux isn't terribly good at handling it when enough of your disks die. So we had one machine which died — with a controller failure, again — and we tried to revive it, which wasn't really successful, so we ended up getting this donation from Leaseweb, and we now have a small cluster of machines in their data center. The snapshot archive is currently, I think, 23 terabytes or something like that in size, which is the biggest archive we currently maintain. We've been trying to roll out SSL everywhere. It's something we've wanted to do for a while: to enable HTTPS and so on everywhere, even on public and open resources. It wasn't really triggered by the Snowden revelations, but it was around the same time, and it was like, yeah, we should probably actually move forward on it, because it turns out there are entirely too many people who like to listen in entirely too much. So we're pushing for more SSL everywhere. There was a little bit of controversy around this when we did it to people.debian.org, because it turns out that wget in d-i, I think, had some problems with verifying the certificates and so on. So it was not a completely uncontroversial and smooth move, but sometimes you need to make a little bit of a sacrifice to actually get the security we want.
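The wget-in-d-i trouble was about certificate verification, and that is the crux of "SSL everywhere": encryption without verification gives you none of the authentication. In Python terms, the contrast looks like this (a generic illustration, not anything from the d-i or wget code):

```python
import ssl

# What we want everywhere: the default context verifies the server's
# certificate chain AND that the certificate matches the hostname.
strict = ssl.create_default_context()
print(strict.check_hostname)                    # True
print(strict.verify_mode == ssl.CERT_REQUIRED)  # True

# What a broken or lazy client effectively does: still encrypted,
# but it will happily talk to anyone, including a man in the middle.
lax = ssl.create_default_context()
lax.check_hostname = False        # must be disabled before CERT_NONE
lax.verify_mode = ssl.CERT_NONE   # accept any certificate at all
```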
Related to that, we've also pushed some bits towards using CDNs, which are also interesting in the context of SSL, because you have to give your certificates to somebody else — so there is a trade-off there; you have to trust your provider. What we are also currently doing: thanks to a very large donation from Bytemark about one and a half years ago — they gave us a full blade center and, I think, six MSAs. Three or four — well, three chassis plus three extra disk shelves, I think. So we currently still have some spare CPU cycles left at Bytemark, and I'm currently setting up OpenStack in the Bytemark data center on one or two blades. The idea is that in the end, Debian developers can start VMs there themselves, similar to the VMs we are using for our infrastructure, so we can more easily migrate debian.net services to debian.org services — giving you the same sort of common infrastructure we use in Debian, so you can help us migrate services, or we can help you migrate services, from your hardware to the Debian infrastructure hardware. Part of the reason for that is that it turns out running various half-official services on people's home machines and colo machines and so on isn't a terribly good idea, because often they'll run for years, and then somebody will get bored, or they'll quit Debian, or they'll go broke, or the machine will burn down, or something else will happen — the services disappear and people get upset. So we try to take any services which are kind of half-official, and we would rather move them onto debian.org hardware. So if you have a service which is kind of a half-official thing, and you want to make it more official and actually have somebody do the base OS maintenance for you, so you don't have to worry about that, then please come and talk to us. We're quite happy to provide you with reasonable VMs. Yeah. How to contact us: well, there are several mailing lists.
There is the debian-admin@lists.debian.org mailing list, where we decided, I think last year, that the list will more or less be open to every Debian developer, so Debian developers can subscribe to that mailing list as well. There's the dsa@debian.org email address, which we changed to because there used to be a debian-admin@debian.org email address, and there was quite a lot of confusion between debian-admin@lists.debian.org and debian-admin@debian.org. So we decided to move to a new alias, which is dsa@debian.org. We hang around on IRC, as mentioned earlier, in the #debian-admin channel, so feel free to join there; if you have any issues, just raise them and talk to us. Like any team in Debian, we obviously have more things to do than we actually have time for, so help is very much appreciated. Getting help with sysadmin tasks is kind of an interesting challenge, because you can't just give out root on all the Debian machines to somebody who shows up and goes "I would like to rewrite your authentication infrastructure." However, since we keep the public repositories and so on in Git, it's at least possible for people to get in and contribute: send us patches, show up, discuss things. If you think something can be improved, that's quite likely, and we would be happy to discuss how to do it. Documentation is always welcome — there is a bit of documentation for things like db.debian.org and so on, but more is always welcome. Also, just hanging out on IRC answering people's questions is often really useful. I'd also really like to grow the team beyond the seven people we currently have. A few months ago I spoke to a Debian developer — he might be here in the room — who said that he currently does not want to become a member of the DSA team, because he has too many other duties in Debian.
So just talk to us and help us, and at some point we'll probably get annoyed with doing so many tasks for you and just give out root access. That's how it usually works in Debian, right? At some point you have contributed enough that it's more annoying to merge and review your patches than to just give you access, so that happens. Well, I think that's all for the slides — just ask questions. I guess this is more DSA-adjacent, but the listmaster pieces — are those in Puppet as well? No, that stuff is not in Puppet. The Exim config we are using on the debian.org machines is in Puppet, but lists uses Postfix. I don't know if Alex Wirt is sitting here in the lecture room; he could easily answer your questions about lists.debian.org. More questions? So, as one of the local admins for a bunch of buildds, I know that every now and again we get asked for more stuff, opening up more ports, because we're one of those evil places with a firewall, even for the DMZ. Do you actually have a central list of all the things that you want to be able to get access to? That kind of thing would be awesome, so that I could just point, say, the ARM network sysadmins at it, instead of every now and again having to say "oh, and we need this extra thing" and then going backwards and forwards, because their immediate response is "well, why?" If we can give them a list, and just send a notification saying there are a few new things we'd like, it might go easier. I don't think we have a list as such. What we do have is a firewall config that is default deny, so we have a list of things we want to be able to accept on the various hosts. So even though we don't have a list in the sense of "go to this web page and here you have these ports and their justification", we can generate that. So yeah, that's a good idea.
We should do something like that. Could you explain more about your backup system? You covered it very briefly. Yeah. So, do you know Bacula? Bacula is a centralized backup system using a kind of mix of push and pull. You have a central director, which tells the machines that are to be backed up that they are now going to back their things up to this storage daemon over here, and it also tells the storage daemon to please expect a connection from this machine. We run the director, which is the central component, in a VM at Bytemark; the actual storage is at DG-i, and obviously the various machines being backed up are all over. One of the painful things about Bacula is that even though we're backing up to hard drives, it still thinks we're backing up to tape drives. The nice thing about hard drives is that you generally don't have seek time in the same way you have seek time on tapes — you don't care about rewinding tapes and switching to a different tape and so on; that's called opening another file, and it doesn't take very long. We also have the problem that Bacula doesn't have the concept that, say, BackupPC has: BackupPC never does full backups, it only does incremental backups and then keeps a hardlink farm. Bacula will do a full backup, then incrementals, then a full backup, then incrementals. This makes less sense when you have hard drives than when you have tapes. And also, the scheduler isn't very smart: if it can't back up a machine for some reason, then instead of rescheduling that backup, it will — depending on how you configure it — just skip it. Some of our hosts don't actually have all that good connections.
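The scheduler limitations just described — skipped jobs and full backups batching up — are the kind of thing a small replacement scheduler could fix by giving each host a deterministic slot in the cycle. This is our own illustrative sketch, not anything Bacula ships; the host names and cycle length are examples:

```python
# Spread full backups evenly across a cycle by hashing the host name,
# so they don't all land on the same day. Illustrative only.
import hashlib

def full_backup_day(host, window_days=28):
    """Deterministically map a host to a day in the full-backup cycle."""
    digest = hashlib.sha256(host.encode("utf-8")).hexdigest()
    return int(digest, 16) % window_days

hosts = ["draghi.debian.org", "lists.debian.org", "bugs.debian.org"]
for h in hosts:
    print(h, full_backup_day(h))
```

Because the mapping is a pure function of the host name, a missed run can simply be retried rather than silently skipped, and adding a host never reshuffles the others.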
So when you're trying to do a full backup, which can take 24 hours, you really don't want that TCP stream to be disconnected, because then you've lost that full backup. It also ends up batching the full backups, so they're very clustered rather than being nicely spread out. So one of the things we're looking at is writing a different scheduler for Bacula, to basically tell it "please do a full backup of this host now" rather than relying on the built-in scheduler. Second question. So, I'm the maintainer of a package called bup. It's not a full-fledged backup system — scheduler and so on — but for its back end it uses Git pack files rather than tapes. So if you're interested in Git, that may be some interesting technology to take a look at. Last time I looked at bup, it didn't actually support expiring backups, which makes for some pain. Right — there are some workarounds, but it's one of the limitations currently. Yeah, and for us that would mean — I'm sure Seagate or Western Digital would be very happy, but I'm not sure our treasurer would be as happy. So we need the ability to expire backups, just because we don't have infinite-size hard drives, and backups are actually quite big. One of the other issues with Bacula is that currently all of the full backups run at the same time, so we run into some bandwidth limitations. It's not a huge issue, but it's annoying that all machines are doing their full backup at the same time. Any other questions? You touched earlier on single sign-on — what services are next for that? That's right, run away from the mic. I'll answer that as far as I know, but other people may have different plans. Single sign-on is currently using DACS, which I would suggest against in general, having looked deeply into it.
It probably seemed like a good idea at the time, but the Internet moved in a different direction. But DACS is still useful, because it's an Apache thing, so one can just put a directory of static files under DACS, and that can be done quite reasonably simply. At that point I want to discuss with the currently available DSAs finishing the DACS setup, putting the basic stuff in Puppet, and making a guide for deploying new stuff, so any developer that deploys Debian services can set up DACS reasonably easily. But the way I see we should go in the future is OAuth 2, which is what we are using for the conference thing, because, well, that's a bit more like a standard, it may work now, and it hopefully supports logout, which DACS does not do very well. Just a moment — unfortunately I have not studied OAuth 2, so it won't be me that does it. If any of you knows OAuth 2 and wants to sit down with me and explain it to me step by step during DebConf, then please do — I would like to migrate nm.debian.org and Debian Contributors to OAuth 2 if at all possible. But I do want to understand the protocol before I touch it. So the direction, as far as I'm concerned, will be OAuth 2. We may get stuck with DACS because it integrates with Apache, but I'm not comfortable with it, and there are too many hacky things needed to make things work as expected. My personal dream would be to at some point move to OAuth 2 and then replace DACS with just an OAuth 2 provider. Another limitation of our current setup is that it only works for the debian.org domain. Otherwise we would need to give out credentials — while there is some jurisdiction key and federation key, we would need to give out access to them to the debian.net services. That's one of the other limitations of the current setup. So probably OAuth 2 might be the way to go.
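For reference, the first leg of the OAuth 2 authorization-code flow being discussed looks like this. A minimal sketch; every URL, client id and parameter value below is hypothetical and reflects no actual Debian SSO deployment:

```python
# Build the URL the user is redirected to so they can log in and
# consent at the identity provider. Endpoint and client id are fake.
from urllib.parse import urlencode

def authorization_url(endpoint, client_id, redirect_uri, state):
    """First step of the OAuth 2 authorization-code grant."""
    params = {
        "response_type": "code",     # ask for an authorization code
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "state": state,              # CSRF token, checked on return
    }
    return endpoint + "?" + urlencode(params)

url = authorization_url("https://sso.example.org/authorize",
                        "wiki-client",
                        "https://wiki.example.org/callback",
                        "random-state-123")
print(url)
```

The provider later redirects back to `redirect_uri` with a short-lived code, which the service exchanges server-to-server for a token — which is also what makes centralized logout tractable, unlike the DACS cookie approach.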
But in the end it's up to you and the Debian developers helping to extend the single sign-on. As for new DACS services: buxy set up something that uses DACS, but I don't know exactly what it is. It's a new PTS implementation, and it just wants to know whether a person is logged in, so they can modify some news on the new PTS implementation and so on. One good thing with DACS at the moment is that login is optional, and it totally supports serving a site as-is; if one is logged in via single sign-on, then more stuff can happen on top. I think OAuth 2 is a better thing for the wiki to do. Does Moin do OAuth 2? I think I looked that up a few months ago, and I think it supports OAuth 2. DACS will give you a REMOTE_USER variable, so in theory it's easy, but if it does OAuth 2, then it's more future-proof in my opinion. Of course, the fun thing with the wiki — I'm going to touch on this — is that we've got a wiki, and obviously single sign-on would be attractive there too. But we've also currently got thousands of existing user accounts. Now, obviously for people who've already got Alioth or debian.org LDAP accounts, we will encourage people to merge and just move over to those. But for the many thousands of others who haven't, we're going to have to come up with something; I don't know what that is. Yeah, I don't have an answer for that on the spot. It's tempting to say "well, they can just get themselves an Alioth account" — some people might be upset at that answer. I guess there is also a question of how many of these accounts are actually active, rather than somebody having registered back in 2005 and not used the account since. I'd be happy to have a conversation about this during DebConf, because for Debian Contributors I require people to have an Alioth account to get credited on the site, because I don't want to have a user database in Debian Contributors. Maybe that's too strict a requirement.
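On the REMOTE_USER point from a moment ago: that "login is optional" behaviour comes down to the web server exporting REMOTE_USER when someone is authenticated and leaving it unset otherwise, with the application deciding what extra features to enable. A tiny WSGI sketch of the pattern (not real wiki or DACS code):

```python
# Minimal WSGI app showing optional authentication: the site serves
# fine anonymously, and extra features light up when the web server
# (e.g. Apache with an auth module) has set REMOTE_USER.
def app(environ, start_response):
    user = environ.get("REMOTE_USER")   # set upstream, or absent
    if user:
        body = ("Hello, %s - edit away.\n" % user).encode("utf-8")
    else:
        body = b"Read-only view for anonymous visitors.\n"
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [body]
```

The application never sees a password; trust is delegated entirely to whatever the front-end server verified, which is exactly why swapping DACS for an OAuth 2 layer would be invisible to apps written this way.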
It may be that we just document that if you do anything in Debian, you get an Alioth account. Let's talk about it separately. Yeah, in that case I think we need to have a conversation with my other hat on — the Alioth admin hat. As the person who inflicted Alioth logins on everybody for DebConf this year: I have been getting feedback that, in particular, the sign-up process for Alioth is a bit of an obstacle, so there are a few things there which I think we should talk about streamlining. As the person who decided that we were, for this year, moving away from Penta to Summit: I strongly felt that I did not want to have an authentication database. I didn't want password hashes in Summit, and so I said yes, we're going to have to figure out how to hook this up to Debian SSO. And the consequence was that we had the Debian SSO, which was only available for Debian developers, and Alioth was the other database that was out there. So, I guess, my fault — I apologize to anybody who was stressed about the rollout of that, because I didn't entirely coordinate with all the parties ahead of time. But I think it's hanging together fairly well, and we should talk sometime this week about where we go forward with that, and whether Alioth is the right authentication provider. But I think it's important that we agree that there be an authentication provider for these kinds of services, whether that lives in Alioth or somewhere else. With a flat username space — yes, which we kind of have today. Well, actually, the way OAuth providers work, you get a domain name with it, so in fact all Debian developers have two different identities. Yeah, it's a flat namespace, and Debian developers all have two they can use. More questions? You mentioned that all our hosting is sponsored by the hosts, and we get some hardware donations — at least, I think we buy some as well, don't we? Yes.
But my question isn't really about that; it's more about how much support we get. There are, well, one or two hardware manufacturers among the sponsors. How much support do we get from them for doing interesting stuff? I'm thinking, for instance: you mentioned we get fairly regular controller failures on some of our hardware, and the sponsors we've got have nice, but hard to set up, multipath things, and it seems it would be interesting for us and for them to set things up like that on the Debian infrastructure. Is that kind of thing possible? Yes, we do have that in some places, like the Bytemark setup, the UBC setup and so on. There we have a SAN, we have a bunch of machines, and it's doing either SATA or iSCSI. So yeah, we do have a bunch of that. The problem is, if you want to do data storage where you need 25 terabytes available, and you want to do that on a SAN, that's very not cheap — that's really quite expensive. So that's the reason why those machines with special storage requirements, like backups and snapshot, are basically different: they have something like five controllers each. That's why they're different in that regard. We do get a bunch of sponsorship from the hardware vendors. We usually buy HP gear, possibly because we have good experience with it — it generally works — and we also have historically good connections; they've been good about giving us hardware in the past. They're happy to sponsor Debian, both in actual terms of money given to us, but also in terms of giving us pretty nice prices. I don't think we've actually approached them about saying "could you please give us this enormously expensive piece of hardware".
It's often hard for them to give that away, because it has to come out of somebody's budget, and somehow they don't have large SANs just hidden under their desks. More questions? Criticism? Hi, I was just curious about your mail infrastructure. It doesn't look like you use DKIM or SPF or DMARC records. Do you have plans for any of that? There's been some experimentation with DomainKeys — Luca has been playing with that. There is this interesting part: we generally don't provide outgoing SMTP for random people, because that could enable abuse. Obviously you get your @debian.org address: you get incoming email, which we then forward on to somewhere, where hopefully you'll actually remember to update the forward when it expires, rather than giving us bounces. That's a big change which we forgot to mention: we are actually in the process of reworking the entire way we do mail. We have drastically reduced the number of incoming mail servers, so most mail now goes to a set of two. It will increase in the future — at some point we will open up one more mail server. Currently we have two. And then, if there is special mail routing needed, mail goes straight to the right internal host. But most hosts no longer listen for incoming mail from the Internet, which is a good thing, not only because it means we don't have to run SpamAssassin everywhere. Peter did this. DANE — D-A-N-E — for SMTP: Peter (weasel) wanted to do, or wants to do, DANE opportunistic encryption for outgoing emails. So we're experimenting with a bunch of things. What I was going to say about DomainKeys is that, because we don't provide the outgoing mail service, you need to be able to tell the infrastructure what your key is going to be, and Luca has been working on some patches to userdir-ldap to do this, so it can show up in DNS and so on. So yes, things are happening.
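Concretely, "tell the infrastructure what your key is" for DKIM means getting your public key published as a DNS TXT record under `<selector>._domainkey.<domain>`. A sketch of just the record construction; the selector, domain and key material below are fake, and this is not the userdir-ldap code:

```python
# Build the DNS name and TXT value for a DKIM public-key record,
# per the <selector>._domainkey.<domain> convention. Inputs are fake.
def dkim_txt_record(selector, domain, pubkey_b64):
    name = "%s._domainkey.%s" % (selector, domain)
    value = "v=DKIM1; k=rsa; p=%s" % pubkey_b64
    return name, value

name, value = dkim_txt_record("sel2014", "example.org", "MIGfMA0FAKEKEY")
print(name)   # sel2014._domainkey.example.org
print(value)  # v=DKIM1; k=rsa; p=MIGfMA0FAKEKEY
```

Receivers look this record up to verify signatures, which is why the key has to flow from the user into the DNS zone the infrastructure controls.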
If you're interested in that, absolutely do grab us and we can talk more about it. Okay, I think we are done, because the time is almost over. I have one more small announcement to make: Luca offered some RIPE NCC Atlas probes to give away, and the people who applied for those probes and got on the list, please come and talk to me directly after the talk, so I can hand them out, because Luca is not here at DebConf14 this year. Any plans to use YubiKeys? I'm part of the maintainer team of the YubiKey tools, and I would very much like to use them for some things. We need to find out how they would best fit into the infrastructure if we're going to do that. One thing which has been mentioned is that in some cases we want to do actual two-factor authentication; currently there is no two-factor authentication anywhere. Help us set up the infrastructure, and you might see it. So yes, there are no concrete plans, but we're very much aware of YubiKeys, and I'm kind of looking for good places to put them in, because I like them — I like both the company and the products, and they're also quite happy to sponsor free software stuff. I think we're out of time. Thank you for being here. If you have any more questions, come find us.