So I guess it's time to start — videos ready? Almost. Okay. So welcome, this very early morning, to the DSA BoF — DSA is for the Debian System Administrators team — with Faidon Liambotis. I will follow the questions on IRC if there are some; other Debian System Administrators are already on IRC, they are following. So hello, my name is Faidon Liambotis, I am part of the Debian System Administrators team. The team is Luca Filippozzi, Martin Zobel-Helas, Peter Palfrader, Stephen Gran, and, since last year, myself and Tollef — we are the two newest members; we joined the team last year after DebConf. Unfortunately, none of the others could be here and join us, although I think most of them are on IRC. I say "unfortunately" because I was the least active member of the team, but I happened to be here.

So our duty is to maintain the debian.org machines, basically — except alioth — but when we talk about DSA-administered machines, we're talking about the machines that are all of debian.org. We do general systems administration there, we do the accounts in LDAP and some core services like mail and DNS, and we provide service and support to teams within Debian, like FTP master or bugs master, or porters and buildds, and so on — multiple teams, I won't talk about all of them. We also deal with hardware faults, and we deal with our hosters and our local admins — so if we have a hardware fault in one of our many locations, more than 30, at one point it was even 50 I think, we talk with them. We use Puppet a lot, we use Git a lot. ud-ldap is our custom LDAP configuration and account management, basically. We use Munin and Nagios for monitoring, RT for our ticket tracking, and many, many, many other tools.

So, since last year's BoF, we have some news. We met up in Oslo in March. There's a mail on debian-devel-announce about that, but in case you haven't read it, I'll talk a bit about it.
So, first of all, what we discussed was the long-term infrastructure plan, also known as the five-year plan. According to that plan, all core services — that means basic services plus FTP master and web servers and so on — must be under warranty at all times. We had a situation in the past year where we had some very old, and very slow, machines that weren't under warranty; we were basically waiting for them to fail. So our plan is to be able to use Debian's funds or sponsorships to renew this hardware every now and then, keep it at most five years, and also buy warranty for those five years. We also want to use server-grade hardware, rather than individuals saying "I have this desktop PC or this old server in my basement, I don't need it anymore, let Debian have it" and then we use it for something core. We instead want to use server-grade hardware, and maybe use money donations for that if we don't have hardware donations.

Additionally, we want to centralize our core services into three to five core locations. Right now, as I said before, we have 30 locations, and this is too much; it's also problematic for communication with hosters. So what we want to do is consolidate into three to five. There are some special restrictions, like that FTP master needs to be in the U.S., so that's an exception. This also doesn't affect porter boxes and buildds, because that kind of hardware usually needs some special, different hosting — for example for ARM. But it affects all the core services. And we also want to use virtualization for that; we believe it's maturing as we speak, and it makes sense for us to give new services a new VM in one of these central locations.

Finally, we talked about users and groups. We want to do some cleanup for security reasons, like disabling unused shell accounts. There are over 50,000 shell accounts on Debian machines in total.
We would like to move on by disabling most of them if they are not used, but without preventing people from doing work, by providing an automated way — like a signed mail — for them to be reactivated. In the same direction, we also want to periodically confirm group memberships: a yearly mail, perhaps, that says "here are the groups you're in; if you don't want to be in any of them, reply to this mail" — or the other way around. The goal of these two actions is to reduce exposure, not to remove anyone's privileges or stop anyone from doing work.

Another thing that we talked about in this meeting — though there was also work done before the meeting — is the new single sign-on service. This is still work in progress, but some services use it already, like Munin or the new NM website. The goal is to have one single password — not the password that you have for db.debian.org, but a different password that you can set — and this will be used by all web services around Debian that want to somehow authenticate members of the project.

You can contact us via debian-admin@lists.debian.org — that's a list that includes local admins — or debian-admin@debian.org, which is only the core team. We hang out on IRC, on irc.debian.org, in #debian-admin. And of course, for all requests, you can file a ticket in RT.

So this is mostly a BoF, as in the description in the schedule, because we really want this to be a Q&A and get input from you, or questions, or anything else you want to ask DSA about.

Do you use SAN for storage? What kind of SAN?

We use HP MSAs at one of our sites, and local storage for some other things. The MSAs are being used to host virtual machine images, mostly, and FTP master.

How many machines have we actually got, and where are they? Some idea of how much stuff we have.

We have around 135 to 140 physical machines, plus another 40 or so virtual machines now, I think.
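The yearly group-membership confirmation described above could look something like the following sketch. This is purely illustrative — the function name, the message wording, and the example groups are all made up, not DSA's actual tooling:

```python
# Hypothetical sketch of the yearly membership-confirmation mail
# mentioned in the talk; all names and wording are illustrative.

def draft_confirmation_mail(login, groups):
    """Build the body of a periodic group-membership confirmation mail."""
    lines = [
        f"Hello {login},",
        "",
        "As part of a periodic cleanup, here are the groups your",
        "account is currently a member of:",
        "",
    ]
    # List memberships in a stable, sorted order.
    lines += [f"  - {g}" for g in sorted(groups)]
    lines += [
        "",
        "If you no longer need any of these memberships, reply to",
        "this mail listing the ones to drop. No action is needed to",
        "keep them.",
    ]
    return "\n".join(lines)

print(draft_confirmation_mail("jdoe", {"webwml", "qa"}))
```

The point of the stated design — confirm rather than expire — is that a non-reply keeps the status quo, so nobody loses privileges just by missing a mail.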
The central locations where we host machines — the three to five — are the University of British Columbia in Canada, the metropolitan area network of Darmstadt, and the Greek research and technology network. We have some machines at Oregon State University, we have some at MIT — in general, we have many locations with machines, but these are the ones with the most machines. We have some at SEAL in Austria. I'm sure I'm forgetting some.

So of those 140, are a lot of those buildds, and only a relatively small number core machines?

Yeah. This includes buildds and porter boxes, which number around the count of architectures times two or three.

You mentioned some goals in reference to centralizing the servers. What are the major issues, from the service side, with the universities and hosting providers themselves?

There are two issues here. One is that having many locations doesn't scale: you have too many people to talk with, too many issues to deal with. The other thing is the criteria that we set for choosing these locations: we really want professional hosting. We don't want people's basements or under a person's desk. We want data centers with redundant power, with air conditioning, with IPv4 and IPv6 address space and the ability to manage our reverse DNS. We want responsive remote hands — like, if we have a failure or we need a disk replaced, we can ask and get a reply soon, rather than asking repeatedly — and likewise responsive local admins. In general, we want something reliable and professional. This is not the case for many of our 30 locations, but for the ones that we chose, it is. We also want a lot of rack space, so we don't want small hosting locations; we want to have as many machines there as possible. Hosters that can give us, like, 10U, or half a rack, or maybe a rack — that's not really ideal for us. And good connectivity, obviously.

I wonder what considerations have been made about the risks.
If you have all the servers in three to five locations, it seems like losing one hoster — which seems even more likely if they're hosting more machines, if they're basically doing more for us — is more of a risk. What happens if you lose a location, and 33% of all the machines with it?

That's indeed a risk. However, as I said before, these are really reliable hosters — more reliable than most of our current hosting — so we're hoping it won't come to that. That's one thing. The other is that these are very diverse locations across the globe, and everything that's important, we're trying to have in multiple sites. Also note that this doesn't include mirrors; this is about central infrastructure, not the mirrors.

Yeah, I guess I was thinking in terms of, you know, there being changes in the organization there, and they decide to no longer donate those resources to Debian.

We can always find alternatives — we have more than three to five offers — so we can always move things. But usually we ask for some commitment before moving things into such a location, and we've got that commitment from all the organizations I talked about before.

So what is our sustainable budget for hardware? How much are we spending at the moment, and how much are you proposing we change that by, to have better kit?

I don't remember the exact figures. It's certainly more than we are currently spending, so this is a considerable amount of money that we asked the DPL for; the DPL seemed to be open to that. I don't remember the exact figures — they were posted on the list, I think. But it's the amount of money needed to have good hardware. We also get discounts from vendors, and if we can get free hardware we're open to that as well, obviously. One of the things we discussed at the meeting was to have a sponsorship page, so that we can ask sponsors for money more effectively.
We were looking at the sponsorship page with the listed sponsors, something that says these people have been kind and given us platinum sponsorship, something like that. Our current sponsors page is really outdated: it has entries for people who donated a disk or something, and on the other hand it doesn't have some major sponsors of ours. So we want to change that and be able to give more visibility to our large sponsors. If anyone wants to help with that, they'd be welcome.

You mentioned that FTP master has to be in the US. Can you explain why?

For legal reasons, as far as I know — something about US export controls. I'm not really sure of the details; maybe someone else is. But as far as I know, for legal reasons FTP master should be in the US, so we can't really move it to another hosting location.

Another thing: I wanted to say thanks for all the effort you made in publishing all your SVN, Git, and configuration repositories — for Puppet, Nagios, Munin, whatever. It's really great to publish that, so people can get inspiration from it.

These are public. All of our Puppet configuration is in public Git, all of our Nagios configuration is in public Git, and several other things. Most of them are Git repositories on some DSA machine, but we're mirroring them on git.debian.org. So if you go to git.debian.org, there's a DSA area, and there are multiple Git repositories there that cover all of our infrastructure. We also have all of our internal documentation in public; it's on a wiki at dsa.debian.org. So anyone can look there and see what we're doing and how we're doing it. And in general, we don't mind being asked about how we do things; we're thoroughly open about what we do.

About single sign-on: would it be possible, or has it been envisioned, to expand this to non-DD users, such as alioth guest accounts, or DebConf attendees, for example?

I think it has been discussed, but we're not there yet.
I think DebConf attendees would be even harder, since there's no such thing as authentication of DebConf attendees, other than the DebConf infrastructure, if that works now.

What is done with the old hardware when it's removed from the data center? Is it sold on eBay or whatever?

Not as far as I know. We usually give it away to hosters or other projects. This hasn't been done very much in the past, because we used to keep old hardware — we have 10-year-old hardware in production — but slowly we want to change that and have more current hardware.

So, DSA seems like quite a large team these days, compared to what I remember in the past. Does that mean that you actually have enough manpower?

So, we're a large team, but not all of us are active at all times. For the past year, for example, I wasn't very active, due to some personal issues. We always welcome more manpower, like anyone. Unfortunately, for DSA it's hard to give DSA status to more members, because that means root on all machines, and that's not something we can easily do. However, we can accept patches to our Puppet repository; we can test them, review them, and push them ourselves. And there is also some low-hanging fruit that someone could help us with, like Python code — like ud-ldap and a rewrite of it, and so on. That's much needed, and we haven't been able to push it forward yet. So yeah, help is still needed, as always.

I guess it comes under one of the later sessions today, but do you actually have a plan already for the virtualization stack you'd like to use, or is that still a question to decide?

So, we used to use libvirt and KVM. Nowadays we're trying Ganeti, which is something that will be presented later today. We're still with KVM, and Ganeti offers us cluster management across machines. We've installed it already on a cluster at the University of British Columbia, and the new ud-ldap, for example, is being hosted there.
And we're planning to use that more, if it works well for us.

Okay, I guess I was thinking more of stuff like Juju scripts to just instantiate a service, so that we could, at the drop of a hat, recreate the box that does blah very easily — make new buildds by just having Juju scripts for them, and all that kind of cloudy nonsense that people do. It might actually be quite useful if we have a problem with hardware that breaks: you can just set it up again. But I don't know if that's actually practical or not.

So, with Ganeti we can easily instantiate new VMs. We haven't made plans for something that automated yet — like Juju, or anything cloudish that would allow random users to create VMs, or us being able to create VMs that easily. We're not there yet; it might be something we need to consider for the future, I don't know.

Just some information from IRC, from Martin: the budget for hardware replacement is 25 to 35K per year. That's not the actual money that flows in, but the planned money that should flow out — so if you have ideas for money to flow in, please contact DSA. As far as I read on IRC, that's the plan. We have a spreadsheet that we made that has this number, and a plan for the next five years for all machines. So we took all of our core infrastructure, put together a five-year plan, and looked at what we need to renew this year, next year, the year after that, and so on, and estimated the cost. This can vary a lot because of discounts and so on, but it's an estimate — a ballpark figure for the DPL.

It's also been mentioned on IRC that it's not to be forgotten that most buildds are not mainstream architectures, which also creates its own sort of problems.

Other questions? For people who have a debian.net machine — not an official one — what infrastructure can be used by them for, for instance, authentication of DDs and stuff like that?
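The five-year spreadsheet described above boils down to a simple computation: for each machine, schedule a replacement five years after purchase (or immediately, if it's already overdue) and sum the estimated costs per year. Here is a small sketch of that idea — the machine names, years, and costs are entirely made up, not Debian's actual data:

```python
# Illustrative sketch of a five-year hardware-renewal plan, as discussed
# in the talk. All fleet data below is hypothetical.
from collections import defaultdict

LIFETIME = 5  # years a machine stays in service / under warranty

def renewal_schedule(machines, start_year, horizon=5):
    """Map each year in [start_year, start_year + horizon) to estimated spend."""
    spend = defaultdict(int)
    for name, bought, cost in machines:
        # Overdue machines get renewed in the first planning year.
        due = max(bought + LIFETIME, start_year)
        while due < start_year + horizon:
            spend[due] += cost
            due += LIFETIME  # renewed hardware comes due again later
    return dict(spend)

fleet = [
    ("mail-host", 2006, 6000),    # hypothetical machines and costs
    ("dns-host", 2008, 3000),
    ("vm-host", 2009, 8000),
]
print(renewal_schedule(fleet, start_year=2011))
# -> {2011: 6000, 2013: 3000, 2014: 8000}
```

Averaged over the horizon, a schedule like this is how one would arrive at a per-year ballpark figure of the kind quoted from IRC.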
This was explicitly discussed: how we could integrate SSO with debian.net. Zobel is working on that. I think there is a plan, although there are some restrictions on what to do there, because this single sign-on thing works with cookies, and you wouldn't want someone to be able to impersonate someone else on debian.org. So it's a different security domain, as far as I remember, but I don't really know the specifics; Zobel is the main driver on this, and he'll know better. And this is still work in progress, as I said. So for now, it's being targeted at Debian developers.

Other questions?

This is somewhat related to debian.net, but not completely. In that discussion, one use mentioned for debian.net was as a kind of incubator for new services. So do you actually prefer that people do it that way — on their own machines or someone else's machines, not DSA-maintained, just using debian.net for these new services they're trying to get into the project — or would you prefer to have these things on DSA-maintained machines from the beginning? And if yes, are you also willing to give people access to try out these experiments?

So, incubating services on your own servers and debian.net domains is fine with us; there's nothing wrong with that, in my opinion. Talking with DSA early, when you're developing a service that you intend to make an official service in the end, is good, because then we can perhaps find some areas which are not acceptable for us or for the project, and talking about it early might help catch them early, instead of working on it for a long time and then just dropping it on DSA. As for access, we've been fairly good at that, I think. Especially nowadays with virtualization, we can create a VM, put the service there, and give access to it. That being said, there are many things that people work on and don't really finish, so it's a big overhead for us to maintain a large fleet of VMs.
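The security-domain point made above — that a cookie issued for one domain must not be usable to impersonate someone on another — can be illustrated with a toy signed-cookie check. To be clear, this is a generic HMAC sketch of the concept, not how the actual SSO software encodes its credentials, and the names and secret are invented:

```python
# Toy illustration of cookie-scoped SSO security domains, as discussed in
# the talk. Not the real SSO implementation; purely a concept sketch.
import hashlib
import hmac

SECRET = b"demo-only-signing-key"  # per-deployment key (made up)

def issue_cookie(user, domain):
    """Sign a (user, domain) pair so it can only be verified for that domain."""
    msg = f"{user}@{domain}".encode()
    sig = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
    return f"{user}:{domain}:{sig}"

def verify_cookie(cookie, domain):
    """Return the user if the cookie is valid for this domain, else None."""
    user, cookie_domain, sig = cookie.split(":")
    if cookie_domain != domain:
        return None  # issued for a different security domain
    expected = hmac.new(SECRET, f"{user}@{domain}".encode(),
                        hashlib.sha256).hexdigest()
    return user if hmac.compare_digest(sig, expected) else None

c = issue_cookie("jdoe", "debian.org")
print(verify_cookie(c, "debian.org"))  # jdoe
print(verify_cookie(c, "debian.net"))  # None: cannot cross domains
```

The design choice this models is exactly the restriction mentioned: keeping debian.org and debian.net as separate domains means a leaked or replayed credential from one cannot impersonate anyone on the other.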
So if someone is serious about something, sure, we can talk about hosting at DSA.

Could you talk a little more about the single sign-on plans? What are you basing it on?

So, again, Zobel is the one driving this. He's using a software called DACS, as far as I remember. This is a web single sign-on solution that's packaged in Debian. It's simple enough to install, nothing too complicated. We looked at SAML, and it was a bit complicated — I've had some experience in the past with that. This is a simpler thing, using an Apache module, as far as I remember, that you can put in front of services, and it will authenticate users against a central SSO. I'm sure that when we have something more concrete to announce, there will be a proper announcement with technical details as well.

So, from IRC, there is a question from Martin: what do you want from DSA that is not done yet?

Nothing. We want more ponies. We do. I'll second that question. Other questions? Everybody's reading mail. Fundamentally, as far as I can see, the DSA folks are doing an awesome job. There are always more things that come up, but whenever I've had any dealings with the team, they've been very responsive, very helpful. There isn't really anything I can think of that I want more — just to say: thanks guys, you're awesome.

Anything else? Any other question? So, in that case, we can thank the speaker, and we meet in half an hour for the next topic, about Xen, I think. Thank you very much.