Okay, and this will now be the DSA BoF that was going to happen in Menzies 9. In Menzies 13, sorry. Okay, so, yes, here we are. Welcome to the BoF. This is intended to be a BoF, so we'll do a short presentation at the beginning to talk about the team, the changes in team membership, what we've done over the last year, and what we're doing in the coming year. Then we'll turn it over to the BoF proper, and we can answer your questions and receive your comments and suggestions. I'll introduce my co-panelists in a moment when I go through the team composition.

So I just briefly went through the agenda. If you don't know who we are and what we do: we have a delegation from the DPL to do these five things, to paraphrase the actual delegation text. We maintain the user database. We maintain the Debian infrastructure. We run a bunch of the core Debian services around authentication, authorization, email, and some other things. We coordinate with our hosting partners and our service partners; an example is UBC, where I work, or Fastly, where Tollef works, or used to work. And of course, we work with you to deliver the services that you want to have on Debian.org infrastructure.

So this is who we were. I don't know if he's in the room, but Faden stepped down recently, so I'd like to thank him for his service to DSA. And Aurelien joined us, so we suckered another guy in, and he's been working hard on our stuff. With me up here are Tollef, Martin, Héctor, and Julien, and Paul's hiding. Peter can't be here because he's defending his thesis this week, so soon he'll be Dr. Peter Palfrader. Looking back at the things we've been up to: we've been on a bit of an M&A kick, so Bdale would like that, I think. We've been trying to merge the DebConf infrastructure in. It's a bit slow going, so we'd like to ramp that up a bit, but some of it has already occurred, and we have merged the Debian Ports infrastructure in.
So we've let go of some legacy things like leada.debian.net. And of course we have our ongoing five-year plan: every year we refresh it, and we're in year one of our latest five-year plan. Over the last year, we refreshed the security mirrors, and hardware has been received to refresh FTP master, although the function hasn't been moved from the old machine to the new machine yet. We'd really like to thank HPE for that generous donation in the last year. We rebuilt the porterboxes, and we received more machines for PowerPC, ARM, and MIPS; a bunch of that has been deployed. It's some of the critical infrastructure necessary for those ports to be available or releasable.

From the service-management perspective, there are really only two major items we want to call out in our slide deck. One is the transition of the majority of our certificates from Gandi to Let's Encrypt. Gandi has been a great partner to us and to open source projects in general, so we should thank them for that. But of course we're big supporters of Let's Encrypt, so we've moved all of our expiring certificates over to Let's Encrypt, and we'll finish the rest of them off. The other: Paul, primarily with Lucas, or Lucas with Paul, whichever direction is more appropriate in terms of the load of work, worked on the service guidelines that are available in the wiki, and we'll go through a little bit of that here, since there seems to be some confusion about why things should or should not be on Debian.org. Moving forward on the mergers-and-acquisitions piece, we're still interested in completing the DebConf work: the wiki, the mailing lists, et cetera. And, given the various mail threads that have occurred, it would be interesting to work with interested people on replacements for Alioth. So people talked about GitLab.
I talked a little bit about why we are doing both GitLab and Git, not in the sense that they're competing services, but why we're doing multiple things of the same flavor. It would be interesting to try to get a group of people together who are interested in continuous integration, source code management, et cetera, to have this be a new service that replaces Alioth.

Infrastructure refresh: HPE will be donating a very large amount of hardware that will be hosted at UBC. We call it the Bytemark equivalent. It'll include an enclosure, some number of blades, and a lot of storage. The intent is that we can have redundant services at either Bytemark or UBC, and not have a single point of failure in either one of the hosting organizations. Our next big-ticket item, for which we do not yet have sponsorship, so we might have to buy it with Debian funds, will be cdimage. It's a huge-memory machine with large and fast storage. After that, we might consider doing FTP master and cdimage again, meaning that both of those machines are enormously expensive and large, but also single points of failure. FTP master is more critical, of course, but having a second cdimage machine means that we could build the numerous CDs in half the time. And finally, we're interested in supporting the buildd and porter effort, of course, so some more ARM, some more MIPS; we can talk about why we need even more. And some of the new stuff, for example the SPARC64 potential donation from Oracle. And from a management perspective, we're working with a console server supplier, soon to be announced, that will be helping us get a consistent set of hardware for remote management across our various data centers, so that we don't have to deal with different console servers. The next step of this, of course, would be consistent power management, consistent UPSs if we had those, et cetera.
But all of this is to lessen the burden on us in terms of "oh, right, at that location I have to dance on this foot and tap my head this way in order to get to the console." Finally, services. We continue to work on userdir-ldap; I haven't done any of that in the last year, but Paul found a couple of people to help out. And in the DebConf vein, we continue to work on some of the video processing infrastructure. Some of it is not as ideal as we would have wanted, so we're looking at ensuring that video can be processed quickly and stored quickly. Okay, so that's our last year and our year going forward.

This is a bulleted list from the long text that is in the service requirements on the wiki. Really, the key points here are, from a DSA perspective: the earlier you engage us in your proposed service design, and the more effort you put into having a private and secure service offering that is architected well (and, if it needs to use new software, into making backported packages of that software), the more prepared we are to run it on Debian.org infrastructure. So we need three or four things, really. We need a team to be in place; it can't be a single individual. The service needs to be using at least backported, if not stable, packages, because we want to drink our own Kool-Aid, effectively. We want the architecture to recognize privacy and security. For example, we don't keep Apache logs, or we anonymize Apache logs; we're quite interested in making sure that users of our services have their privacy protected. So if your service isn't architected that way, that potentially becomes a stumbling block. And you need to understand your service requirements and articulate them early, because we don't have an infinite amount of hardware sitting around.
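The log-anonymization practice mentioned above can be illustrated with a tiny sketch. This is not DSA's actual tooling, just an assumed approach: given Apache combined-format log lines, replace the leading client address with a placeholder before the log is stored.

```python
import re

# Illustrative sketch only (not DSA's real log pipeline): drop the client
# address from Apache combined-log lines so stored logs no longer
# identify individual users.
LEADING_IP = re.compile(r'^\S+')

def anonymize_line(line: str) -> str:
    """Replace the leading client address with a fixed placeholder."""
    return LEADING_IP.sub('0.0.0.0', line, count=1)

sample = '192.0.2.10 - - [04/Jul/2016:10:00:00 +0000] "GET / HTTP/1.1" 200 512'
print(anonymize_line(sample))
```

In practice the same effect is often achieved in the web server's own log configuration, so no addresses are ever written to disk in the first place.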
We are getting a lot of gear from HPE, it's true, and that will make a bunch of hardware available, but it goes in waves of feast or famine as to what's available at any point in time. So when I open it up for Q&A, we can come back to the service requirements.

Port requirements. Some people are a little bit confused, when I write an email that says "we have some concerns", as to whether or not that's a blocking concern. No: most of the concerns that we've expressed in the emails recently are non-blocking concerns, but there is a red, yellow, green set of flags for release architectures on purpose. Red is certainly a blocking concern. Yellow concerns are non-blocking concerns, things that should be rectified if we want to meet our objectives for build hardware. Really, those objectives amount to independence from any particular vendor, independence from any particular hosting provider, hardware under our control, and hardware that is available under warranty support or post-warranty support. The idea being that we shouldn't be dependent on a single provider of equipment or a single hoster of that particular equipment, because if they go away, if the relationship breaks down, or a fire occurs, which has happened, then we don't have a way of supporting that architecture. So we apply that principle to all the services we run on debian.org, hence the Bytemark equivalent, and we are attempting to apply that principle to all the porter and buildd boxes. From a port status perspective, this was gone over a little bit in the release discussion earlier, but with a finer DSA lens: we don't have any issues with amd64 or i386. We have some issues with kFreeBSD in terms of how the security archive works there, and that should either fold into the regular infrastructure or move to ports. For arm64, armel, and armhf we have some non-blocking issues, and all of them are improving, fortunately.
The number of machines that still require local support has been reduced significantly. We have fewer machines on development boards and more on production-quality boards. We have insufficient hardware, and that might be a strange statement: what I mean is that all the hardware is hosted with vendors. That's the second point, insufficient hosting locations: most of it is with two vendors. So if the relationships sour with those vendors, then we're still in the same bucket. It would be nice to have some of these boards at other locations that aren't vendor-controlled. This isn't to imply that we have a poor relationship with these vendors; this is me wearing my "I'd like to be independent of any particular failure point" hat, and that's why we continue to say these are non-blocking issues, because the vendors that are providing us with ARM hardware have been great. mipsel and mips64el: again, some non-blocking issues, and they are improving. Primarily, the aging and buggy hardware that was at UBC has been replaced, is online now, and includes FPUs, so the ability to build packages is significantly increased. powerpc and ppc64el: some non-blocking issues again, insufficient hardware and hosting locations. And s390: same thing, completely reliant on sponsored hardware. Now, for PowerPC, IBM has been great as well, and we might get some more hardware there and find a new home for it. As for S390, we will never buy an S390, so there's no way to turn the S390 yellow flag into a green flag short of a lot of money.

So this is how to contact us. You can send us an email in private, which goes just to the team members, or you can send it to us in public at debian-admin. It's not archived, but there are other people on debian-admin. There's also Request Tracker: you can submit tickets for our queue there. Or you can come and chat with us in the #debian-admin channel on irc.oftc.net. I didn't put irc.debian.org there because I think there's an SSL cert issue with the SANs.
And finally, these are some references you can go to to learn more about DSA or about service hosting; I didn't put up the ports pages. And that's it for our quick presentation, because it's meant to be a BoF, and I'd like to open up the floor to questions, answers, and comments. This is gonna be a short BoF, okay.

All right, so I want to go back to the requirements for services. You said you want to dogfood your stuff. So you have one set of things from stable and backports, but there's a loophole there, because you don't want the service itself coming from the archive, right? Because in that case I need root. The service admin needs root to be able to upgrade.

That's correct. We generally prefer people... (some microphone shuffling) ...no, you're right. For anything we write ourselves, "we" as in Debian, we prefer it to come out of whatever Git or other version control system the team uses, because the iteration cycle for going through the entire archive is slow. But any dependencies, Python modules or whatever, we really prefer come out of the archive.

Hello, we're here to be beaten up by you. So, let the beatings begin. Sure, it's only opinions at this point. He asked if I could speak a little bit more about GitLab. Yeah, I don't have a preference of one over the other. My chief concern is that I don't wanna get into the same position we are in today with Alioth. We have something that is labeled Debian.org that isn't managed by us and effectively has a single service owner at this point. And if that person were to choose to stop doing that service, from our perspective it's something that we would wanna shut down.
We really do need teams to run services, so that there is some health behind the service ownership and it can tolerate people coming and going as their interests wax and wane. So any suggestion that we stand up a GitLab seems to me like a great opportunity for people to get together to talk about how to transition off Alioth. That could take years, but let's start the conversation about how we get people off Alioth and onto GitLab, if GitLab is the one we choose to go forward with. And I recognize that GitLab doesn't do all the things that Alioth does. So that's part of the conversation: what are the things that Alioth does that can go away, and what are the things that can't go away that we need replaced with GitLab, or add-ons to GitLab, or complements to GitLab?

One of the things that GitLab doesn't give you is mailing lists. Do you think it would be possible to use the main Debian list infrastructure for mailing lists instead?

With my listmaster hat on: yes, file bugs against lists.debian.org. In the past, Alexander Wirt, who is doing most of the listmaster work nowadays, has been quite open to and quite fast at opening lists. Maybe we find a different way, though. In 2005 or 2006, I think, there was the approach of having teams.debian.net or teams.debian.org, which more or less vanished. Yes, there was a proposal to move that into lists.debian.org, but the software is mostly not used anymore, only by a very few small teams. So maybe, yes, we could talk to listmaster and see that we get even more listmasters, because lists is also currently run by effectively one active person. That said, it would be really nice for it to actually be self-service, rather than having to go and ask what's effectively a human keyboard to do something for you.
So, for example, I run Sympa at work, and Sympa you can tie into LDAP; the scripts to create lists and purge lists are easy to make, and you could hook that into an environment where the act of creating a new SCM repository triggers the creation of a couple of project lists. So there are lots of ways of accomplishing it, but that's the conversation I'd like to see happen: Alioth does these ten things, we only really need seven of them, GitLab does four; how do we do the other three?

Did you ever think about moving to a more self-service kind of model for people running services? Like, you have a cloud, somebody presents their service requirements to you, and then, if you decide it's okay, you give them cloud credentials so they can run their own service, or something like that? Right now it seems like DSA is more actively involved. And I don't run a service, I've never proposed a service myself, but that's how it feels as an outside team member.

I tried to set that up in the last two to three years, but I was lacking time for it, and I also had security problems with some of those services not properly talking SSL to each other, so in the end I stopped working on that. Maybe if we find time, or if we find a team to help us with it, we could try to redo that. I'm open to that. You're talking about OpenStack there? Okay.

With respect to hardware, I think we really, really should thank HPE for the very large donations we are currently getting: both the server sets that were provided last year, which we mainly used for replacing our security.debian.org infrastructure, and the huge upcoming donation of the blade center, the storage shelves, the switches and so on, which will probably be shipped in the weeks after DebConf to Luca's place, with Luca then setting it up. Yeah, and there'll be a public press release for that.
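The self-service idea sketched earlier, where creating a new SCM repository triggers the creation of project mailing lists, could look roughly like the following. This is purely hypothetical: the list-naming scheme and the `newlist-wrapper` command are invented for illustration; a real Sympa deployment would call its own admin tooling.

```python
import subprocess

# Hypothetical hook (not Debian's actual setup): when a new repository is
# created, derive a couple of conventional project list names and create
# them via some site-local list-creation command.

def lists_for_repo(repo: str) -> list[str]:
    """Conventional list names derived from a repository name (assumed scheme)."""
    return [f'{repo}-devel', f'{repo}-commits']

def on_repo_created(repo: str, dry_run: bool = True) -> list[str]:
    """Create the project lists for a freshly created repository."""
    created = []
    for name in lists_for_repo(repo):
        if not dry_run:
            # Placeholder for the site's real list-creation command.
            subprocess.run(['newlist-wrapper', name], check=True)
        created.append(name)
    return created

print(on_repo_created('dak'))  # prints ['dak-devel', 'dak-commits']
```

The point is not the specific commands but that list creation becomes an automated side effect of repository creation, removing the human keyboard from the loop.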
So we're waiting for that second shipment of hardware to arrive at UBC before we issue that press release, and hopefully HPE will also issue their own press release. I think it's very good news. They've supported both this conference and DSA over the last two years in terms of hardware, so it's a great relationship. And this donation more or less doubles the CPU power we use for services at the moment, so it's a really huge donation.

You already mentioned Alioth and listmasters as teams that need more people. Are there other services running on debian.org that need more people to help?

There are very, very few teams which actually turn down offers of help. Even though their owner@ bugs might not actively be soliciting for help, I'm fairly sure that if somebody shows up, they would be happy to have more people helping out. As someone running a few services on Debian hosts, like the autobuilders, I'm happy for people who'd like to join. I think the same is true for all the teams that run things in Debian: somebody who wants to join can join and can actually make things better. I think most of the core teams are very happy if people join, because behind all of what's there, lists, autobuilders and such, it's mostly the same people. If I look now at who is standing here, who is on the committee, who is on the lists team, who is running autobuilders, there's quite some overlap. And it usually works the same for all groups: if you send patches often enough, even for DSA, you might just get the correct GID assigned and then work on your own. That's what we did with Paul, for example, two years ago. Yeah, just to reply to your question, a couple of services come to mind, like the past tracker on debian.org, that we had to shut down because there was a lack of maintainer. And httpredir, which is currently used: if we want to keep it alive, we really need someone to maintain that service. Not really a question.
I mean, when I said that, it's not that I don't know that everyone wants patches; it's just that maybe we should be more proactive and let people know explicitly that the teams need people to do specific stuff, and publicize that. Use the communication channels to make people aware that there are other ways to help Debian that really need them right now, besides packaging and other stuff.

I've said this in a couple of other ad-hoc conversations: I don't want to become the DSA process guy. But I am. Maybe we need the same kind of thing for services that we do for packages: intent to package, or intent to service, request for help, orphaned. That way it becomes much clearer and more transparent what people want to build, why they want to build it, and who's running it. We do have some space in the wiki where people are asked to keep information about their service, a small description but also a way of contacting them should the need arise, but even that stuff gets stale. So if we formalize it a little more, I agree with you that that would be helpful.

Something I've been working on lately: we got an offer to host services at a commercial internet exchange, which also means that we would then probably run our own AS and run BGP ourselves. I'm working on the technical details. That might help us get redundancy with one of the currently operating hosting locations, so that we'd have more or less the same hardware in two hosting locations quite close to each other, and then get even more redundancy out of this hosting location.

I'm guessing that DSA is maybe not the right team to ask, but I was wondering about the bikesheds service: is there anything going on on your side, or would it be more appropriate to ask another team? You'll have to ask the ftp-master team for that.
And yeah, it's basically being run by them, and they'll obviously have to talk to the wanna-build folks and so on, but so far at least there's nothing for us to do on it. That said, I would absolutely love to have bikesheds happening before I die of old age.

You mentioned httpredir. What's the current state of things regarding mirrors, CDNs, and httpredir?

Right now we have an experimental service called deb.debian.org, which I'm going to speak about on Thursday. It's out there, it's running, we use it for significant parts of the debian.org infrastructure, and it seems to be working well. httpredir, as mentioned, lacks a service owner; it lacks somebody to actually maintain it over time, preferably a team of some size. So exactly what we're going to do with it, I don't know, but it's quite clear we can't have services that run for ages without anybody actually maintaining them. At some point they stop working, they get security problems. Because what's happening is that DSA is getting the requests, "why am I getting these service errors on this httpredir service?", for a service which we actually do not run; it's just running on DSA-controlled hardware, but it's not us running that service. deb.debian.org uses a CDN, and it provides both the Debian archive and the security archive. And the debug archive. And ports. So it's actually the most complete mirror you can have. From a redundancy perspective, we have two CDN providers that we can leverage, although we've configured this on only one; we'll configure it on the other one as well. Is it going to deprecate httpredir? Yeah, it could deprecate httpredir, and then we don't need anyone to maintain it; we can just drop it. No, but if somebody were to step forward and want to continue running httpredir, that would be good.
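Based on the description above, deb.debian.org serving the main, security, debug, and ports archives, a single set of APT sources could point at it. This is only a sketch: the suite name ("stretch" here) is a placeholder, and the exact paths should be checked against the service's own documentation.

```
deb http://deb.debian.org/debian           stretch          main
deb http://deb.debian.org/debian-security  stretch/updates  main
deb http://deb.debian.org/debian-debug     stretch-debug    main
# non-release architectures (assumed path): http://deb.debian.org/debian-ports
```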
But I think then we get back to the Alioth versus GitLab conversation. Not really, because these services are much smaller and simpler. The main reason we don't want both Alioth and GitLab is that they're both pretty big services. If somebody wants to run two minor services which compete with each other, that's fine; I don't really consider that a problem. I consider it a problem once they get to a big size and start competing for namespace, such as Alioth and GitLab, for instance. Then we need to work that out somehow. But if you have some minor things, then that's fine.

I like the CDN; I think it's a great idea. But doesn't that make us rely too much on a third party? Because I guess the bill is zero for us now, but if we had to pay for the bandwidth for security.debian.org, it would be a lot of money.

That's the concern, and it's why we have multiple CDN partners. To a large extent, CDNs are becoming a commodity, so as long as you don't integrate too much with the chosen CDN partner, it's fairly easy to switch to another one. But yes, it's a concern, and it's one of the reasons why we still also have the mirror network. Having multiple options here is fine. That said, our current partner for this is heavily invested in supporting open source; they host python.org and CPAN and various other free software archives as well. So they're big on free software, even though the actual platform isn't free as such. And the names of these CDNs? One is Fastly and the other is MaxCDN, both of whom are active in supporting the open source community. We had a third that we stood up briefly for a test; we just didn't continue with it, but we could. So we're believers in redundancy, not only in the hardware and in the hosting partners for the hardware that we run our own services on; we also believe the third-party services we depend on should themselves be redundant.
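The "don't integrate too much with the chosen CDN partner" point above can be sketched as keeping all provider-specific details behind one small record, so switching providers is a configuration change rather than a rewrite. Fastly and MaxCDN are the partners named above, but the CNAME targets and the selection logic here are invented purely for illustration.

```python
from dataclasses import dataclass

# Illustrative sketch (assumed design, not DSA's actual configuration):
# each CDN partner is reduced to a name and the DNS target it expects,
# and a single function decides which one the service hostname points at.

@dataclass(frozen=True)
class CDNProvider:
    name: str
    cname_target: str  # where the service hostname's CNAME would point

PROVIDERS = [
    CDNProvider('fastly', 'cdn-a.example.invalid'),
    CDNProvider('maxcdn', 'cdn-b.example.invalid'),
]

def pick_provider(healthy: set[str]) -> CDNProvider:
    """Return the first configured provider currently considered healthy."""
    for provider in PROVIDERS:
        if provider.name in healthy:
            return provider
    raise RuntimeError('no healthy CDN provider available')

print(pick_provider({'maxcdn'}).cname_target)
```

Because nothing outside `PROVIDERS` knows anything provider-specific, dropping one partner or adding a third is a one-line change, which is exactly the commodity property described above.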
So, for example, for CDN and secondary DNS we have multiple partners for both of those services. We no longer run the primary services for DNS ourselves; we've moved that to commercial hosting, which we're getting donated for free, and which is working quite well. Anything that you as developers expect from DSA to happen in the next year, or can we just go back to sleep? Okay, well, thank you very much for your time. Really appreciate it.