Hello, everyone. Welcome to the DSA BoF. As in recent years, we are structuring this BoF as, first, a short presentation of who we are and what we do, and then it will turn into an ask-us-anything questions-and-answers style. And even in the first part, if you have any questions about the specific things we are presenting, please ask. Luca will do the first part of the presentation. Luca? So the team consists of ten members. We were nine just a few days ago; we managed to add one more. I feel badly for him. Thank you. So his focus will continue to be what it was, but now he will be able to do all of it, not just part of it. So in the room, we have zobel, weasel, Luca Filipozzi, Habs, Julien Cristau, and Zubi. And then, Paribol. Did I miss anyone? We've finished introducing. So if you haven't managed to connect a face to a name until now, that's who we are; eight of us are here. So what do we do? We handle the core infrastructure of the project. That means users and accounts, as you're familiar with; the machines that we have around the world; coordinating the hosting and service providers (we used to just have hosting providers, now we have service providers as well); operating our core services like LDAP and DNS; and helping to establish the services that you run, and then helping you run those services. Several years ago, we began the process of developing five-year plans. The intent was to refresh those every year. We haven't refreshed them every year, but we will this year again. In 2012, and again this year, we developed a set of principles by which we wanted to operate Debian's core infrastructure. So Debian's core infrastructure is delivered primarily on AMD64 hardware, and primarily at six, seven, eight locations, depending on who you count as being included. So our objective is to make those services as redundant as possible, both globally and locally within the data center.
So some things lend themselves well to that. The static mirror websites, for example, lend themselves very easily to that. DNS service lends itself very easily to that; that's one of the things we use service providers for, for example. Other things don't. Those things that don't lend themselves to active/active mode, even within a single data center, we still want to be virtualized, so we can bring them back up quickly if we have failed hardware. So they're in an active/passive mode, or even in an active/manual mode, but we have a fair chance of bringing them up rather than attempting to restore them from disk, from an archive. We haven't succeeded in the second principle so much, in that we actually grew the number of core data centers. We'd like to try to reduce those again. Many thanks to Bytemark for their massive donation, but for example that counts as a plus-one when it comes to the data centers. We want them to be remotely manageable. I think that's a reasonable expectation. Not all of them are managed in the same way, and not all of them are remotely manageable at all, so we're going to be rounding that out. We don't do a lot of virtual networking per se, but we do some virtual storage on machines, and it'd be nice to complete the virtualization of our core infrastructure so that, well and truly, VMs can be migrated between hardware less painfully, and even between data centers one day. As already mentioned, we deliver all of this stuff as much as possible via virtual machines. And the big one, because it's represented in Debian's annual budget, is that we believe the core infrastructure needs to be actively life-cycled and under hardware support. So that either means buying equipment every year, or seeking cash donations in part, or seeking hardware donations. But we're not really in the business of accepting random donations, or donations of equipment that's antiquated and a business is getting rid of it and thinks that we might want to use it.
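To illustrate the redundancy modes described above (active/active, active/passive, active/manual), here is a minimal sketch of the selection logic implied by each mode. This is a hypothetical illustration written for this transcript, not DSA's actual tooling; the function name, modes, and backend names are invented.

```python
# Hypothetical sketch of the failover modes described above: decide which
# backend(s) should serve traffic for a service, given its redundancy mode.

def serving_backends(mode, backends):
    """Return the backends that should be live for a service.

    mode: 'active-active'  -> all healthy backends serve at once
          'active-passive' -> only the first healthy backend serves
          'active-manual'  -> nothing serves until an operator promotes one
    backends: list of (name, healthy) tuples, in preference order.
    """
    healthy = [name for name, ok in backends if ok]
    if mode == "active-active":
        return healthy
    if mode == "active-passive":
        return healthy[:1]
    return []  # active-manual: operator intervention required

# Invented example: the first data center's VM has failed.
backends = [("vm-dc1", False), ("vm-dc2", True), ("vm-dc3", True)]
```

The point of the virtualization principle in the text is the last two modes: even when a service can't run active/active, a VM image gives a much better chance of bringing it up on other hardware than a restore from archive does.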
We just have too much equipment now to go about it that way. So Bytemark, just to pick on them: they were extremely generous with recent hardware, and it represents a massive increase in Debian's assets. But it's not, for example, something we can count on repeating; there's no guarantee that Bytemark will be able to duplicate that particular donation, and we're going to need a plan one day to replace that. So if we look back over the last year, we've upgraded most of our physical and virtual machines to Jessie. We've been on a bit of an HTTPS-everywhere kick, so we spent a lot of energy moving services from HTTP to HTTPS. You're probably aware that we're buying commercial certificates for this; actually, we are obtaining commercial certificates for this that are kindly donated by Gandi. And you know that Gandi has made available certain benefits to Debian developers. We started the deployment of an OpenStack environment to try to facilitate individual DDs standing up a VM. That would be DSA-flavoured, so that as you develop a new service and are ready to migrate it over to a debian.org machine, it's in the same vein as what we would anticipate for DSA boxes. And we've assisted with the development of new services, including SIP for real-time communication; I mention that because Daniel is here to speak. Looking forward, we need to complete the transition to Jessie for the things that can be transitioned; for the things that can't, we do our best. We need to deploy a Bytemark-equivalent data center for geographic redundancy and partner independence. Again, I am picking on Bytemark, and they're an excellent partner, but we can't be solely dependent on Bytemark for the vast majority of our services, which we are at this point. The equipment is resident at Bytemark, is entirely for our use, and is a fantastic resource. But we need a second one of those. What's the size? What does it mean, a Bytemark-equivalent data center? Bytemark is a fully populated HP c7000 blade system.
So how many blades is that? 12? 16. 16, with several MSA storage arrays. Every blade has between 64 and 128 gigabytes of RAM each. It's basically half a rack of their standard VM infrastructure that they gave to us. Great. That was several years ago, and we've been moving a lot of services there. We need to upgrade the release architecture hardware to data-center standards. What I mean by that is that some of the MIPS hardware is not as good as it needs to be, and some of the ARM hardware is not as good as it needs to be, from a remote-management perspective. When it's up and running, it's building fine, but we'd like it to be a touch more robust if possible. And we need to decommission the deprecated architecture hardware, so that's SPARC and IA-64. As Jessie reaches its one-year anniversary, that's going to go away. We would like to work with the mirror ops teams to review and improve the mirror program in the context of SRV records. So if you hadn't heard us talking about that before, that's something we'd like to engage in actively next year. We'd like to do a better job of handling the various keys that are used to secure our builds: the buildd keys, the Secure Boot stuff, the archive signing keys. Maybe we can get some HSMs and use those. Maybe we can have a dedicated signing machine that is significantly hardened compared to the rest (although the rest are still hardened), something along that line. And we should complete the OpenStack deployment over the next year. And we stood up, though it's not in production mode, an XMPP service; I mention that because it happened during that time. So you can contact us in a variety of ways. We're on IRC in #debian-admin. There's a public mailing list, debian-admin@lists.debian.org. If you need to send us something private, you can do that at dsa@debian.org.
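As background to the key-handling plans mentioned above (buildd keys, archive signing keys, possible HSMs): what those signatures ultimately protect is the integrity of downloaded bits. Here is a minimal, hypothetical sketch of that kind of check, verifying files against a SHA256SUMS-style listing; the real archive uses GPG-signed Release files, and the data below is invented for illustration.

```python
import hashlib

# Hypothetical illustration of the integrity checking that archive signing
# keys ultimately anchor: verify file contents against a SHA256SUMS-style
# listing.  (In the real archive, the listing itself is GPG-signed.)

def verify_checksums(sums_text, files):
    """files: dict of name -> bytes.  Returns the names that mismatch."""
    bad = []
    for line in sums_text.strip().splitlines():
        digest, name = line.split(None, 1)
        actual = hashlib.sha256(files[name]).hexdigest()
        if actual != digest:
            bad.append(name)
    return bad

# Invented example data.
files = {"hello.txt": b"hello\n"}
sums = "5891b5b522d5df086d0ff0b110fbd9d21bb4fc7163af34d08286a2e846f6be03  hello.txt"
```

The hardened signing machine and HSM ideas in the text are about protecting the private keys that sign such listings, since the whole chain is only as strong as the key storage.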
And for anything that's more complicated than "can you make an A record", or if you want to keep track of your request, we encourage you to use RT to submit requests by mail. You have to do a little bit of a dance, and that's described in the usage instructions. So that's the short presentation, and we want to leave the majority of the time for questions and answers. So I'll turn it over to you for questions.

What about the debian-admin mailing list? It's not really public. Are there any plans to make it public in the future?

We had already planned to make it public; if we haven't thus far, we've been remiss. The idea was to make the archives public from a particular date forward, because people who had mailed it previously shouldn't have their stuff exposed, so we're not going to expose those. We decided two or three years ago that we would make the archives available and subscription open to every Debian developer. That just hasn't been implemented yet. I think the plan is to handle subscription via LDAP, like debian-private, but given that our LDAP infrastructure is in flux and needs some updating anyway, this is blocked on that. So it should be public-ish.

So, where we could use help, which mostly relates to what we usually do: Luca has put a tremendous amount of work into re-implementing the current userdir-ldap as a Python library, so we get a better view of the current LDAP. It's properly layered and has checks so that you don't do anything too stupid. So if any of you has good Python knowledge, help Luca; we would like that. As I understand it, userdir-ldap, the part that runs on db.debian.org, consists of two different parts. One part is what we use to add things to the database. It consists of the mail gateway that you all use to submit, for instance, an SSH key.
And that part has been rewritten already and should probably be deployed, so that we can actually kill off the remaining bugs that are still in it. The second part is the web interface you see when you go to db.debian.org, and that's a set of Perl scripts, I think, that sometimes shell out to bash scripts, depending on what you need. And all of that should also be re-implemented in Python, presumably layering on top of the API already provided, for checks of various things. So userdir-ldap and userdir-ldap-cgi are the two packages currently in use, and the intent is to merge the two into ud. userdir-ldap is mostly rewritten in ud, and it's written in Python and Django, because the intent is to merge in the web interface. So if you have knowledge of Python and Django, that would be helpful.

Will you release the packages as well, once it's stable one day?

Yeah, that's the idea. ud is in git on GitHub, because that's where I was putting stuff, but we could move it eventually. It's visible today.

Your take on Let's Encrypt?

Let's Encrypt, again. Yes please, sooner is better if possible. Well, I think Let's Encrypt is a great effort, and we will switch to them. I feel badly for any organization or company that will have their revenue stream cut, which for some is very significant. Gandi has been good to us, and I appreciate what they've done, but that doesn't mean... well, we would have a choice at that point. Gandi has offered Debian free certificates for our services, so we could continue using them, or we could use Let's Encrypt. If the measurement is whether or not it's free, then we could use either. Perhaps the measurement is whether or not we want to support Let's Encrypt from a philosophical perspective, but I don't know if DSA has a strong opinion about the philosophy of Let's Encrypt versus Gandi. Our philosophy has a lot more to do with HTTPS everywhere.
So in that sense, Let's Encrypt makes HTTPS everywhere much more feasible. We would use that very actively. I think the work required to obtain a certificate will be way lower with Let's Encrypt, so that might just be the deciding factor.

Is there anything that you as Debian developers would like to see happen, that DSA should work on, which hasn't been done yet?

Nothing? So we are doing a good job; we finally got the job done. There's this one thing: there's a BoF tomorrow about the jenkins.debian.org move, migration, or whatever. Yeah, I think that is well ahead; people who are interested in that might come tomorrow. Do you want to summarize the idea? Move the current jenkins.debian.net setup to jenkins.debian.org, and have all the jobs run on nodes and not on the main Jenkins machine. At the moment I plan to move the current jobs, and I'd rather design the jobs so you don't need UI access to trigger them or anything, so I would not need that access; you can't do much via the UI on the new jenkins.debian.org anyway. Yeah, so part of the goal is to have more QA happen directly on debian.org infrastructure, and this way we can not only provide and use AMD64 machines: we now also have access, for instance, to PowerPC systems, because IBM has generously given us two POWER8 machines, and so we can spread out testing to more architectures.

What's the status of this? I don't think anything on that is blocking on debian-admin; it's FTP masters' and presumably the buildd team's job to actually get that moving. I've been wanting to have that for years now. Maybe we need to provide some incentives for FTP masters. No, you don't. It's actually going forward a bit. Not very fast, but it's going forward. Great.
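An aside on the Let's Encrypt discussion above: the operational win it promises is automated issuance and renewal. Here is a hypothetical sketch of the renewal-window decision an ACME client automates, using only the standard library's `ssl` date parsing; the window size and function names are invented for illustration, not any real client's behaviour.

```python
import ssl

# Hypothetical sketch of the renewal decision an ACME client (such as the
# ones used with Let's Encrypt) automates: renew when a certificate's
# notAfter date falls within a safety window.  The date format is the one
# found in the ssl module's certificate dicts.

RENEW_WINDOW_DAYS = 30  # invented default, for illustration

def needs_renewal(not_after, now_epoch, window_days=RENEW_WINDOW_DAYS):
    # parses strings like 'May  9 00:00:00 2016 GMT'
    expiry = ssl.cert_time_to_seconds(not_after)
    days_left = (expiry - now_epoch) / 86400
    return days_left < window_days
```

The contrast with the commercial-certificate workflow described earlier is exactly this: with short-lived automated certificates, "the work required to obtain a certificate" collapses into a cron-style check like the one above.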
As it seems that we have people from all over the world here: how is the mirror infrastructure working for you, especially in countries or continents like Asia and Africa? I would be interested in those perspectives. South America as well. And there's Australia, which doesn't have any internet connection. Is it working for you, or has anyone had problems?

The status hasn't changed since last year, even though there is a security mirror in Japan now. Okay, so Asia would still need to improve. So you're operating the mirrors as well? We're operating just the security mirror. There are many administrative problems in China. If you want to host a machine in China: last year we talked about buying a new machine and putting it in, and after checking with several data centers, even the universities willing to host it, they said that the domain needs to be registered with the government for that purpose. So they won't host such a machine for you, even if they have the resources for it. So do you need a contract? Not a contract; we need to register with the Chinese government if we want to provide services there. Yes, you need someone to take the liability in case you are serving some, let's say, objectionable content or something else.

Trying to host our own mirrors in every single place is something that we can't be good at. So I would really like to see us moving towards using third-party content delivery networks, and several of them have offered to help us with that, for both the main FTP archive as well as security. And these CDNs might have a presence in Australia, more of Asia, South America, Africa. Yeah, and if the metric is time to first byte: our DNS service is now hosted by three different service providers, so that if a relationship with a particular service provider goes sour, you know, we have the other two. And all three have their DNS anycasted.
And if you use a content delivery network that is also anycasted, ideally the time to first byte of an apt-get update or upgrade should be very fast. It doesn't mean losing control; it means leveraging companies that are willing to support us. But it does mean having a conversation about how to do that, and about SRV records, et cetera, et cetera.

I think in Asia that would help not only for China but for many other countries where bandwidth between countries is limited. That is the reason why we have set up a mirror in Japan, and even so it isn't much better. Yeah, so when we select a CDN partner... And there is not a CDN partner that can cover Asia well. None? Fastly, the CDN? No? Fastly has POPs in Asia. They even made a deal with a major ISP there. So, we should test. Also, it's not true that bandwidth between countries is always limited. Where there's no government control, you may have cross-country traffic with bigger bandwidth than in-country, inter-ISP traffic.

A question about migration: can you take a VM down on one machine and bring it up on the other side, in a separate data center? Well, actually, as far as I know, Ganeti for some parts already could do that, but we didn't implement it. I don't think QEMU supports continuous migration, so it's turning it off on one side and turning it on on the other side. You can do that, but a continuously mirrored failover is probably not possible. That is what live migration is, as far as I know: a once-off rather than continuous migration of the RAM at the time. I thought it was...
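Returning to the SRV records mentioned in the CDN discussion above: here is a minimal sketch of SRV record selection as RFC 2782 describes it, which is the mechanism that would let clients be steered among mirrors. This is an illustration written for this transcript, with invented record data, not Debian's actual tooling.

```python
import random

# Sketch of RFC 2782 SRV record selection: a record is
# (priority, weight, target).  Clients use the lowest priority class
# available, and weight load-balances within that class.

def pick_srv(records, randrange=random.randrange):
    best = min(priority for priority, _, _ in records)
    candidates = [(w, t) for p, w, t in records if p == best]
    total = sum(w for w, _ in candidates)
    if total == 0:
        return candidates[0][1]  # all zero-weight: any order is fine
    point = randrange(total)
    for weight, target in candidates:
        point -= weight
        if point < 0:
            return target

# Invented example: two load-balanced mirrors, one lower-priority backup.
records = [(10, 60, "mirror-eu.example.org"),
           (10, 40, "mirror-us.example.org"),
           (20,  0, "backup.example.org")]
```

The appeal in the mirror context is that priorities and weights live in DNS, so traffic can be shifted between providers without reconfiguring clients.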
There was some of that, I'm not sure; but then it makes the machine really slow, because you have the RAM being copied.

I'm curious about the OpenStack step. I don't think I understand why you cannot use the existing Ganeti infrastructure to provide VMs. I looked into several front ends for Ganeti... but let me start with this. What we want is for developers to be able to create their own VMs and administer them. And we want those VMs probably to be short-lived, you know, three months, six months, by default, so that they're useful for developing, and not... we don't see them as the place to run services for years. So, what you could do is: a developer who wants a VM just issues an RT ticket and has it within two days. That's something you could deliver currently. I mean, I'm not sure there's a need for front ends; there's no need to get a VM immediately if you have a way to provide them in less than a week. Part of the problem is that within our own Ganeti setup, only we have the ability to stop and start VMs, to access the console, to access their VGA output, and all of that is necessary if people want to administer their own systems. Also, we may not necessarily want to mix trusted and untrusted VMs on the same hardware. I made progress yesterday with Thomas working on OpenStack, and I hope that will continue over the next weeks, because I'm leaving today, so no more direct live debugging together with Thomas here at DebConf, but I think we are making progress towards being able to set up the Kilo release at some point. But I would also like to see how OpenStack works for us for the existing VMs. Also, while asking for a VM may be just a ticket for you, it's still an hour or two of work for us, and if these requests come one every week or more, then it just doesn't scale.

Questions? Are you still looking for more help? Always.
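The short-lived-VM policy sketched above ("three months, six months, by default") can be illustrated with a tiny lease calculation. This is a hypothetical sketch under invented assumptions (a 90-day default lease, extensions that each add another full lease period); it is not DSA's actual policy or code.

```python
import datetime

# Hypothetical sketch of a short-lived-VM lease policy: instances expire
# after a default lease unless extended.  Numbers are illustrative only.

DEFAULT_LEASE_DAYS = 90  # "three months by default" -- assumed figure

def lease_expiry(created, extensions=0, lease_days=DEFAULT_LEASE_DAYS):
    """Each extension adds another full lease period."""
    return created + datetime.timedelta(days=lease_days * (1 + extensions))

def is_expired(created, today, extensions=0):
    return today > lease_expiry(created, extensions)

created = datetime.date(2015, 8, 1)  # invented creation date
```

Automating this kind of expiry is part of what distinguishes self-service developer VMs from the long-lived, DSA-managed service hosts discussed elsewhere in the session.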
That's not me volunteering; it's a trigger for you to tell everybody else you're looking for more help. Well, certainly I'm looking for help on ud, so somebody with Python and Django experience can help kick-start that. Also, if we have experienced Puppet users or administrators: patches to our Puppet repository, which is public, are always welcome, ideally also in small chunks. Well, we recently had patches like 800 kilobytes in size. Beautiful, but hard to review.

What do you intend to do with the bash things? For example, wrapping some Python stuff around them is one way. Well, part of it is an old and ancient, horrible code base, and it has grown over the years to support more and more features, and this might be the chance to reintroduce some sanity.

Are you sure you want to use Django? Nothing against Django, but it's been horrible, because they change their thing every six months and we change every two years. We'd have to migrate through all the upstream releases, and they change things each time. Going from 1.4 to 1.7, stuff just explodes and doesn't work. And you have to go through the releases one by one, and then it says: oh, you need to change these things. Then you go to the next one and you change these things. I've spent a long time trying to migrate each year to the new stable Django; it's a bit of a pain in the ass. It's quite sad, though. It's really weird. Exactly, it's web people. Just run Django from backports; it's running fine for me. Yeah, just use an old one. Oh my god. That's way easier. Not really, if you want to use recent features.

So I think for ud the first step is this. The way userdir-ldap works is that it generates a lot of small files and then pushes those small files to each machine. Each small file is specific to each machine, because the sudoers file is different for each machine, for example.
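The per-machine file generation just described can be sketched as follows. This is an invented illustration of the flow (one directory of user data in, one small file per host out), not userdir-ldap's actual code; the real tool reads LDAP and emits many more file types than this.

```python
# Hypothetical sketch of the userdir-ldap-style flow described above:
# from one set of user data, generate a small per-host artifact (here,
# a sudoers fragment) for each machine.  Data and format are invented.

users = {
    "alice": {"sudo_hosts": ["host1", "host2"]},
    "bob":   {"sudo_hosts": ["host2"]},
}

def sudoers_for_host(host, users):
    lines = [f"{user} ALL=(ALL) ALL"
             for user, attrs in sorted(users.items())
             if host in attrs["sudo_hosts"]]
    return "\n".join(lines) + "\n"

def generate_all(hosts, users):
    """One file's worth of content per machine, as the transcript describes."""
    return {host: sudoers_for_host(host, users) for host in hosts}
```

This also makes the later discussion concrete: the duplicate information across the many generated files comes from projecting the same user data into each host-specific artifact.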
And this is a very synchronous process: all machines get updated at once, and you make another change and all machines get updated at once. And there's no strategy really around all of these many, many files, and there's a lot of duplicate information in those files, just because you happen to want to stick stuff in sudoers versus somewhere else. So step one is to reproduce the functionality we have today in Python and Django, still producing these many files, but having an abstraction layer over LDAP, because there's a Django module that exposes LDAP as if it were a standard ORM. And so at that point you can use any of the ORM features that Django provides, even though we're storing in LDAP. And then we have choices we can make. Do we still want to store all this stuff in LDAP, or do we want to store some of it in a database? Do we want to produce these 20 files per host, or do we want to maybe reduce the number of files per host but do some processing on each host? Do we want to do it by leveraging some messaging, or just for triggering? Yeah. What we currently do is, once we see that new entries have been made in LDAP, we use RabbitMQ to trigger the hosts to connect to db.debian.org to fetch the new content. But I'm not particularly wedded to the files, or to Django. It's quite clear, though, that there's going to be a mismatch between the way they want to change things and the way we change them; it's going to be painful, but I don't know a better answer.

Any other questions? Alright, then we'll give 15 minutes back to you. Thank you for attending.