Hi, everybody, and welcome to our cloud images BoF with Thomas Goirand. Thomas is a Debian developer who works on the OpenStack team and on Python inside Debian. Please enjoy.

Hello, everyone. Let me first explain what the cloud team is about. Our role in Debian is to build the cloud images that we ship to our users so that they can run Debian in the cloud. This includes publishing these images on some of the major cloud providers, as well as providing them as plain files so that you can use them directly. We have a bunch of topics on the Etherpad. First, a little retrospective of what happened last year, starting with the accounts. As the cloud team, we hold accounts so that we can publish images on the biggest cloud providers, namely Amazon Web Services, Microsoft Azure, and Google Compute Engine. Ross, can you tell us a bit about what happened with the accounts over this last year?

We've made some progress in moving the account ownership under a better structure. We want SPI to hold those accounts so that they're not attached to an individual. The last I heard on AWS is that we've moved a lot of the setup into the SPI-owned account. But, Noah, I think the last I heard from you and David was that some of the other setup for the vanity configuration was still pending internal Amazon changes. Is that correct?

The big issue is the additional accounts. The ones that we're using right now for publishing our daily images and our release images are set up and generally working the way we expect. We have a number of accounts that we've already created, and we want them to be associated with SPI so that we can use them for other things. Right now, in the very old non-SPI-owned account, we have a bunch of things running related to ci.debian.org and that sort of stuff. It would be great to get those out of there.
That old account also hosts the cdn-aws.deb.debian.org CloudFront mirror and that sort of stuff that's run by the mirror team and by, I think, James Bromberger, who is still involved in it; it's his account. It'd be great to move those things to their own dedicated accounts. It would also be nice to come up with a way to use some account to grant access to AWS resources to Debian developers who want VM access for bringing up new services, or who just want a scratch VM with access to a boatload of compute resources for doing a build. Having access to that myself through my internal AWS account, I can tell you that things like testing changes to the kernel are a whole lot easier when you have 100 cores and super fast local NVMe disks and can do a full kernel build in four minutes. That's pretty cool. We'd use that sort of thing for a lot of stuff, and it would be nice to be able to make those resources available. Right now we can't do anything with that because it's blocked at the AWS level. The goal is to fix it, but it's not fixed yet. So feel free to nag David Duncan; I'll add his email to the Jitsi chat or something like that, and then everybody can nag him. That's the best way to get things done.

I think the key point here is going to be actually migrating out the CDN stuff and everything else that's managed from this account, because everything else can be quite easily scrapped; there's always somebody who can do something about it. Those two things are probably the most important parts. Yep. And for the other providers, I don't believe the state has changed.
It's been a while since I looked at the GCP status, but I believe on GCP there is already an organization owned by SPI that hosts the Debian cloud experiments project, and I actually don't know what else is in there. Does anybody know, for folks watching, what the rest of the resources in there are, or have more up-to-date info on the status of the GCP ownership?

To be fair, I probably don't have up-to-date information, but last time I checked, the account I was playing with and was able to access wasn't under SPI, so I don't know how it's been done. Maybe Waldi has more information about it.

I have some information about GCP: nothing is done. We have the possibility to assign accounts to SPI; we use that for Salsa, but we don't have anything for cloud team stuff, image stuff. So probably this is something we have to talk with Zach about, so that we've got some resources for our own use case, right? Yeah. I think I have a better contact at DefRail, the team that manages the funds. Let's see. I haven't talked about generic money for stuff like the cloud team, and they need a better way of dealing with sponsorships for projects like Debian.

And just for the audience, the account thing with Microsoft was dealt with like a year ago or something, right? Microsoft publishing was fixed before the Buster release. We have a first-party publishing account under the name Debian, assigned to SPI; SPI is a Microsoft partner. Yeah, and we use that. I am surprised that the entire room didn't just burst out laughing at the sound of SPI being a Microsoft partner; that's one of the weirdest things I've heard in a long time. Yeah, me too, but I think everyone is muted, so they couldn't. And many of us met at Microsoft not that long ago, so perhaps I'm just used to the strangeness. Great, I think that's it for the accounts retro.
Before we move on to the next item, is there anything else on that that we should discuss?

We don't have any money or accounts on Azure for cloud stuff. I asked them a while ago but never got through. It works better than Google, but they still aren't able to create a billing account that can't be overdrawn, which means we basically don't have the ability to test the images that we publish. And I think Noah's gone. Yeah, I think we lost him right in the middle of that, and we lost the one who was answering the question. Welcome back. Yeah, I don't know what happened there. I don't know if anybody caught the last thing I asked about running and accessing resources in Azure. Does that mean we don't have the ability to run Azure instances so we can test our changes and that sort of thing, or what's the status there?

We have a sponsored subscription assigned to credativ, a.k.a. the original one, which we could use, but we don't have any setup to actually do it. So it's a similar situation with GCP, right? We've got this project which Google created for us and for themselves to experiment with. In theory we could use it, but it's not under our control per se, so we cannot ensure what was done in there; it really depends on what we do. Yeah.

Shall we move on to the cloud-init update? Maybe Noah, you can talk about it. Yeah. This is something that we've been wanting to do for a very long time: updating some of the cloud-specific packages in a stable release. Largely this is because the cloud platforms are fairly fast moving, and we want to make sure that our users who are running on these clouds have access to the latest functionality that they expect in the cloud. Excuse me. So for those watching, what happened is that in the last point release of Buster, we updated cloud-init from version 18 to version 20.2. Okay. Yeah.
I'm getting to that point; I'm giving some background. cloud-init was the first cloud-focused package where we actually decided to update a stable release. The idea was that the new cloud-init had support for a number of new features, and it also included a bunch of bug fixes that would normally have been considered for a stable update anyway, but the new features were the bigger consideration. Among them, AWS has a new version of the instance metadata service, the thing that listens on the link-local 169.254 address and serves a bunch of information about the instance over an HTTP interface. The new version adds a few security features, and we wanted to make sure that was available to our users. So after doing some testing, we worked with the stable release team to get the new version 20.2 accepted into Buster 10.5.

That mostly went well, in that the common use cases are just fine. It did have a couple of regressions that impacted some people, though. The biggest of those was in how network configuration was generated, and in particular what the files were named. The issue we really ran into, and what this highlighted for me, was that Debian in general is kind of inconsistent about what we put in the /etc/network/interfaces file in terms of how we source the interfaces.d directory and the files in there. There was some inconsistency between what happens with debian-installer, what happens with the ifupdown package, and what happens with the cloud images this team publishes. And there were certain cases where people who had an existing configuration and were building their own images didn't get files created that were actually sourced by ifupdown when bringing up the interface. So that was obviously confusing for people.
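As an aside for readers, the sourcing inconsistency described above comes down to roughly these two styles of /etc/network/interfaces stanza. This is a simplified sketch, not the exact files shipped by any of the tools mentioned:

```
# Style A: glob-source every file in the directory
source /etc/network/interfaces.d/*

# Style B: source a directory; ifupdown only reads files whose names
# consist entirely of letters, digits, underscores, and hyphens
source-directory /etc/network/interfaces.d
```

A fragment named, say, 50-cloud-init.cfg is picked up by style A but silently ignored by style B because of the dot in the name, which is the kind of mismatch that can leave a generated configuration unused.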
The one provider that contacted us was Hetzner Cloud. They had to make some changes when they adopted Buster 10.5 to adapt to the cloud-init change. They were surprised by it; they managed to fix it on their side, but it's not the sort of change they normally expect to see in a stable release. And so that was unfortunate. I think that's the major issue with cloud-init. I don't think we've heard any other real negative feedback about it, but it doesn't look good when the very first cloud-related stable feature change that we've made breaks people. I don't know what we need to do to prevent this sort of thing in the future. I think the cloud-init change is ultimately the right one to have made; it's just a surprise within a stable point release.

I hope that Hetzner learned that they should use our images. Maybe. At least one of the things that we did was put out a call for testing pretty early in this process, and we did try to make it visible that we were going to be making this change in a stable release, and to give people the opportunity to test before we did it. So hopefully people are paying attention and looking at changes that are coming up. Maybe we should do better about socializing this sort of stuff and really talking about it before we make the change. But I don't want to put the blame on them, because this isn't the sort of change that normally comes out in a Debian stable update, so it is a bit of a surprise. I think maybe they could have done things differently, but we should also have tried to understand a little better how we could have broken our users. We did a pretty good job of testing the environments that we all run in, like AWS and Azure, and I think at least the common OpenStack deployments using our images were tested fairly well.
That's the vast majority of the cloud-init usage, I think. So it's great that all the common use cases were supported and unchanged, but it's still unfortunate to break anybody; we're trying to avoid that in all cases.

I'm not negating anything you said, Noah. For me it's just, well, maybe I shouldn't say a niche case, but that's what's happening, right? We are not capable of testing on everything, because of manpower and because of limited access to the platforms. So unless we get engagement from the cloud providers, ideally, we are not capable of ensuring that everything works everywhere.

Yeah, and one thing that is probably worth noting is that this didn't actually break any existing instances anywhere, as far as we know. Hetzner publish their own Debian images and were constructing the configuration in a particular way, and that broke when they tried to publish 10.5 images. It didn't break any existing instances or anything along those lines, so I think the impact was relatively small. They were able to adjust their configuration and publish images that worked just fine with the newer cloud-init, and the level of effort involved in fixing the issue on their side was not really high.

All right, shall we move on to the next topic, the to-do list for bullseye? Yeah, because we've got like 20 minutes remaining. Shall we talk directly about systemd-networkd? Who wants to start?

So the discussion is: shall we keep going with ifupdown, or do we as a team believe we should move to networking handled by systemd? I think there are a number of unanswered questions about how systemd-networkd works. We don't know if we can replicate the behavior that we currently have in our images today using networkd.
Last time I looked at it, it didn't handle the case that you see today in AWS VPCs, and I suspect in Azure as well, which is that IPv4 is always on and IPv6 is optional. There's really no way to know ahead of time whether you're running dual stack or v4 only, and I never found a way to make that work in networkd. But this was a while ago, and maybe things have changed since then. We were able to make it work with ifupdown and dhclient, but it's a little bit of a hack. And it gets even hackier if you look at the merge request that I opened on Salsa yesterday or so for adding policy routing for these environments; it hooks into dhclient's enter hooks and exit hooks and does a bunch of configuration of routing rules and stuff. ifupdown and dhclient are super flexible and let you do a bunch of stuff there; I don't know about doing all of that same stuff in networkd. So we should do a deep investigation of that, try to prototype it, and see if we can do it. Without doing that, we really can't answer the question of whether we should move to networkd or not. It may be possible, maybe really difficult, maybe easy; I don't know. We should figure it out.

There's also the issue of cloud-init not really working with networkd and not configuring any of this; it expects ifupdown. Which, by the way, is orphaned, and that's a little bit scary to me. What, you said ifupdown is orphaned? Yeah. Oh, scary indeed. And it's still the default in the main image, right? Yeah, so it's going to be fun. It's still the default in Debian; debian-installer uses it. I've been looking at that WNPP bug for a while; it's probably still open in a browser tab here. I'll help if anybody wants to adopt ifupdown. It should not be orphaned; this is kind of an important package.
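Stepping back to the networkd question for a moment: the setup under discussion would look something like the following .network unit. This is a sketch (the file name and interface match are illustrative), and whether it really covers the v4-only versus dual-stack ambiguity is exactly the open question raised above:

```
# /etc/systemd/network/99-dhcp.network (illustrative name)
[Match]
Name=en*

[Network]
# IPv4 is always configured via DHCP
DHCP=ipv4
# Accept router advertisements; if the RA carries the "managed" flag,
# networkd starts its DHCPv6 client automatically
IPv6AcceptRA=yes
```

This is the mechanism behind the managed-config flag observation that comes up later in the discussion.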
And maybe, since we have some feature requests that we'd like to see implemented that would make this optional IPv6 configuration easier to work with in ifupdown, we could implement those if we were maintaining that package. I don't know, but I definitely don't want to do it myself.

I played a little bit with networkd and the optional IPv6. I think it actually worked. You can configure it to try autoconfiguration, and at least on AWS you get, I think, a managed-config flag back, and that will ask it to use the DHCPv6 client. I think that actually works. Okay. Do you have any thoughts about how the policy routing would get set up for secondary interfaces? Not yet, I have to look into it, because policy routing is crazy shit. I was fascinated to hear that's a thing in the cloud. It could be that we could pull some tricks, Noah, with additional drop-in units that we run after; I don't know if network units support any of the pre/post script features, or if there are limits there.

In general, I'm using systemd-networkd on a number of systems and it works very well, but in all of those environments I know ahead of time that it's dual stack, so I don't have the central problem that you've mentioned. I cannot add anything to this; I'm not using networkd at all anywhere. And of course dhclient and ifupdown do work. So maybe we should decide if there's actually any benefit to making the switch right now. Does it make life easier for anybody if we actually are running networkd? On some level, if we don't need to change anything, we probably shouldn't, but if there's no future in ifupdown, then we should be working harder on that. I would expect that if there is no maintainer for the package in Debian itself, at a certain point some people will start pushing networkd as the default, right?
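For those unfamiliar, the policy routing being discussed for secondary interfaces usually means giving each interface its own routing table, so that traffic sourced from that interface's address leaves through it rather than through the primary default route. A rough illustration (the addresses, table number, and interface name are made up; this is not the actual merge request's logic):

```
# Table 100 sends traffic out via the secondary interface's gateway
ip route add default via 10.0.1.1 dev eth1 table 100
# Route lookups for traffic sourced from eth1's address use table 100
ip rule add from 10.0.1.25/32 table 100
```

With ifupdown, commands like these are typically run from dhclient hook scripts, which is the hackiness referred to above.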
And then we will probably be put in a situation where we have to make things work with networkd, or we have to maintain ifupdown and everything around it on our own. Yeah, but that hasn't happened yet. There doesn't seem to be any move towards networkd as the default for Debian more broadly, at least that I've seen. As far as I know, that's not really possible: last I looked, networkd still won't do Wi-Fi configuration, so it would be a pretty weak default for lots of people.

I guess one other question about moving to networkd: what happens to other things in the dhclient hooks? If we look into networkd, do we also have to look into using systemd for time sync and other things that use the dhclient hooks? I really don't want to move to anything other than chrony; chrony is super nice. Yeah, that's a good question, Ross. I don't see any reason why networkd would have to bring in timesyncd, but I don't know; we should make sure we know that for sure.

All right, so the conclusion of the discussion is: if somebody wants to try, they can experiment and see how it goes, and that's about what we know right now. Right? Yep.

Okay, so what's next on the agenda? I don't think we need to discuss the script for updating here; we all know we need to work it out, right? There's no controversy, just somebody needs to do it. What is the script that you're actually describing, and what's the utility of it? There's a check-openstack item on the Etherpad, right? That script is used to compare what was in the old OpenStack image with what is in the security archive or the point release. That way we know whether an image needs to be updated or not.

Yeah, I'm not convinced that we need that. For OpenStack, we have our daily images that we build on Salsa, and then we have this notion of our release images.
The release happens manually and is based on a daily build. That's a decision we make when a point release is issued, or when a kernel change or some core package change like that is made. A release, whether it's for a point release or for a package change, makes new images available on all of the commercial clouds as well as for OpenStack. So I don't think we need to actually dig in at the package level into the generated images to make a decision about what to do with a new image.

The idea is to have a new image each time there is a point release, and each time there's a package update in security, then obviously you need a new image, right? Because you don't want the security hole. Not necessarily, right? If it's a kernel change or something that requires a reboot in order to take effect, then definitely. But cloud-init has the ability to apply security updates at boot, so you don't necessarily have to bundle every security update into a published image. For small things, like a recent update to a libjson-something-or-other package that I think was in the default image, I didn't see any point in publishing a new image to pick that package change up and initiating that amount of churn in all of our user environments, when cloud-init can install it for you and you can move on with your life. And I think we also enabled unattended-upgrades on all our images, so this will be handled by that package. Yep, yep.

The one use that I do have for a script that compares the contents of the images: it really would be nice, when we do publish new images, to have a very clear list of what is new in that image. Right now, when I've updated the release image on AWS, I go and edit the Buster wiki page in the Cloud namespace on wiki.debian.org.
And I mention, for instance, that we released a new AMI to AWS and that it includes a new kernel for some DSA or something like that. Not having to go and look at the security mailing list archive and all that sort of stuff, get the package details, and manually edit them into the wiki would certainly be nice. So having something that does what the script was meant to do, gives us the result, and then we decide whether or not to publish a new image: that would be the idea, right? Yeah, I think that's pretty reasonable. But can't we just diff the manifests? We have the JSON files where all the packages with their version numbers are listed, so seeing what has changed should be much easier nowadays. Yeah, I think so; it's just a case of someone having to implement it. Well, that's good: it sounds like we agree it would be useful to have at least the notifications so that we could make the judgment. I kind of started the script and never finished; I'm sorry.

So let's move on to the next item: what's that exclusive thing, or included? Ah, yeah, about containers and such. So, Reinhard, maybe you can... We have a question from someone; could I share the feedback with you? Sure. Okay: is there scope for talking to other distros, for instance Ubuntu, to get common viewpoints on systemd, networkd and so on? I haven't tried to talk to anybody about networkd, but it is something that we've done occasionally. I know Noah talked to folks about some device naming issues that came up recently. After the I/O scheduler back and forth, I did a survey of other distros and how they're handling those scheduler settings. I haven't posted it; I just did that last night. Sorry, guys. So it is something we do occasionally, but we might want to ask around about networkd. I don't know if anybody is using it on other distros. No, I don't know either. I know that on AWS, Amazon Linux does not; they use dhclient.
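Picking up the manifest-diff suggestion from a moment ago: since each build publishes a JSON manifest of package names and versions, the comparison could be as small as the following sketch. The flat name-to-version manifest layout and the example package versions are assumptions for illustration, not the team's actual file format:

```python
import json


def load_manifest(path):
    """Load a manifest, assumed to be a flat JSON object of package -> version."""
    with open(path) as f:
        return json.load(f)


def diff_manifests(old, new):
    """Compare two {package: version} dicts; return what was added,
    removed, and upgraded between an old and a new image."""
    added = {p: new[p] for p in new.keys() - old.keys()}
    removed = {p: old[p] for p in old.keys() - new.keys()}
    changed = {p: (old[p], new[p]) for p in old.keys() & new.keys()
               if old[p] != new[p]}
    return added, removed, changed


if __name__ == "__main__":
    # Hypothetical manifests for two successive image builds
    old = {"linux-image-amd64": "4.19+105+deb10u5",
           "libjson-c3": "0.12.1+deb10u1"}
    new = {"linux-image-amd64": "4.19+105+deb10u6",
           "libjson-c3": "0.12.1+deb10u1",
           "chrony": "3.4-4+deb10u1"}
    added, removed, changed = diff_manifests(old, new)
    print("added:", sorted(added))      # ['chrony']
    print("changed:", sorted(changed))  # ['linux-image-amd64']
```

The output of such a diff could feed the release notifications and the wiki notes discussed above, leaving the publish-or-not decision to a human.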
So Ubuntu images are using their fancy thing that renders configuration for multiple backends, right? How is it called? Netplan, it's called. Yes, thank you.

Right, so, Reinhard, you joined because you are interested in having support for container images made by the cloud image team. Yeah, kind of. I joined because I recently got interested in packaging Docker alternatives. For the sake of this conversation it is Docker, while I'm more interested in Podman and the related tools. I used to work a couple of years back with Thomas on FAI, and I'm familiar with that tool set, and I was very excited to learn that you're using it for building the cloud images. I wanted to explore possibilities and ask for your opinion: can we use these images as a minimal baseline for container images? I mean, the conversation that we just had was a lot about network configuration, which is something that wouldn't be required in container images. Nevertheless, just from the naming, it seems to me that something like this might be in the cloud team's scope. So basically my question is: where do you see the biggest challenges in being able to adopt base images, or to provide some officially sanctioned base images, for use in Docker and Podman and other OCI-compatible runtimes? Then we'd have Debian in two forms, in VM form and in container form.

At a high level, the Debian images that are available on Docker Hub are not built by this team; they're built by other folks, and they've not wanted to adopt our tooling. In the opinion of a couple of people on the team, at least, that's a reasonable choice and not necessarily a problem. We've gotten a couple of requests recently about extending the cloud build tooling to cover other image generation; there was a thread on debian-devel about this a little while ago.
My opinion is that that could be reasonable: if we're providing a good platform for building images, then if people want to use it for other things, that's great. But so far, the only use case I've heard is "shouldn't we build Docker images out of the same thing as the cloud images?", and I think the answer there is that I just don't see any reason why the current setup is a problem. Those folks are doing their work in a different way; they use different tools for good reasons, and they would prefer to keep it that way. So I think the answer is yes, but: what's the use case, who's going to do the work, and do they want to use our tools?

I think one big sticking point that isn't going away is that people would like there to be official Debian Docker images, and the current Docker base image maintainers don't seem to be interested in meeting the requirement that official images be built on Debian-controlled, DSA-controlled infrastructure (I don't know if it's specifically DSA, but Debian-owned infrastructure). And so the images that are currently being published to Docker Hub are not eligible, as is, to be officially Debian. I don't care about that, but it seems that other people do. And unless the current maintainers are willing to change what they are doing, the only way for official Debian images to exist is for somebody else to do these builds. I'm happy with the images that exist today, using debuerreotype and the tooling that they have, but if people really aren't, then there's something to consider there.

One thing I just want to add for folks who have not looked into the tooling: those images are built from data on snapshot.d.o, and my understanding is that they are reliably bit-for-bit reproducible.
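Since the images are claimed to be bit-for-bit reproducible, anyone can check that two independently built artifacts match simply by comparing digests. A minimal sketch (the file paths are placeholders):

```python
import hashlib


def sha256sum(path, bufsize=1 << 20):
    """Stream a file through SHA-256 and return the hex digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(bufsize):
            h.update(chunk)
    return h.hexdigest()


def same_build(path_a, path_b):
    """True when two build artifacts are bit-for-bit identical."""
    return sha256sum(path_a) == sha256sum(path_b)
```

If the digests differ, either the build was not reproducible or the inputs taken from the snapshot archive differed; if they match, it makes little difference whose hardware produced the artifact, which is the point being made next.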
So here's the thing: the letter of the law requires official Debian images to be built on Debian-owned hardware, but it's very difficult in this case to figure out what the benefit of that would be. You will literally get the same image from anywhere. So it's a hard sell to say to somebody: take this working process that millions of people are using productively, make a bunch of changes to it, and you get no additional benefit except a sort of rubber stamp of approval. At least to me, that's a hard sell.

I think you summarized the situation very well. The seal of approval is one way to look at it. The other way to look at it is: look, we have a cryptographic key infrastructure where we can establish a chain of trust from the Debian FTP archive key, which is not the case for the Docker images that you get from Docker Hub. That was the thing that irked me a lot. But you're totally right: the technical infrastructure these images are built with, debuerreotype, and the fact that they're reproducible, that is a compelling argument. Serious engineering work went into that, and frankly, these images work quite well. Maybe this can be approached from a more political side, where some recommendation or blessing or statement from the cloud team could mitigate that concern. I don't know.

I question whether that statement needs to come from the cloud team, given that we aren't part of constructing Docker images, or container images, today. Who cares what our opinion is on the container images? We're the peanut gallery as far as anybody's concerned. That's more a question for trademark and the DPL, because they set the requirements; it's not in our hands. Yeah. So there is a bug open right now about this topic. I think it's assigned to the general pseudo-package; it was assigned to the cloud team for a while, but I kicked it over there because it just doesn't belong with us.
If anybody feels strongly and wants to weigh in on it, that bug is probably the right place to talk. Make sure that the DPL and the trademark team are involved and aware, and follow up with them if you want to pursue it along those lines.

I want to know if it would be interesting to have Docker images of the images that we already build, so we can test them in a container, or whether it's not good to have them. I mean, we don't need to publish to Docker Hub; we just need to have them on Salsa. I don't see any benefit in that, because the things that we want to test are not the things that are going to be tested in a container. The things that we want to test are: are we setting up our partitioning right? Are we setting up our kernel command line right? Is cloud-init doing the right thing? Is our network interface configuration doing the right thing? That's where we're making changes to Debian, and you're not testing the same thing if you test that in a container. You can make it work; you can run cloud-init in a container and give it a metadata endpoint to talk to and such. But if you really want to test the real experience, you should just be testing on an instance wherever you possibly can. So I think the better thing to do would be to focus on making that easier to do on all of the common cloud platforms: making it easier to test on Azure, on AWS, on GCE, and on at least one OpenStack provider.

So I have one question for Jeremy. Go ahead. Hang on, can I? Yes. So Jeremy, do you know what Zun is using for Docker images? How does it work? Hopefully my microphone's working. Zuul's Docker images are actually based on the Debian base image, and then they just add layers for the additional dependencies and services that are installed. Is that what you're asking? So does it use, like, Docker with cloud-init, or is there no cloud-init involved, right?
I couldn't make out the question. What was that? There is no cloud-init involved in those images, right? Oh, no, no. I mean, you're asking about the container images that are published with the Zuul software in them? Or about Zuul testing on container images, because of the layers? I want to, so for the other folks: Zun is Docker as a service in OpenStack, so that you can launch Docker instances. And I was wondering what it runs, and whether it would make sense to have some images from the cloud team for this kind of case. Sorry, we are out of time, zigo. Please. Okay, yeah, let's move that to IRC; I can talk to you on the cloud channel. Yeah, it looks like we have to wrap it up. Thanks, everybody.