I guess we're ready. So welcome to an apt talk. You might have wondered about the name, because it's not very clever, but there's obviously a story: apt is an adjective, but APT is also short for Advanced Package Tool. And how could you call it? You could spell it A-P-T, or you could just pronounce it as the word apt. But it's also confusing because there's also a command called apt now, which you really should try, because it's a bit easier and it has nice progress reporting and stuff. A lot of people don't know about it, but try it out. It's nice. And this is an apt talk because it's about apt, and apt is part of Debian, so hence the name, an apt talk.

But let's talk about more important stuff. And the first one is actually BSD porting, because I figured portability is a great way to ensure that stuff works and to find bugs in software. And we use continuous integration for testing apt regularly. In particular, we use Travis, which is a free service available on GitHub, and it only supports Windows, Linux, and macOS. And I don't really have a Mac. I initially had a port of apt to FreeBSD (it was someone's request at some point), but it stopped working. But I could get it working on the Mac with help on the IRC channel, so I just provided some patches, and other users tested it and compiled it and reported back: oh, the test suite now runs. That was nice, but it's not really done yet, and the Mac is missing a lot of functions, especially the ones in POSIX 2008. So the whole *at family is mostly missing. And we're using more functions now, so it requires even more than it used to. But that's not really an interesting thing to talk about.

A much more interesting topic is unattended upgrading. And yes, we did a lot of work on that. It started with a cron job: before 1.2.10, it was a cron job that ran daily, and a daily cron job runs, I think, between 6 and 7 in the morning.
And we had a random sleep of 30 minutes, which helped to distribute the load on the mirrors. You don't want all machines updating at the same time, because then the mirror just explodes. But that was not enough for everyone, especially some Ubuntu cloud mirrors, where it overloaded. So what we did in 1.2.10 was we switched from a cron job to a systemd timer. And the systemd timer ran twice a day, at 6 in the morning and 6 in the evening, but we added a random delay of 12 hours. So basically it ran at any time during the day, but it ran twice during a day. And we had a check inside the script which made sure that it only updated if 24 hours had passed. And the whole thing was persistent, so the timer was restarted at boot and at resume if it should have run while the machine was off. And we still have a cron job; there's still a compatibility wrapper for systems that do not use systemd, because obviously we don't want to break them.

And there are a few problems with this. First of all, it runs at any time during the day. And that's fine for downloading, really, but not for upgrading, because, well, if your database upgrades during the day and it stops accepting connections, then your site breaks down for some time. Or if the database gets corrupted or something, it's just completely broken until you fix it. And you don't want to have that during business hours, where you actually rely on your service being available. Another problem is that the service, starting at boot and resume, doesn't wait for the network, because we did not have a dependency on networking.

So we improved that a bit in 1.4.1, 1.4.2, 1.4.3, 1.5, 1.6. So you see, it took quite a few iterations to get this right. Basically, we broke the timer into two timers: one did the updating, and the other did upgrading and cleanup of lists and packages and stuff. And we made the update job run throughout the day, randomly, as the job before.
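As a rough sketch, the timer setup described here maps onto systemd timer directives like this. This is illustrative, not necessarily the exact unit apt ships; the description and values are assumptions based on what was just said:

```ini
# Illustrative sketch of an apt-daily-style timer unit.
[Unit]
Description=Daily apt activities (illustrative sketch)

[Timer]
# Fire twice a day, morning and evening...
OnCalendar=*-*-* 6,18:00
# ...plus a random delay of up to 12 hours, to spread load on mirrors.
RandomizedDelaySec=12h
# Catch up at boot or resume if a run was missed while powered off.
Persistent=true

[Install]
WantedBy=timers.target
```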
But the upgrade job now was running at 6 to 7 a.m. Actually, it's 6 a.m. plus or minus one hour. And that way, it was reliable again: the upgrades always happen at the same time. But there's a problem with that as well, because the update is distributed over 24 hours, so if you have multiple machines, the upgrade could install different upgrades on different machines. And that's not entirely optimal, but I think it's the best we can get.

And we made the timer depend on network-online.target. network-online.target is a target that basically depends on the various wait-online helpers of network managing services, like NetworkManager-wait-online and systemd-networkd-wait-online. It is started at boot and waits until the network is available. But that doesn't really work, because it only helps at boot. It doesn't work at resume, because, as I said, the target only starts when you're booting, and then it is started, the dependency is satisfied, and it won't wait anymore at resume. So what we can do about this is build a script of our own, which basically just checks which network managers are running and then calls the wait-online helpers of those network managers. But that's not done yet, and it will come later. Another alternative we had was to build our own online-waiting helper that just tries hosts in the sources.list file until it connects to one, and tries that for 30 minutes or so. But that's even more complicated, so I guess we'll start with the wait-online helpers running in the script, to at least get it right a bit.

Another thing, and this is very recent, is HTTPS support, which I rewrote. So in 2006, we had a curl-based HTTPS method, and this was completely separate from the HTTP method. It had no pipelining support. It had no support for using HTTPS proxies with HTTP requests. I don't think HTTPS proxies are very common, but I guess you should support them.
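Going back to the wait-online script idea for a moment: a minimal sketch of that dispatcher could look like this. The helper names are the real NetworkManager and systemd-networkd ones, but the wrapper itself is my own sketch of the described approach, not apt's actual script:

```shell
#!/bin/sh
# Sketch: wait for the network by asking whichever network manager
# is actually running to do the waiting (illustrative, not apt code).

wait_for_network() {
    if systemctl --quiet is-active NetworkManager.service 2>/dev/null; then
        # NetworkManager's wait-online helper
        nm-online --quiet --timeout=30
    elif systemctl --quiet is-active systemd-networkd.service 2>/dev/null; then
        # systemd-networkd's wait-online helper
        /lib/systemd/systemd-networkd-wait-online --timeout=30
    else
        # No known network manager running: nothing sensible to wait for.
        return 0
    fi
}
```

The point of delegating is that each network manager already knows what "online" means for its own configuration, so the script doesn't have to guess.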
It's getting even more important these days, I think. And, well, that was not optimal, so this year, I think last month or so, I rewrote HTTPS support, and I basically just added a compatibility layer to the HTTP method. So now HTTPS support is in the HTTP method, it's installed by default, and it's basically just one tiny wrapper around a socket. So it's completely transparent whether you use HTTPS or HTTP: it's the same code, apart from calling a few TLS functions. If something broke, please tell us, because, obviously, we can't check all configurations.

Oops. Yeah, stop using apt-key. Why? Of course, in stretch, we deprecated apt-key, basically because we did not want to have GnuPG installed all the time on small systems, because it has a lot of dependencies, like the GnuPG agent and stuff. But a lot of features in apt-key require it: it has a list command, which shows you keys and keyrings, and this only works if you have GnuPG installed. And in stretch, we demoted the dependency on GnuPG to a Recommends, which normally means it's installed by default, but that actually doesn't work, because apt is installed by debootstrap, and that doesn't install Recommends. So you might not have GnuPG installed on current systems. And now it's a Suggests, so it's not going to be installed in even more cases, I think.

And previously, people installed new keys basically by using apt-key add, or apt-key adv, the advanced mode, receiving keys from key servers. That is, or was, a bit dangerous, because gpg didn't really check if the received key matched the key ID you requested. So what you should do instead is drop a keyring as a .gpg file into the trusted.gpg.d directory, which you can do since squeeze. Or, alternatively, if you only need to support stretch and newer versions, you can use ASCII-armored files as well; just name them .asc, and it will work. And you can use gpg --export to generate the files.
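Concretely, the flow for installing a repository key might look roughly like this. The key ID and file names here are made up for illustration:

```console
# Binary keyring, works since squeeze:
$ gpg --export DEADBEEFDEADBEEF | sudo tee /etc/apt/trusted.gpg.d/example-archive.gpg >/dev/null

# ASCII-armored export, works on stretch and newer (note the .asc name):
$ gpg --armor --export DEADBEEFDEADBEEF | sudo tee /etc/apt/trusted.gpg.d/example-archive.asc >/dev/null
```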
And previously, people used the gpg keyring directly, but gpg switched the format, in GnuPG 2.1 I think, to a keybox format. And that's not compatible, so you get completely weird errors that it can't find keys, and everything just breaks down, which is not really optimal.

You asked why I didn't make apt-key drop the file? And, well, we should do that eventually, but we can't identify keyrings, because the old keyrings don't have a magic header. So you can't really identify whether they are GPG keyrings or not. But we can at least check if the file is a keybox and then drop the keybox.

OK, why? Why? Well, does that not work? Maybe it's the binary thing. Yes, OK. So I know now it's too late. But what I'm trying to say is, it would have been nice for everything that was using apt-key if apt-key had been changed so that, now that you have to put the file in the directory, apt-key does that for you, so that you didn't have to change the things that were using apt-key. Does that make sense? I know now it's too late; now we have already changed all our tools. But I'm asking, why wasn't this considered?

Well, I think one of the problems is: how do you name the file you're putting the key in? If you don't have GPG, you can't know the key ID. And if you don't have a name, you could pick a random name or a UUID or a hash or something, but that's not really optimal. And you might get duplicate keys.

So don't you need to validate the key? How do you do that if GPG is not installed? We don't validate the key at all. We just concatenate the files together, and then we run gpgv on it when we're verifying something: a small version of GPG that can just verify stuff.

Let's talk about something else. In 1.4, obviously the most important feature is that moo is now reproducible. It can use the SOURCE_DATE_EPOCH thing. And of course, SHA1 is now completely untrusted. But if you need to use SHA1 a bit... so, I think we made... I am not entirely sure which part of SHA1 we made untrusted.
But there's an option for you to make it weak again. Then it only warns that there's a SHA1 signature and no SHA2 signature, instead of erroring out. So you can revert to the previous behavior.

And in 1.5, we introduced a new feature that checks if values in release files change, like the codename or the other fields related to the release. So if the release changed, say you install stable and you name your sources.list entry stable, and there's a new stable now, it will ask you: hey, do you want to upgrade to this new stable, basically? And we also documented the auth.conf format, which is basically netrc for apt. I don't know if it's actually released yet, but if not, it will happen later today. And I think that's it. If there are any more questions, please ask.

Thank you for your talk and your work on apt. It's really appreciated. I especially appreciate the switch from apt-get to plain apt. It's just amazing to save four keystrokes on a command that I use all the time like that. So thank you very much. Regarding that, one problem I have now is that I am constantly typing things like apt policy and apt whatever-else that fail, because I need to use apt-cache or apt-get. There are still some things that are not in the plain apt command.

I think most of these should actually work at the moment. So apt policy works, for example.

OK. So if I find one like that that annoys me, I can file a bug against it. Yeah.

Hello, hello. Is this working? Hello. Yeah. Hello, Julian. Thank you for the talk. One problem I have is, when I add a multi-arch foreign architecture, then with the arch:all packages you get different versions, and then there are some uninstallability issues. Are you thinking about, or working on, using the arch:all Packages file that ftpmaster is providing, and trying to not use the arch:all versions from the binary architecture?

I'm not sure if we want to do that, but we probably could do that. OK, so. But it's not anything I really work on, so.
OK, it's not on the roadmap then. It's not on my roadmap. OK.

So if that is not on your roadmap, what is on the roadmap? What are the next things that we can see or wait for in APT, or apt?

That's a good question. I think more sandboxing features for the download methods, and maybe reworked sandboxing. So currently, you need to make files available to the _apt user, like your netrc file, which is actually fixed now. And keys: if you use private keys with the HTTPS support, then you need to make those readable for the _apt user. And it would be nicer to just open the files as root and then pass them to the protected method, so you don't need to make them available to the _apt user in general. That improves things a bit, I think. There's probably other stuff on the roadmap, but I can't remember all of it.

I forgot what my real question was. So, I use unattended upgrades regularly on most servers I deploy, and I think it's great that it's there, and it's working very well in general, especially in the later versions. My problem is, when I need to upgrade 250 servers between major releases, I end up doing constantly the same things over and over again. I need to use all sorts of tricks and tools and various devices to automate those things. And I was wondering if people on the Debian side were working on things like what Ubuntu is doing with the do-release-upgrade script and things like that, and if we can work together on a solution that would allow some automation for major releases, and if that was on the roadmap for you.

I think it would be nice to have something like that, but we don't have a plan for that currently. But I also had the idea of having some weak conflicts or something, so you could just do the removals of packages, say this package is obsolete, and remove it if you want to. That would be nice as a first step, maybe.
You could have a metapackage, release-upgrade or something, that you install, and then it just automatically removes all the old stuff. OK, I think we're done. Thanks for coming.