Okay, so faster unpacking allows us to improve the speed of package builds, autopkgtest, image building, cloud instance provisioning and things like that, because unpack time dominates there. And we noticed that xz is quite slow: decompression takes about 40% of the unpack time when we benchmark with eatmydata, that is, without synchronized file system writes, and about 10% of the time without eatmydata, where syncing dominates. We can switch from xz to zstd, which cuts the user time in half, so the decompression cost that used to take 40% of the unpack time basically goes away. And if you use the highest compression level zstd has in normal mode, which is 19, the file size is only about 6% larger, which basically means that downloads will be 6% slower at the same connection speed. That is a problem, of course, if you have a slow connection, but for upgrades we can solve it using delta debs; there will be a talk on Friday about deltas. This feature is available in apt 1.6, but it also needs support in dpkg. We have added support for zstd compression to dpkg in Ubuntu 18.04; it is not in Debian yet, because we first want to try out whether it works. If it does, we will support it, and if not, we will just drop it eventually.

Another thing we have been working on, which some of you might have noticed, is locking. Sometimes, you know, when you run apt, you get a dpkg error in the middle of the transaction: the dpkg status database is locked by another process. The reason for that is that we currently have a race condition. When you run apt install, apt first acquires the dpkg lock, and then, before it executes dpkg, it has to release that lock; and after dpkg ends, it has to acquire the lock again. So in these two windows, before dpkg acquires the lock and after dpkg releases it but before we relock it, the lock is free, and another process can grab it and block us from running dpkg again. The solution for that is to introduce another lock file, the frontend lock. apt and dpkg both acquire the frontend lock normally, but if dpkg is run by apt, or by another frontend implementing this, the frontend tells dpkg not to acquire the frontend lock. So apt always keeps the frontend lock locked, and dpkg only takes its normal lock. If you then want to run dpkg in parallel, from another apt or on your own, dpkg will notice that the frontend lock is still held by apt and will not run. Which means that your apt process is safer now and should not be interrupted by concurrent dpkg runs. This is implemented in dpkg git, in the current master branch, and it should be released soon in dpkg 1.19.1; patches for apt, python-apt, PackageKit and other tools will be coming later. They are mostly ready, but still need some fine tuning.

Q: If I am not root, do I still need all that locking?
A: If you are not root, you do not lock, normally.
Q: Say I am running the upgrade in one window; in another window, can I run non-root queries without being locked?
A: You can run the non-root queries. Sometimes you get weird results because the state is inconsistent, but mostly it is going to be fine. That is basically the same as it is now.
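To make the locking scheme concrete, here is a minimal sketch of the frontend side in Python, assuming the dpkg 1.19.1 behavior where the frontend lock lives at /var/lib/dpkg/lock-frontend and dpkg skips taking it when DPKG_FRONTEND_LOCKED is set in the environment. This is an illustration, not apt's actual code, and it needs root to run:

```python
import fcntl
import os
import subprocess

def lock(path):
    # apt and dpkg use fcntl (POSIX) locks on these files.
    fd = os.open(path, os.O_RDWR | os.O_CREAT, 0o640)
    fcntl.lockf(fd, fcntl.LOCK_EX | fcntl.LOCK_NB)  # raises if already held
    return fd

# 1. The frontend takes the frontend lock and holds it for the
#    whole transaction ...
frontend = lock("/var/lib/dpkg/lock-frontend")

# 2. ... while the dpkg lock is taken and released around the actual
#    dpkg invocation (dpkg takes it itself while it runs).
inner = lock("/var/lib/dpkg/lock")
os.close(inner)  # closing the fd releases the fcntl lock

# 3. Tell dpkg not to take the frontend lock itself; we already hold it.
subprocess.run(["dpkg", "--configure", "-a"],
               env={**os.environ, "DPKG_FRONTEND_LOCKED": "true"},
               check=True)

# Any concurrent dpkg run without DPKG_FRONTEND_LOCKED set will now fail
# to acquire /var/lib/dpkg/lock-frontend and refuse to start.
os.close(frontend)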
And the next thing, which I talked about at the last DebConf, is seccomp sandboxing. We added seccomp sandboxing last year for our download methods, because there is a lot of dangerous stuff in there, like TLS and HTTP parsers. We are working with untrusted input, and we want to ensure that a method can do the least damage possible if it is compromised somehow. Seccomp allows us to restrict the syscalls that can be executed: we can trap other syscalls, or abort the program, or make them return an error. And this works fine for some programs, but if you use libc and you do networking, it gets a bit complicated, because there are NSS modules in libc, which allow you to have custom DNS-resolving backends, and they can use different syscalls. You could use, say, POSIX IPC in your NSS module for looking up DNS servers via a local IPC server, and then that syscall is blocked and resolving does not work. And some code in libc also calls unexpected syscalls sometimes. If we are not prepared for those, apt just crashes, basically: it currently traps the syscall, and then you cannot download anything, which obviously is a bit bad. So earlier this year I turned it off again, and I am trying to figure out how to turn it back on. Probably we are going to make blocked syscalls return a permission error instead of the trapping we do now, which means that if a syscall fails, the program can work around it and ignore the error; if it cannot access its files, it can just use defaults or something like that. That should make the whole sandboxing a bit more stable (a small sketch of the errno approach follows below). It is what I did originally, but it has the disadvantage that you cannot figure out which syscalls are being blocked, because you just get permission errors and not a straight crash that you can debug.

The next thing is related to the HTTP method and the other methods. You might have noticed, if you have used Google Cloud for example, which has IPv6 disabled by default, that apt used to connect to the addresses returned by the DNS resolver sequentially. It tried the first IPv6 address, the second IPv6 address and so on, before it tried the IPv4 addresses. And the timeout for each try was two minutes, so if you have four IPv6 addresses, it would take eight minutes to fall back to IPv4, which obviously is too slow to be usable. So some clouds and images overrode this and disabled IPv6 handling in apt. We solved this in apt 1.6 by switching to a new approach, Happy Eyeballs version 2. It is not entirely compliant with the specification, but it works quite well. What we do is, instead of trying to connect to each address one after the other, we start by reordering the list so that we alternate between the IPv6 and the IPv4 addresses: an IPv6 address, then an IPv4 address, then IPv6, then IPv4. And instead of doing it sequentially, we do it concurrently: we start with the first address, and then every 250 milliseconds we add another address, and we try them all in parallel using the select syscall. The first address that connects is used for the connection, and the other connection attempts are aborted by closing their file descriptors. And if none of them connects immediately during these 250-millisecond rounds, we get to a final wait-for-all phase, where we wait 30 seconds before timing out.
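Going back to the seccomp change described above: here is a small sketch of the difference between trapping and returning an error, using the Python bindings shipped with libseccomp. The blocked syscall here is just an example, not apt's actual filter list:

```python
import errno
import socket

import seccomp  # Python bindings from the libseccomp project

# Instead of trapping (SIGSYS, which crashes the process), make a
# blocked syscall fail with EPERM so the program can handle it.
f = seccomp.SyscallFilter(defaction=seccomp.ALLOW)
f.add_rule(seccomp.ERRNO(errno.EPERM), "socket")
f.load()

try:
    socket.socket(socket.AF_INET, socket.SOCK_STREAM)
except PermissionError as e:
    # With ERRNO instead of TRAP we get a catchable error here,
    # and the program can fall back to defaults or ignore it.
    print("socket() blocked:", e)
```

The downside mentioned in the talk is visible here too: a PermissionError looks like any other permission problem, so you lose the obvious crash that tells you which syscall was missing from the filter.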
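And here is a rough sketch of the connection strategy just described, a simplified Happy Eyeballs in Python; the real implementation in apt's methods is C++, so this only mirrors the timeouts and interleaving from the talk:

```python
import itertools
import select
import socket
import time

def interleave(v6, v4):
    # Alternate address families: v6, v4, v6, v4, ...
    return [a for pair in itertools.zip_longest(v6, v4) for a in pair if a]

def happy_connect(host, port, stagger=0.25, final_timeout=30.0):
    infos = socket.getaddrinfo(host, port, type=socket.SOCK_STREAM)
    v6 = [i for i in infos if i[0] == socket.AF_INET6]
    v4 = [i for i in infos if i[0] == socket.AF_INET]
    pending = {}  # fd -> in-progress socket
    try:
        for family, type_, proto, _, addr in interleave(v6, v4):
            s = socket.socket(family, type_, proto)
            s.setblocking(False)
            s.connect_ex(addr)  # returns immediately (EINPROGRESS)
            pending[s.fileno()] = s
            # Wait up to 250 ms for any attempt so far to succeed,
            # then add the next address to the race.
            winner = wait_for_one(pending, stagger)
            if winner:
                return winner
        # Nothing connected during the staggered rounds: final wait-for-all.
        return wait_for_one(pending, final_timeout)
    finally:
        for s in pending.values():
            s.close()  # abort the losing attempts

def wait_for_one(pending, timeout):
    deadline = time.monotonic() + timeout
    while pending and (remaining := deadline - time.monotonic()) > 0:
        # A non-blocking connect completes when the socket is writable.
        _, writable, _ = select.select([], list(pending), [], remaining)
        for fd in writable:
            s = pending.pop(fd)
            if s.getsockopt(socket.SOL_SOCKET, socket.SO_ERROR) == 0:
                return s  # first successful connection wins
            s.close()     # failed attempt, drop it
    return None
```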
And this basically allows us to fall back from IPv6 to IPv4 within 250 milliseconds, which makes the whole thing much nicer to use and avoids having to disable IPv6 in apt on images and things like that.

Another feature we have been working on very recently is this one. You might be able to see what it does; the clue is in the last line: it suggests a snap that is available. And that is not the entire story, of course. We wanted to enable other package managers, like snap or Flatpak, to suggest their own packages. While apt has existing hooks, they are fairly limited in scope, and they use ad hoc formats that do not carry enough information, like which packages were given to the install command, for the hooks to suggest other packages. So we introduced new hooks, and we based them on JSON-RPC. These hooks get passed a socket, on which they act as a server, and then apt calls methods in the hooks using JSON-RPC and provides a lot of data related to the request. We can extend this in the future to allow apt to act as a server too and have bidirectional communication. So for example, apt-listbugs could then, instead of having to add a pin, just block the upgrades for release-critical bugs directly; the user would not even see the update, it would be held back directly. And you could make other changes too, like removing packages you do not want the user to install, and things like that. We could also extend this, I think, to a command, let us say apt-rpcd, which basically opens the JSON-RPC socket and then allows people to script apt using that interface. So instead of a library, you just open a connection to the socket and tell it to install something, which I think might be quite useful.

And we have an example hook here. You can see that we are trying to install an existing package called foo. This hook is called a pre-prompt hook, because it runs before the yes/no prompt that asks whether you want to install or not. And it gets multiple parameters. The first one is the command that was used; in this case it was install. Then the search terms, which are the arguments to the command: if you run install foo, it contains foo; if you run install foo bar, it contains foo bar; if you run install foo bar-, it contains foo bar-, and so on. Then any unknown packages passed on the command line, which are basically the subset of the search terms that could not be resolved. And finally, we have the list of all resolved packages and the versions that are available, like the candidate version, and the version that was selected for installation. Here you can see that foo version 1.0 was marked for installation. If we have this bidirectional handling in the future, a hook could then say: no, let us mark version 2.0 for installation instead, or things like that. And here, a hook that notices you are about to install a package foo could say: I have a foo 2.0 in my snap or Flatpak store, do you want to install this instead?, and then tell apt to abort the install. Or it could just print a line: hey, there is a snap called foo, like it did two slides earlier.
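As a rough sketch of what such a hook can look like: apt passes the hook an already-connected socket as a file descriptor named in the APT_HOOK_SOCKET environment variable, and speaks JSON-RPC 2.0 over it, with messages separated by a blank line. The method and field names below follow apt's JSON hook protocol as I understand it; treat them as illustrative rather than authoritative:

```python
#!/usr/bin/python3
# A minimal apt JSON hook: react to unknown packages before the prompt.
import json
import os
import socket

fd = int(os.environ["APT_HOOK_SOCKET"])
sock = socket.socket(fileno=fd)
rfile = sock.makefile("r")

def read_message():
    # Messages are JSON-RPC 2.0 objects, each terminated by a blank line.
    lines = []
    while True:
        line = rfile.readline()
        if line in ("", "\n"):  # EOF or end of message
            break
        lines.append(line)
    return json.loads("".join(lines)) if lines else None

while (msg := read_message()) is not None:
    method = msg.get("method")
    if method == "org.debian.apt.hooks.hello":
        # Handshake: acknowledge one of the protocol versions apt offers.
        reply = {"jsonrpc": "2.0", "id": msg["id"],
                 "result": {"version": msg["params"]["versions"][0]}}
        sock.sendall(json.dumps(reply).encode() + b"\n\n")
    elif method == "org.debian.apt.hooks.install.pre-prompt":
        # Here a snap/Flatpak backend could look up alternatives.
        for name in msg["params"].get("unknown-packages", []):
            print(f"No package {name} found; maybe there is a snap?")
    elif method == "org.debian.apt.hooks.bye":
        break
```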
And we also have another thing I have been working on, I think in the last month or so, which is a new solver, because our solver sucks: we sometimes cannot find solutions although they exist. We see that a lot in unattended-upgrades, I think, where you get the error message in the title: the package problem resolver generated breaks, and this may be caused by held packages. To solve this, we already have the external solvers like aspcud, and they usually work better, but they are really slow, because first we convert to the EDSP format, then the EDSP format is converted to CUDF, and then it is passed into the solver, which converts it again, and then the whole thing goes back. So it takes multiple seconds to solve a simple install request. So my idea was to use the approaches we have from this external solver research, but build a fast solver. And for that, I used the same underlying solver as aspcud, which is clasp. It is an answer set programming tool, and it also understands other types of optimization problems, like maximum satisfiability and pseudo-boolean optimization. Pseudo-boolean optimization is what I use here. The nice advantage of this is that it can find a solution if one exists. And the one goal I have is to behave as closely to the current solver as possible, which means I am preferring first choices in Or groups, and I want to install Recommends when available. And also, if non-candidate versions are necessary, I want to be able to install those non-candidate versions, but try to maximize the number of candidates that are installed, which allows us to have autopkgtests that pull as much as possible from testing and then pull the fewest packages needed to satisfy the dependencies from unstable, which will be really useful, I think.

And that is it for apt itself. So, in other news: python-apt now checks that packages belong to the same cache. Previously, when you called mark_install on a package and you had reopened the cache in between, it would just crash, do nothing, or do anything really, because the object either pointed at a different package or was out of bounds. We now raise an exception if the cache is different, which makes the whole thing much safer. And there is a workaround for existing code in the high-level apt module, which automatically remaps these objects when reopening (see the sketch below), so you can just use existing code and it does not break, which is really useful. We also have fully static typing in the python-apt module now, which found a few errors in the code. It is nice.

From the dpkg maintainer: he wants to let you know that you should stop accessing /var/lib/dpkg directly, because the format will change. For example, the list and md5sums files will soon be dropped and replaced. So yeah, just do not use /var/lib/dpkg directly. And aptitude has a new release now, and it also builds faster than it used to, so it saves time on the buildds. Yay. And in PackageKit, we can now automatically remove unused dependencies when you are installing a package, which helps avoid cluttering your system with packages you no longer need. And finally, there will be a talk on delta debs on Friday, which you might want to attend.

And that is it from me. I do not think we have a lot of time for questions, like one minute or two. So if you want to ask a question, go to the mic and ask, or just come up after the talk. Thank you.
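As a quick illustration of the python-apt cache-remapping change mentioned above, here is a minimal sketch using the high-level apt module; the package name foo is just a placeholder:

```python
import apt

cache = apt.Cache()
pkg = cache["foo"]   # "foo" is an example package name

cache.open()         # reopen the cache; the low-level cache is rebuilt

# Before this change, using a stale object here could crash or act on
# the wrong package. Now the low-level layer raises an exception for
# objects from a different cache, and the high-level apt module remaps
# them on reopen, so this keeps working:
pkg.mark_install()
```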
Q: Can you repeat the question from the beginning? Why are apt, apt-get and aptitude not compatible?
A: Well, apt and apt-get are the same, basically; they just have some different defaults.
Q: Can the parameters be configured to make them behave the same?
A: You could do that; it is just different default config options, basically. They are overridden per binary, and you can just replace the per-binary override in your config file. It is not really documented that well, though.
Q: Okay, thank you.

Okay. That's it.