So, my name is Alex, and I've been with Red Hat for the last 18 years or so. I work in the desktop group, so I normally work on GNOME stuff, but for the last three years or so I've been working on a project called Flatpak. Today I'm going to talk about the core implementation of Flatpak, how it works, and why it works that way. But first, maybe I need a short description of what Flatpak is. How many people use Flatpak? And how many have heard about it before? Not everyone, but most, okay. So, it's apps for the Linux desktop. Obviously we've had applications for the Linux desktop forever, so "apps" means something else here; it's more in the style of iPhone apps or Android apps. You can expect to easily install them, and fearlessly do so, without them messing up the system in some way, or affecting other apps, or your system updates affecting the apps. You want the apps to be a thing isolated from the core operating system. There are several parts to this. There's distribution of your app: we have a repository format, you can add a remote, and you can install stuff from it. Flathub is one source of Flatpaks, but there can be many; you can configure several yourself, and they're independent of each other. This is the website for Flathub, but the data is all there, so you can use it from the command line, from GNOME Software, from KDE Discover, or just click install on the website. They all use the same data, the same screenshots and descriptions and whatnot. And then, and this is kind of the core, you can run the same app, the same build, on any distro. These are the distros we have instructions for installing Flatpak on, but fundamentally it should run on anything that has, not even a recent kernel, but a pretty ancient kernel.
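As a concrete illustration of the remote-plus-install flow just described, here is roughly what it looks like on the command line. The Flathub repo URL and the GIMP app ID are the publicly documented ones, used purely as examples; the commands are printed rather than executed here, since actually running them would modify the host's Flatpak configuration.

```shell
# Sketch of the "add a remote, install, run" flow. Printed, not executed,
# because running these would change the host's Flatpak setup.
remote_add="flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo"
install="flatpak install flathub org.gimp.GIMP"
run="flatpak run org.gimp.GIMP"
printf '%s\n' "$remote_add" "$install" "$run"
```

The same remote and app show up identically in GNOME Software or KDE Discover, because they all read the same repository data.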
And not only can you run on any distro, you can also distro-hop while keeping the same apps installed, or upgrade your distro and it won't affect them. And you can run a newer app version on an older distro, and all that kind of stuff. The common example is that you use a really stable RHEL or Debian stable base, but you still want to run the latest GIMP or some specific app. You can also build it anywhere, because the building itself, I mean, there are many ways to build stuff, but the default way we build Flatpaks builds in a sandbox too. So two people can share a development environment even though they're on completely different systems, and they will produce essentially the same binaries. There might be some timestamps and whatnot, but the rest is identical. And it also runs in a sandbox. We're talking about desktop apps here, so sandboxing is complicated; there are very many integration points between a desktop app and the desktop, or even other apps. So sandboxing is available, and the default sandbox is very secure, but you can't really do a lot of stuff in it, so you can also opt out: the user gets asked whether to grant these permissions, and you can give your app file system access and whatnot. But we're also working on the other side, with things like Wayland, PipeWire, and portals, to make it possible to have sandboxed apps that are actually useful. That's partially in the future; right now you can use it for any app, it's just not as secure. So you might ask yourself how this is different from containers, because all the things I've talked about so far you can also do with Docker, right? A Docker image can run anywhere, you can build it with docker build or something and ship it somewhere else. There are two main differences. First of all, the target user of Flatpak is fundamentally unprivileged.
It might be a data entry clerk whose IT department gives him a laptop; he doesn't have sysadmin rights on it or anything. He's not root, he doesn't have the root password, so we can't use any tools like Docker that require you to be root, or that implicitly grant you root rights; having access to the Docker socket is basically giving you root, and we can't have that. And as an add-on to that, none of the techniques we use in Flatpak itself can fundamentally rely on root-only features. That means certain things just cannot be used: we can't use device mapper, we can't use Btrfs snapshots, we can't do firewall setup for port forwarding, because all of these are fundamentally sysadmin-level operations that are not allowed for any kind of unprivileged user. The second thing is that the goal is to run desktop apps. So we automatically integrate with the desktop in all the ways that matter. You install the app with a single click, there's a nice graphical progress indicator, and when it's installed it appears in your desktop: it has an icon, it has a description. It can install MIME type files or D-Bus service files. It has access to X or Wayland, PulseAudio for sound; it can talk to D-Bus in a safe way; it has the right GPU drivers in there. If you use the freedesktop specs to find your stuff, you can easily find your stuff. We have interactive permissions, the portals I talked about: basically a way to start with low permissions, but then at runtime safely grant extra permissions, like access to a particular file or a webcam or something. So, to build Flatpak there are two major parts beneath it that have to be explained in detail before we can explain Flatpak. And actually, once we have a working knowledge of OSTree and Bubblewrap, the rest of Flatpak is mostly trivial, just using these in the right way.
And OSTree, the short description is that it's like Git for operating systems. And by saying it's like Git, I mean that all the things you'd expect from Git are there: there's a remote repository, there's a local repository, you configure the remote, you pull refs from the remote, you can commit to a ref, you can check out a ref. A ref points to a commit object, which is a checksum, and the commit content-addresses the rest of the files in it. It's essentially a reimplementation of Git. But where Git is made for source code, and if you ever tried to commit a large binary to it, it just isn't meant to handle that, OSTree is all about having an on-disk file format that is useful for trees of binaries. It was originally made for committing your entire operating system into Git, basically. So we're going to look at the repository layout, which is where OSTree differs from Git. We have here, kind of hard to see, but an app; it's just a directory with some files. In the end, that's what all apps are, right? A directory with files plus some metadata. So there's a bunch of normal files here, and I colored them so you can follow where they appear when we commit them. If you were to commit this, these are the OSTree operations you'd run: you initialize the repository, and then you commit this directory called app to a branch called master, and it generates a commit ID for us. You can look at it, and it looks like a commit; that's basically what Git would give you. It generates this repository here that, if you've ever looked into a .git directory, is very, very similar. There's a refs directory that contains the refs, and a ref is a more generic thing than a branch, but think of it as a branch.
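The commit flow just described looks roughly like this on the command line. The repo path and app contents below are made-up examples, and the ostree invocations only actually run if the tool happens to be installed:

```shell
# Example of the commit flow described above. Repo path and app contents
# are made up; the ostree invocations run only if ostree is installed.
work=$(mktemp -d)
cd "$work"
mkdir -p app/bin
echo 'hello' > app/README
if command -v ostree >/dev/null 2>&1; then
  ostree --repo=repo init --mode=bare-user    # create a local repository
  commit=$(ostree --repo=repo commit --branch=master app)
  echo "commit: $commit"                      # prints the new commit ID
  ostree --repo=repo refs                     # lists refs, e.g. master
else
  echo "ostree not installed; skipping"
fi
```

The commit ID printed at the end is the checksum of the commit object, which in turn content-addresses everything underneath it.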
And there's a bunch of objects. If you look at this ref file for master, it's just, actually, I cut all of these down; they're really much longer, but I cut them down to ten characters so you can read them. It's a commit ID, and if you look in the objects directory you can find the commit there. The commit is just a small metadata file that contains the commit message, the timestamp, the previous commit ID, because there's a history of these, and also a pointer to the root directory of the tree. That's also an ID, which is the checksum of it. So the red thing is the root directory; the commit has a pointer to this, which is also a small metadata file that basically says: this is a directory, it has a directory named bin with this object ID, and a file called README with this object ID. And then you can recursively look into everything, and eventually you reach a regular file object, which is just a normal file. If you look at that file inside the repository, it's represented as itself, just with a different name, and the name is a checksum of the content of the file plus some subset of the metadata: the user that owns it, the permissions, extended attributes, a couple of things that are part of the identity of the file. Because if you store two files in the same repository that have the same content but different permissions or whatever, they will be two objects in the object store. And then we can modify this. So we change the README file here and commit again, and we get a new commit. I don't think you can see it, but it's a brighter version of the color when it's changed. What happens then is that we change master here to point to the new commit object, which now points to the previous commit, plus the new root directory, and the new version of this file here. All the other objects are left alone because they weren't modified, right?
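The content-addressing and deduplication just described can be mimicked with a few lines of shell. This is a toy, not OSTree's actual object format: real OSTree also hashes ownership, permissions, and xattrs into the object name, while this sketch hashes content only.

```shell
# Toy content-addressed store: file objects named by their checksum, like
# OSTree's objects/ directory. (Real OSTree also hashes uid/gid/mode/xattrs;
# this sketch hashes content only.)
store_object() {
  local f=$1 sum
  sum=$(sha256sum "$f" | cut -d' ' -f1)
  mkdir -p "objects/${sum:0:2}"
  ln -f "$f" "objects/${sum:0:2}/${sum:2}.file"  # hard link into the store
  echo "$sum"
}
tmp=$(mktemp -d); cd "$tmp"
echo 'hello' > a.txt
echo 'hello' > b.txt              # identical content, different name
id_a=$(store_object a.txt)
id_b=$(store_object b.txt)
[ "$id_a" = "$id_b" ] && echo "deduplicated: two names, one object"
find objects -type f | wc -l      # one object in the store
```

Two files with the same content land on the same object, which is exactly why identical files across different apps get shared automatically.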
So we modified the app and the README, and you can find the objects for both versions in there. An interesting thing here is that committing a new tree is a purely additive operation, plus an atomic switch of this ref file at the end, basically. So the operations are very safe, in that they're not destructive and don't change anything in place. And if you were to check this out, this is just the same repository, if you check it out with an ostree checkout to a directory called new, you get a directory new which looks like the original thing. Except, if you look really carefully, these files have a hard link count of two, which they shouldn't normally have. And if you look in detail, you see that they're actually the same inodes as the repository objects, because checkouts are fundamentally hard-link farms pointing back into the repository. So they're very efficient to create. And if you have multiple checkouts of the same or slightly different things, they all share inodes, so it's very efficient in that sense. The way you're meant to use it is that you mount the thing read-only. When OSTree is used for an entire operating system, you basically commit /usr into OSTree and then you mount it read-only. So it's safe for these hard links to point back into the repository, because you're never able to modify them. And there's also a format that isn't meant for local storage, for when you put the thing on a web server. It's a similar layout; the main difference is that the files are compressed, and the metadata of each file is stored inside the file. Plus there's a file called summary that has all the refs in a single file, because you can't really enumerate the refs on a web server, right? So you read a single file, and it tells you the names of all the refs and their current commits, and from there you can recursively find anything you want.
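The hard-link-farm behavior of checkouts can be demonstrated with plain coreutils; the file names here are made up, but the inode sharing is exactly what an OSTree checkout does:

```shell
# Checkouts as hard-link farms: the checked-out file and the repository
# object share an inode, so the link count goes to 2. Names are examples.
tmp=$(mktemp -d); cd "$tmp"
mkdir -p repo/objects checkout
echo 'binary contents' > repo/objects/abc123.file
ln repo/objects/abc123.file checkout/README    # "checking out" = hard link
links=$(stat -c %h checkout/README)
echo "link count: $links"
[ repo/objects/abc123.file -ef checkout/README ] && echo "same inode"
```

Because the inode is shared, the data exists once on disk and once in the page cache, no matter how many checkouts reference it.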
And if you have a local version of a ref, and the new upstream one is different, you can very efficiently calculate which objects you need to download to make your local copy the latest one. There are also GPG signatures in this commit metadata file: you can add metadata to a commit, including a signature of the commit, and you can sign the summary, so we can protect against man-in-the-middle kind of stuff, or downgrade attacks, or something like that. It's not listed here, but there's also an optional static deltas directory where we store basically binary diffs between specific versions. So if you're going from, say, the next-to-last version to the current one, there might be a specific delta file you can download, and that gives you even better download performance. If a file changed, but it's just one byte that changed, you can get a really small delta. And if you don't have a delta for the particular download you want to do, you can fall back to the regular OSTree pull. So, the reasons we use this: the automatic deduplication is very nice, because fundamentally containerization and sandboxing and everything is about bundling, so we want to minimize the cost of bundling. Anything that accidentally gets shared, because it's the same file, just automatically gets shared. If two completely different builds happen to have the same icon or whatnot, it will automatically be shared. And if you have reproducible builds, it's likely that most things will be shared. They're also shared both on disk and in the page cache, because the page cache is per inode, and the inode is the same, because they're all hard links. We also get very efficient updates due to the deltas and the quick way to figure out what changed between versions. And updates are atomic. And the way you use OSTree is that you check things out.
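That "which objects do I need to download" calculation mentioned above is essentially a set difference between the remote's object list and the local one. A toy sketch with sorted lists and comm (the object names are made up):

```shell
# "Which objects do I need?" as a set difference: objects the remote has
# that the local repository doesn't. Object names are made-up examples.
tmp=$(mktemp -d); cd "$tmp"
printf '%s\n' obj_a obj_b obj_c obj_d | sort > remote_objects
printf '%s\n' obj_a obj_c             | sort > local_objects
comm -13 local_objects remote_objects > to_download   # only-in-remote lines
cat to_download
```

Everything already present locally is skipped, which is why pulling a new version of a mostly-unchanged tree is cheap.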
In Git, a checkout is almost always tied to a particular local Git repository, but in OSTree you normally have one shared repository and multiple checkouts on the side. So it's very safe to update the repository and then make a new checkout; it doesn't affect a running instance of a previous checkout, and you can atomically switch over. And also, this needs nothing from the kernel. It works on Unix version 7, or wherever hard links were introduced; there just isn't anything complicated or privileged here. So, the other tool is Bubblewrap, and it's an unprivileged chroot on steroids. I guess most people here know the virtual file system: we have a bunch of physical file systems, probably stored on a device somewhere, or maybe they're tmpfs, and in the kernel there's a mount table that is basically a sequence of "take this and put it here" operations that builds up the virtual file system. Nothing super exciting about this. One might notice, I don't know if you can see it here, but C is mounted twice here, so this file foo is available here and over here somewhere. In the kernel it's just another mount, but most people know them as bind mounts, because the way you create one is that you take an existing mounted thing and bind-mount it somewhere else; but in the kernel, it's just a list of mounts. And at some point, someone added the chroot syscall, because maybe it would be cool to virtualize what apps see. So instead of the root directory being a global thing, it became a per-process thing; basically, a global variable was made a member of the process struct. That way, apps can see different things in terms of the file system. And a lot later, mount namespaces were added, which basically make the entire mount table a thing that can be different in each process. So the mounts can be completely different in different namespaces.
And in fact, some of these physical things might be visible in some namespaces, but completely invisible, not even mounted, in other namespaces. So chroot is really cool, and most people like it and use it. But for me, it's not very useful, because you have to be root to run it. And why is that? Unix, the kernel, is a very simple thing, and to create a user-space operating system on top of it, it relies on certain things like setuid to increase privileges. A simple example is sudo. sudo is a setuid app, so whoever runs it turns into root, which is not good. So the first thing sudo does is look at /etc/sudoers and check: are you allowed to do this? But this relies on the fact that /etc/sudoers is actually the admin's file. If you're in control of the file system, you can point it at something else that says "yes, I am allowed to become root", and suddenly you've broken the system. So there's this implicit trust in privilege-raising operations, and there are several of them: setuid, file capabilities, and others; I guess there aren't that many, but there are a couple. So with the introduction of unprivileged user namespaces, and I think the user namespaces themselves aren't the vastly interesting part, this prctl flag, no_new_privs, was added that you have to enable to make unprivileged user namespaces work. It basically means this process and all its children can never, ever raise privileges using any of these privilege-raising operations. And that means chroot becomes safe, because the sudo or whatever inside your sandbox can never actually become root. There are also mappings of UIDs and whatnot, which is kind of nice, but not necessarily important, for me at least. So I wrote this tool called Bubblewrap. It was initially part of Flatpak, but interest from many people made us extract it into a separate thing.
But it's basically a command-line tool that uses all these kernel features, because while they are just kernel syscalls, you have to be very careful when you set them up, and it's really messy and hard. So it's a simple tool where you can create your own namespace. And it's slightly different in how it works from chroot. With chroot you build up the entire thing from the outside and then you jump into it, whereas Bubblewrap starts with a completely empty system, with only the root being a tmpfs, and then you build the thing up from the inside, via commands given on the outside. So this example starts with nothing; then we create the directory /tmp, then we make a read-only bind mount of the host /usr onto /usr in the sandbox, then a symlink from /lib64 to usr/lib64, and then we run /usr/bin/ls in the sandbox, and this is all it sees. Obviously this only works if you have a lib64-style host, but on Fedora this will run ls in a super minimal sandbox that only has read-only access to /usr and nothing else. And other than this primary way of setting up the file system, we have a bunch of other features that enable basically all the things you can do unprivileged on current Linux kernels. So you can unshare more namespaces. If you unshare the network one, you end up with only a loopback interface and nothing else. You can unshare all of these, and you can mount certain kinds of file systems. Well, /dev isn't actually a file system, but if you ask for a dev mount, it will create one that has bind mounts of the host's /dev nodes, because we can't create device nodes as a user, but we can bind-mount pre-existing ones. And if you unshare the PID namespace, we have a handler for PID 1, so it can reap children and you don't get zombies and whatnot. And seccomp: there's a way to pass a binary blob of seccomp rules into Bubblewrap that will get applied after everything is set up.
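The example just described can be written down as a concrete bwrap command line; the flag names come from Bubblewrap's CLI, and the command only actually executes if bwrap is installed (and may fail anyway on hosts with user namespaces disabled, which is harmless here):

```shell
# The minimal sandbox described above, as a bwrap command line.
# Only executed if bwrap is installed; failure (e.g. userns disabled)
# is tolerated since this is just a demonstration.
cmd=(bwrap
     --unshare-all                  # fresh namespaces, tmpfs root
     --dir /tmp                     # create an empty /tmp
     --ro-bind /usr /usr            # read-only bind of the host /usr
     --symlink usr/lib64 /lib64     # lib64-style host layout
     --proc /proc --dev /dev        # minimal /proc and /dev
     /usr/bin/ls /)
printf '%s\n' "${cmd[*]}"
if command -v bwrap >/dev/null 2>&1; then "${cmd[@]}" || true; fi
```

Inside the sandbox, ls sees only the handful of entries that were explicitly built up; everything else on the host simply does not exist there.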
We apply the seccomp rules at the end, so the seccomp rules don't have to allow everything needed to set up the environment. There's also a setuid version of Bubblewrap. The regular one works fine on Fedora and Ubuntu, but a bunch of other distros disable unprivileged user namespaces, or put them behind some kind of kernel flag or boot flag or something. And the reason is that in an unprivileged user namespace, you are root in some sense: you're root with respect to that user namespace. I don't think anyone thinks the general things that user namespaces expose are unsafe; it's just that there might be other parts of the kernel, there's this enormous syscall API, and not all of it is correctly namespaced. So something might treat UID zero as meaning it has access to something, where it's actually just UID zero in this namespace. So we have, well, it's basically the same: if you take the Bubblewrap binary and make it setuid, it will switch to a setuid mode of operation where it does the mounts as root, but otherwise it does all the same things. So it should theoretically not be any less safe than the unprivileged one. In fact, one could argue that it's more safe, because it doesn't expose the full syscall ABI to this fake-root mode. So yeah, that's a useful thing to make Flatpak work on all distros. So, I think we now have everything we need to take a very, very high-level view of how Flatpak works. Every installation has an OSTree repository, a local one. And there are actually two main installations: there's a system-wide one in /var/lib/flatpak, and there's a per-user one in each home directory. Every app, and something called runtimes, are committed to this OSTree repository under refs that look like this: this is an app ref and this is a runtime ref, and this part is obviously the name of the app or the runtime, then the architecture, and then there's a version-like branch name.
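The four-part ref shape just described (kind/ID/architecture/branch) is easy to pull apart; the concrete IDs below are just illustrative examples:

```shell
# A Flatpak ref has four slash-separated parts: kind/ID/arch/branch.
# The ID, arch, and branch values here are illustrative examples.
ref="app/org.gimp.GIMP/x86_64/stable"
IFS=/ read -r kind app_id arch branch <<< "$ref"
echo "$kind $app_id $arch $branch"
runtime_ref="runtime/org.gnome.Platform/x86_64/stable"
IFS=/ read -r rkind rid rarch rbranch <<< "$runtime_ref"
echo "$rkind $rid"
```

The kind distinguishes apps from runtimes, and the branch lets several versions of the same thing coexist in one repository.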
So those are in the repository; we pull them from a remote, and then we check them out next to the repository. This is called deploying them. And then we run them with Bubblewrap, where we put the files from the app in /app, and the runtime in /usr. These are all read-only mounts, so the app can't modify them. And obviously there's a bunch more set up here: we have a /tmp, and we have a /proc and a /dev and various other standard stuff. You can also request access to regular file system paths; if you want the home directory, we'll add a bind mount of the home directory at the end. But this is the basics of how we run an app. And if you look at an app that's deployed: in /var/lib/flatpak there's a directory called repo which has the repository, but there's also a directory tree named after the entire ref, and in there we have the actual thing checked out under its own commit ID, and a symlink called active that points to it. If you look in that active directory, this is the checkout: it has some metadata, the name of the thing, what binary to run, environment variables to set, and then files is like a regular prefix; basically someone built into a prefix and made that into the app, so it's got bin, lib, share, and whatever else you might have there. There's a subdirectory called export that contains the special kinds of files we want exported to the host file system. Currently that allows icons, desktop files, MIME types, D-Bus service files, and I think GNOME Shell search providers. And after checking things out, Flatpak itself creates this deploy file and this ref file, which are not actually part of the upstream checkout. The deploy file just says this checkout came from this remote. And the ref file is actually just a file, but whenever we mount this read-only, we also have PID 1 of the sandbox take a non-exclusive, shared lock on this file.
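This shared-versus-exclusive locking dance can be sketched with util-linux's flock(1); the file name is a stand-in for the per-deployment ref file, and the background subshell stands in for a running app instance:

```shell
# "Is this deployment still in use?": a running instance holds a shared
# lock on the ref file; the updater probes with a non-blocking exclusive
# lock. Uses util-linux flock(1); file name is a stand-in.
tmp=$(mktemp -d)
touch "$tmp/ref"
( flock -s 9; sleep 1 ) 9>"$tmp/ref" &    # simulated running app: shared lock
holder=$!
sleep 0.3
if flock -n -x "$tmp/ref" true; then in_use_running=no; else in_use_running=yes; fi
wait "$holder"                            # the "app" exits, releasing its lock
if flock -n -x "$tmp/ref" true; then in_use_after=no; else in_use_after=yes; fi
echo "while running: $in_use_running, after exit: $in_use_after"
```

Any number of instances can hold the shared lock at once; the exclusive probe only succeeds once the last of them has exited.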
So at any point we can look at a certain checkout and see if it's locked, and if so, something is using it and we have to keep it around. So, the typical flatpak install command: you just give the name, so we have to figure out the entire ref, but we know it's probably for the current architecture, and you specify a remote, and the remote has a default branch name, so we can figure out what you actually want to install. So we pull the entire ref into the local repository, we figure out which commit we actually ended up with, we check it out, create these files, and then we fsync everything. Up until this point we have only created new files; nothing affected anything running on the system, or anything you could run. But then we do an atomic switch of this active symlink, and at that point flatpak run will find the new version. As we'll see with updates, there might be multiple commits here, but there's only one that is the active one, and switching between them is an atomic operation. And at the end, we also look at all these exports and copy them into a shared directory. Well, it's not a pure copy; we look at the files and rewrite some stuff and make sure it's safe and whatnot, but basically we copy them. An update is the same, except that when we're done, all the old commits are moved to a global directory called .removed, and then for everything in that directory we try to take an exclusive lock on this ref file, and if we can't, then it's still in use and we leave it there. Moving the directory somewhere else doesn't affect running instances. So if your Firefox is running, it will not notice anything changing. When the last instance of it exits, the lock gets unlocked and we can take an exclusive lock on it; then it's fine to delete, because at that point nothing could ever refer to this old thing, because it's in this subdirectory and the symlinks have changed and whatnot.
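The atomic switch of the active symlink can be demonstrated with coreutils; the commit directory names are made up, and the trick is that rename(2) (mv -T on a symlink) replaces the link in a single atomic step:

```shell
# Atomic deployment switch: stage a new "active" symlink, then rename it
# over the old one. Readers see either the old or the new target, never
# a missing link. Commit directory names are made-up examples.
tmp=$(mktemp -d); cd "$tmp"
mkdir -p deploy/1111aaaa deploy/2222bbbb   # two checked-out commits
ln -s 1111aaaa deploy/active               # currently active version
ln -s 2222bbbb deploy/active.new           # stage the new link...
mv -T deploy/active.new deploy/active      # ...and atomically rename it over
readlink deploy/active
```

Everything before the mv is purely additive, so a crash at any point leaves either the old deployment or the new one fully in place.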
Obviously, if something was running when we updated it, we will not delete the old thing, so it could be left around possibly forever; but generally you run an update again, and it will traverse all the removed files, and eventually they get garbage collected. On top of that, we set up the sandbox itself, where we enable a bunch of stuff. The user namespace is optional, because it's not always available; in the setuid version of Flatpak we might not be able to use a user namespace, in case it was completely disabled in the kernel, but if it's available we enable it. Obviously we use the mount namespace; we have to, that's how we make sure the app sees our file system. We also use the PID namespace, because if you can see /proc, basically all the other apps running on your system, at least all of your own apps, are visible in /proc, and you can use the /proc fds and whatnot to look into what those other apps are doing. So we isolate that by using a PID namespace, but it also neatly gives us a PID 1 that controls the lifetime of the entire container: we know we can kill that to kill all of it, and we also know that when it exits, the last process in the sandbox died. Optionally we do network and IPC namespaces. We do seccomp. It's very hard to do seccomp generically for everything; I mean, we're supposed to run arbitrary desktop apps, right? What can we forbid? Well, there's some really weird stuff, like who uses AX.25? There are weird socket types, so we can disable some really obscure syscalls. The kernel keyring is not namespace-aware, I think it somewhat is by now, but yeah, we disable that. Then there are things like ptrace, perf, and multiarch, which are useful for development, so we have modes where you can request them, and if we're building stuff we enable them by default; but a regular app would really not need ptrace, so we disable it by default. We also disable recursive namespaces.
If you actually have real unprivileged user namespaces, those can stack, but we want to avoid having the app control the layout of the file system, for reasons similar to the sudo example before: we want to be able to trust that we set up the sandbox in a specific way, and that it stays that way. Obviously every app has its own /tmp, because it's just part of the root tmpfs. And we can't generally set up cgroups, so we don't use cgroups, except that if you have a systemd user instance running, we create a scope for the app on the user systemd instance. This doesn't currently give us a lot, though; we can't apply any policy to the cgroup, because that's not doable as a user, but hopefully cgroups v2 will eventually save us all, and then we can apply cgroup limits as a user. We also have something called the D-Bus proxy. Every app has access to the bus, the session bus that is, and has certain privileges on it, but everything goes via this proxy that filters calls, so by default you can basically only be called into, plus do some very generic calls, like talking to the bus itself and owning your own name, and a couple of other things. In case you're not requesting any specific permissions, this directory is created for you, and various environment variables point to it, and it will be the only writable persistent directory where you can store data. Obviously, if you grant your app home directory access, you can write wherever you want, but the idea is that in the long run, when everything is sandboxed, we will have a directory where every app has its config files, its cached files, its whatever. So when you uninstall it, you can also easily take away all the per-user data the app installed. Here are a couple of links to various projects and ways to reach me and the project, and if anyone has any questions, yeah. Do we have a microphone? Thanks.
I have my home folder on a rotational drive, and for me OSTree is really slow, to the point that the Flatpak update of the GNOME platform can take up to seven minutes to complete. It's RAID 1, it's encrypted, Btrfs, so copy-on-write; it's kind of a worst-case scenario, but are there any plans to add another backend, for instance casync in place of OSTree, or are there any performance improvements that can be made? See, I think casync is more about having the download be efficient; I don't know if it actually stores things in a deduplicated way that we can use directly. It does? Well, fundamentally though, if it has the same layout where you explode every file into a file, and then fsync and make it atomic and all that, if you do all that, isn't it still as slow? Because that's the problem, right? You write all these files all over the place and then you sync them, and you end up in a journal or whatnot. Well, if you're on Btrfs you're not doing that, but there are similar issues on ext4, right? You fsync something, you force the journal to be flushed, and everything else blocks. I don't quite see how casync would avoid the same issues in the end. If you want to atomically ensure this large tree of files is on disk, it's going to take a while; I don't really see any way out of that. Over the years we've been developing Flatpak, there's been talk about a file browser, so that you wouldn't have to give full access to the home directory, and you could basically have the file browser sort of inject files into the app. Has that moved forward at all? Yeah, so that's the file system portal, the file chooser portal. It's basically a combination of a FUSE file system, where every Flatpak app that runs has bind-mounted into it a subset of this FUSE file system, which is called the document portal, and you can register a file, create a document for it, and then it's visible in the document portal mount.
Plus there's a subdirectory per app, and that subdirectory gets mounted into the sandbox, so you can dynamically add files targeted at a specific app. So what happens is, you do a D-Bus call to the portal, which runs outside of the sandbox and has full access to everything, so it shows a file chooser, and at the end we create a document for the chosen file, which makes it appear in the sandbox of the running app, and then we pass back the document identifier, and the app can access the file via that file system. What doesn't work is exposing an entire subtree of your file system. So it's mostly for things like opening a PDF in Evince in sandboxed mode; it doesn't work for Emacs or an IDE or something like that. But I could run a Firefox or an Evince or something like that; that works. In the beginning you mentioned Apple apps. They seem to be much, much simpler, basically just directories on the Mac or zip files on iOS, and part of the simplicity stems from the fact that everything is either part of the operating system or part of the application. Why does it have to be so much more complicated in the Linux world? Well, there are two things. First of all, the design of Flatpak is meant to work with current apps. And current apps on Linux hardcode the prefix into the binary. Over time people have tried to make things relocatable, and some, like Snap for instance, rely on relocatability, with loads of environment variables and things. But there are still things that aren't relocatable; gettext just has a hardcoded prefix. It just isn't possible to rely on relocatability for current Linux apps, basically. The second thing is that Apple has a single ABI that they maintain and that you can target, and given my history of like 20 years trying to keep ABI stability in Fedora and whatnot, it just isn't doable. It's not even doable in Fedora to keep a single ABI between versions, and sometimes not even within a single version.
And doing it between distributions is, I don't think it's realistic at all.

On that question: doesn't that mean there will be a proliferation of runtimes? For example, if I have application version one requesting runtime version one, is that same application able to use runtime version two when it comes out, or will it still require runtime version one?

Well, it would still use version one until you switch it, until upstream decides it's time to move to the new one and does all the QA to verify that it works. And yes, I think there is a risk of a proliferation of runtimes. We try to limit it: for instance, we have a shared freedesktop runtime, and the GNOME and KDE runtimes are both based on it. We try to keep it down to a small number. But fundamentally, I don't think we're going to end up in a situation where apps depend on really old runtimes, because apps want the newest thing too. That's part of the reason we can't keep ABI stable: apps want to use the new stuff. So I think it's in their interest, as well as the users', to move to the newer version, and eventually we'll be able to purge old versions of stuff. In the broader ecosystem they'll still be around, there'll be a bunch of old versions, just like there are old ISOs of Fedora 5 or whatever. That doesn't mean everyone uses them.

Within a Flatpak, where do we look to see how Bubblewrap is used, if we want to learn more about it, or just verify how it's wrapping the environment?

You can look at the code, or you can run flatpak run -vv; I think it will print the whole Bubblewrap command line it runs for you.

So one often-stated advantage of classical package management over application images is: if I have a commonly used library that's shared by many, many applications, and that library has a security flaw, and that flaw is fixed upstream by the library, then I only depend on one package update for that library.
If we use application images, I would have a zoo of versions of that library spread across my system. How do I keep it safe and secure?

So one partial response to that is that we have the runtime system, where we can rev the runtime independently of the app. Even if the app is not maintained, we can rev OpenSSL or libpng. In fact, when we design the runtimes, we make sure that the security-sensitive libraries are in the runtime. But it's still true that some app could bundle a specific library; maybe they need an old libssl, or just some random library, and that could have security issues. Sandboxing partially mitigates that, but not completely, and it's a well-known issue with this kind of solution. But I think it's worth it, because fundamentally the current app ecosystem for Linux just doesn't work. As an upstream author of apps, I cannot get my code into users' hands. As a developer of things I think are cool, things my users think are cool and want to use, they can't use them, because the system is just broken. And if they do use them, they use an ancient version and report bugs I fixed two years ago. It's ridiculous. And yes, there are risks with this approach, but that doesn't mean we can keep doing something forever that doesn't work. Okay, thanks.

Has anybody looked at developing tools to scan through your Flatpaks?

We have a very simple CVE tracker that looks for CVEs, mainly against the versions in the runtimes, but we want to deploy it against all of the Flathub manifests too. It's very simple: it looks at, you use this module, this tarball, it has this version, and there's a CVE reported against that version, so we file a bug against your GitHub repo. I mean, we're not actively looking for new bugs.

Has anybody done any basic Chrome apps or Android apps for Linux in Flatpak? How do you mean?
Well, I've heard, I know it's been talked about. Basically, can I package up an Android app and put it into a Flatpak?

For Android apps, there is this Anbox thing that lets you run Android apps. I don't think it's doable to run that as a Flatpak, because it relies on kernel-side features; there are Android-specific kernel APIs, like the binder, that you might have to use. But Chrome apps, for instance, I think should work. I mean, we have a base application for Electron apps that makes it really easy to package Electron apps, and those are basically Chrome apps, aren't they?

You mentioned the exports directory to integrate stuff with the system. Is there a multiple-versions story for that? What happens if I have, say, five different versions of LibreOffice on the system?

So there is a concept of which version is current, and only the current one wins, and you can switch which one is current.

So I can't say, for example, open this file always with this version of the app, and that other file with the other version?

You can manually start any version on the command line by using flatpak run, but there's only ever going to be one desktop file, which is the current active one. You can switch which one that is, but it's only one at a time.

Do you think this current model of text files for MIME types and icons and so forth really scales, or wouldn't we need something like Launch Services on the Mac, where you have a database that can do more flexible on-demand stuff?

I mean, my name is on half of those specs, so... but that was 15 years ago. So could they be better? Quite possibly. Will they? I doubt it.

I think we're out of time, or out of questions.
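On the earlier Bubblewrap question: verbose mode is the quickest way to see the sandbox setup Flatpak generates. A sketch, with an example app ID:

```shell
# Show Flatpak's debug output, which includes the bwrap invocation:
flatpak run -vv org.gimp.GIMP 2>&1 | grep -m1 bwrap
# bwrap itself documents the primitives Flatpak builds on
# (--bind, --ro-bind, --unshare-pid, ...):
bwrap --help
```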
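And on the shared-library security question: the app/runtime split is visible directly in the CLI, since runtimes are installed and updated independently of the apps that use them. A sketch (the runtime ID is an example):

```shell
# Runtimes are listed separately from apps:
flatpak list --runtime
# Updating a runtime picks up security fixes (OpenSSL, libpng, ...)
# for every app built against it, without touching the apps:
flatpak update org.freedesktop.Platform
```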
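Finally, on multiple installed versions: any installed branch can be run explicitly, while `flatpak make-current` switches which one owns the exported desktop file. A sketch with example branch names:

```shell
# Run a specific installed branch, regardless of which is current:
flatpak run org.libreoffice.LibreOffice//stable
# Switch which branch is "current", i.e. which one exports the
# desktop file, icons, and MIME associations:
flatpak make-current org.libreoffice.LibreOffice beta
```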