Well, it's 11 o'clock, so I suppose we should start, and I can introduce myself first so that we can get things under way. My name is David King. I work for Red Hat in the desktop group, mostly doing GNOME stuff, but also a few other bits, like helping out with some debug stuff. And now I've started looking into using OSTree for the Fedora Workstation, so that's what I'm talking to you about. So what's the Fedora Workstation? Well, Christian actually gave a talk about what it is and what it's going to be just an hour ago. So if you missed that, you can probably find out more from him, and he'll be around. But basically it's now a developer workstation: a traditional desktop environment, but targeted at developers. So it's got a bit of a developer slant in terms of the available applications, in terms of how leading-edge the components are, that sort of thing. We're using GNOME quite heavily, although there is a KDE spin or other installs that you can use. We use lots of stuff from freedesktop.org too, so GNOME isn't just the visual stuff that you see; you've also got all the bits underneath, like D-Bus. We depend on systemd for all sorts of things, like suspend and a lot of power management stuff, UPower. So it's not just the graphical environment that you see; we depend on a lot of other things. There are a lot of other applications in the workstation. LibreOffice is probably the biggest one, but there are a few others, some Fedora-specific things like ABRT, and some other bits and pieces here and there. But mostly it's a pretty stock GNOME installation with some other things. So what do we want to do next? A lot of what we're doing in the workstation right now is potentially changing the way that we distribute applications. A big thing that's been developed predominantly by one of the guys in the desktop group, Alexander Larsson: he's been working on xdg-app, which became Flatpak as part of a rebranding exercise.
It's mainly a way to distribute applications. You can also think of it as a way to sandbox applications, in the sense of not allowing an application to do as much on the host system as it currently can. The sandboxing stuff isn't quite there yet; it isn't fully fledged. It depends on a lot of things that are still moving. It depends on Wayland, and it also depends on kernel features that haven't completely been shaken out yet, like user namespaces and cgroups, which are all changing at the moment. So we don't want to hype the sandboxing too much and say it's all amazing and you should use these things and it's the best thing ever, because right now that's not the case. It will eventually be the best thing ever, but the sandboxing stuff is kind of secondary to the distribution side at the moment. So Flatpak has a concept of runtimes, which is roughly /usr on a traditional Linux system: a collection of libraries and other dependencies that an application would need to run. So you can imagine that you'd have a GNOME runtime, which would be something like GTK+, Cairo, maybe GStreamer, or maybe you'd put that in a freedesktop runtime. Basically, everything that an application targeting that platform would expect to have available. You want that to be as useful as possible to people developing against that runtime. We envisage that there'll be stacked runtimes: you have a freedesktop runtime, which contains some of the lower-level stuff, you have a GNOME runtime on top, and maybe you have something else. But we also don't want there to be too many runtimes out there, because that just makes things harder; it basically means that users have to download more stuff. You also have applications, which can include some of those components that you would traditionally have in a runtime.
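As a concrete sketch of the runtime/application split, this is roughly what it looks like from the flatpak command line. The remote name, URL, and key file here are placeholders, and the exact commands have varied a little between early releases, so treat this as an illustration rather than a recipe:

```shell
# Add a remote repository (name, URL and GPG key are placeholders).
flatpak remote-add --gpg-import=example.gpg example https://example.org/repo

# Install a runtime once; many applications can then share it.
flatpak install example org.gnome.Platform

# Install an application that declares that runtime as its dependency.
flatpak install example org.gnome.gedit

# Run the app: it sees the runtime's libraries, not the host's /usr.
flatpak run org.gnome.gedit
```

The point of the split is visible in the install steps: the runtime is downloaded once, and every application built against it reuses it instead of bundling its own copy of GTK+, Cairo and friends.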
So say that your application wants to bundle, I don't know, a particular GStreamer codec that your distribution doesn't ship for patent reasons. You could do that inside an application bundle if you wanted to, and that works pretty well at this stage. It depends on things being plugins, or there being some kind of extension point that you can hook into, but that's very possible. In terms of actually installing and using these Flatpak bundles, there's graphical support right now. Richard Hughes has been working on this, and others have as well, in GNOME Software. He's going to give a talk tomorrow about that, so I definitely recommend that you go and see it; it's pretty cool. He's been doing a lot of work with Alex on the Flatpak library side to make this useful for GNOME Software, so there's been some really rapid development there. And already, even in Fedora 24, I think a lot of the stuff has been backported, so it's pretty usable at this stage. I talked briefly about the sandboxing. What that generally means is that right now, if you run an application on a desktop machine, that application can do anything that the user's permissions would allow it to do. It can trash files in your home directory. It can write anywhere that you have permission to write to. It can do quite a lot of stuff, actually. Maybe it can print something; it can probably change the application volume through PulseAudio. There are a lot of things it can do, and a lot of them are actually very difficult to prevent with the way that applications run at the moment. So what Flatpak does is it uses user namespaces, PID namespaces, cgroups, and a few other technologies as well, and it bind mounts a pretty empty system and gives the application access to that and nothing else. This is primarily for desktop applications.
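You can see the bind-mount idea directly with bubblewrap, the `bwrap` tool that Flatpak uses underneath: you construct a nearly empty root, bind in only what the process needs, and run it there. A rough sketch, assuming bwrap is installed on a usr-merged system:

```shell
# Run a shell in a minimal sandbox: read-only /usr, a private /tmp,
# a fresh PID namespace, and nothing from the user's home directory.
bwrap --ro-bind /usr /usr \
      --symlink usr/bin /bin \
      --symlink usr/lib64 /lib64 \
      --proc /proc \
      --dev /dev \
      --tmpfs /tmp \
      --unshare-pid \
      /bin/sh -c 'ls /'
```

The listing shows only the directories that were explicitly bound or created; the host's home directories, /etc and so on simply don't exist inside the sandbox unless you add them.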
So it's not like Docker, in the sense that it's not really for a traditional server application where you run a binary that needs access to a particular port. It's not really set up for that. You could probably run server applications inside Flatpak, but that's really not the use case we're envisaging, so you're probably not going to have a good time if you do that. It's really for desktop applications. So we have special features hooking into D-Bus, so that if your application needs to access a particular D-Bus service in the user session, you can open a hole in the sandbox. There's a D-Bus proxy that's included with Flatpak, and that means you can control access to the system and session buses in a much more fine-grained way. As I say, the sandboxing isn't quite fully fledged yet; it's coming along. There are problems at the moment with things like accessing dconf, which is where you'd store your user settings when using GSettings. That's a really tough problem to solve completely. There's some really good work on it; I mean, Albert is nodding his head because I know he deals with this quite a lot in Fleet Commander. It's tough to solve this completely. There are some cool ideas going on: Allison has some really great ideas about how to solve this, but I think the code doesn't exist in a usable form yet, so that's going to take some time. There are lots of things like that that are going to take time to solve. I'll talk a little bit later about portals, which are the way that we envisage users will interact with the system and get access to things that the sandbox would normally prevent. There's some really good work going on there, so I'll talk about that later. Of course, it sounds great if a developer can quite easily produce an application bundle: they can put all of the software they want inside that bundle, and then they can ship that directly to users.
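Opening one of those D-Bus holes happens when the bundle is finished. A sketch of what that looks like with the flatpak build tools; the build directory, command name and the particular service granted here are just examples:

```shell
# Grant the sandboxed app permission to talk to one named session-bus
# service, rather than the whole bus. Everything else stays blocked
# by the D-Bus proxy that sits between the app and the real bus.
flatpak build-finish build-dir \
    --socket=wayland \
    --talk-name=org.freedesktop.Notifications \
    --command=my-app
```

The useful property is that the permission is per-service and recorded in the bundle's metadata, so you can audit exactly which holes an application has asked for.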
It sounds fantastic, unless you're used to a distribution model where a lot of the control over what packages get to users is in the distribution's hands. I mentioned patent-encumbered codecs as an example of something where, with Fedora, this is a problem at this stage: say MP3, where we don't ship any MP3 codecs, because currently there are still a few patents that are relevant to it. That becomes a bit more difficult, because then you say, well, who's liable if someone tries to enforce this in some way? Is it the person distributing the bundle? What if that's linked to from something that Fedora hosts? It's a complicated question. There are a few other complicated questions around that, but mostly they're not really technical questions from the development side; they're more legal issues and things like that. Security updates are quite an interesting one. That hasn't really been solved yet. On the other hand, if you think about what a runtime is, it's like the GNOME stack: you've got all your libraries, including loaders for loads of image formats, so you'd expect there to be some vulnerabilities in those image loading libraries. It depends on who ships that runtime as to who's responsible for the security updates. If the distribution ships it, then that's something under the distribution's control, and if they update a library, let's say libpng or something, with a security fix, you'd expect the runtime to be updated at the same time. If someone has bundled that library, as Firefox would probably do, for instance, because they carry their own libpng for animated PNG support and things like that, they'd probably have to update it pretty quickly as well. I don't know how that's going to work, but it's a problem that's going to have to be solved at some point, and it's going to be a social problem, really, as well as a technical one.
Also, I mentioned that, as an application developer, you can put pretty much anything you want into the application bundle if you need some extra support. That also implies that you can override what's in the runtime or on the system. So if there's a version of a library that you don't like, because it's got some patch or is more recent and causes problems because you haven't developed or tested against it, you can override it by putting whatever you want into your application bundle. So it remains to be seen what's an acceptable level of overlap there, and what amount of overriding you can get away with and users will tolerate. We'll see how that goes. And of course that does somewhat lessen the impact of a distribution as a whole, because if you look at the workstation right now, you've got a very large amount of software there in terms of the number of packages involved: over a thousand packages. If your Flatpak runtime takes out, say, 500 of those packages in the middle and says, well, that's the runtime, that's what you can depend on, where does that leave a distribution for those 500 packages? Obviously Flatpak is not going to take over the world overnight, and you're not going to completely switch to it in one smooth transition, but it does make you wonder where the lines are going to be drawn. Also, you start to think about the sandboxed applications you can run from Flatpak, and think: well, you're saying that the current applications on your system are insecure, from the point of view that they have access to anything that your user has access to. So that means that Flatpak applications suddenly sound like maybe a safer option than what the distribution is shipping. It's kind of interesting. I mean, at this stage maybe that's not the case, but there's certainly the potential for those applications to be more secure.
Also, I guess the reason for having the runtime as a single blob is to remove the problem that you have with a distribution, where you have many, many packages, and just one of those packages, if it's updated on a different schedule to something else, or if someone doesn't realise there's an incompatibility, can cause all sorts of problems when someone updates it in a way that the person who was offering the package update didn't expect. With an application runtime, you can update the runtime as one piece and test the whole thing against a version; it's less fragile because of that. Again, there's a question of how the runtimes do get updated in the end. Do you have one runtime that gets updated when there's a GNOME release, and then maybe again when there's the next stable GNOME release, and that's the only time you update it? Or do you update it a lot, like with Fedora, where you have a roughly one-year lifecycle for each release and stuff is pushed out right until the end of that lifecycle: kernel updates, all sorts of low-level updates that can break things. Will we do the same with runtimes, or will they be more stable? We're hoping they're more stable, but it's not quite clear. So, initially, when I submitted the talk proposal, I used the words "Atomic Workstation". That's slightly confusing, because when people hear "Atomic" in the context of Fedora, they probably have the idea that it's for cloud instances or servers or something, and they maybe have an idea in their mind of what Atomic is. And although OSTree, which is what I'm now using in my descriptions of it, is the underlying technology in Atomic, it's probably nice not to stomp on the Atomic name; it's a brand that's a little bit too far removed from what the desktop is doing. So an OSTree workstation is going to use OSTree for a versioned file system.
If you're not familiar with OSTree at all, it's basically a bit like Git for file systems, specifically bootable file system trees. So you can imagine that you would have a commit per package update: one of the packages in that OSTree repository, which is what you call it, would get updated and you'd have a new commit. Or you could push loads and loads of updates into it and have a single new commit as well, some much bigger, more all-encompassing update. Something that's useful from the Fedora side is rpm-ostree. It slurps all the content out of RPMs and commits it to an OSTree repository, and it does that not on the client system but on the server, so it's a server-side compose process, basically. That's kind of nice: it means that you're not actually running any RPM stuff on the host system. You're probably familiar with the fact that when you install an RPM, it can run post-install or uninstall scriptlets, and various other stages during the install process. They run with root privileges; they can do pretty much anything. Obviously there are some restrictions, and for stuff that's in Fedora directly they have to follow packaging guidelines and things like that, but it's kind of difficult to enforce a lot of that. rpm-ostree enforces it, in the sense that the scriptlets are not run on the client system at all. They're only run during the compose process, and some of them are not run at all, depending on whether you're using package layering: the post scripts are just not run there. There are going to be some changes there, because certain things, like Vagrant, install into somewhere like /opt/vagrant and do other stuff in the post script. That just doesn't work, because you'd have to do all that yourself, so we need to close the gap there to make that actually work. The other thing is that most of the host system is actually read-only. Pretty much everything got moved into /usr, and that is mounted read-only.
And that means that certain things got moved around, so they're not where they were. It also means that things like the RPM database are read-only. There are some hacks you can do around that, which is the package layering that has recently been implemented. That means that you can take a repository that exists, such as, say, the workstation repository, which has the workstation package set, and you can say, well, I want to install Emacs or Vim or whatever my favourite text editor is, and I want that to be on the host and available. You can use package layering to inject that RPM, or whatever RPMs you want, into your system locally, not on the server side. Very, very cool. The support isn't quite finished yet, but as an early test it's a really impressive bit of technology. A lot of things, especially in Fedora land, kind of expect there to be an RPM database there, and I think that Colin moved it there and made it read-only just so that things could query it and say, which version of this do I have, so that you can say, oh well, I've got this bug fix, or I haven't. I think you can still get the changelogs; I think you can still pull out what actually changed in an update. Beyond that, being able to query it is very important. [There was a brief exchange with the audience here about the manifest for each OSTree version and whether that changes the API.] So that means that for the base OS, for what you provide as the workstation, it's going to be using OSTree and rpm-ostree to actually make that image, in terms of composing it on the server side. We think that you're going to be using Flatpak for the applications in the future.
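From the client side, package layering looks roughly like this. The package name is just an example, and note that in the earliest releases the command was `rpm-ostree pkg-add` rather than `install`:

```shell
# Layer an extra RPM on top of the base OSTree commit; the package is
# assembled into a new local deployment rather than installed live.
rpm-ostree install emacs

# Nothing changes until the next boot; check what has been staged.
rpm-ostree status

# Reboot into the new deployment to get the layered package.
systemctl reboot
```

The layered package survives future upgrades: each new base commit that comes from the server has the local RPMs re-applied on top.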
You can already do it now, and there are a lot of projects that are jumping on the Flatpak bandwagon and creating these bundles. A lot of the free software projects, things like GIMP and Firefox and a few others, are really jumping into this, because it's quite cool from their point of view. It's traditionally been really difficult for them to ship development releases out to people, because they have to make packages for each distribution and test that they work on all the different versions that are available. It's kind of a drag, and if they can use one tool to make a package that works most places, maybe not all places but most places, that's a compelling argument for them. It's also good for the users, because they get to test out the software in a way that won't affect the rest of their system, won't impact it in the slightest. So it's good for both sides. It's kind of seamless for users as well, because GNOME Software supports it already: I think there's the .flatpak file extension, or there's definitely a MIME type, so if you download one of these bundles right now in Fedora 24, I think it will actually work and install directly, which is very cool. And you can have it set up so that it will then update; it depends on the person who bundled it as to whether they've set up the repository correctly, but you can download a single-file bundle and then that will be updateable in the future. Sorry, I'm told that this doesn't actually work in Fedora 24 right now, but we are planning on backporting it. Excellent; I'm sure Richard will talk more about it tomorrow. So the current package set is taken from the current Fedora Workstation package set. There are some problems there, like the fact that the groups, which are taken from comps, aren't actually supported by the current tooling; I'll talk a bit more about that later. There's an argument, then, that maybe we need to change the current package set; maybe it
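The single-file bundle flow from the command line, for when GNOME Software isn't handling it, looks roughly like this; the file name and application ID are hypothetical:

```shell
# Create a single-file bundle from a repository (done by the publisher).
flatpak build-bundle repo my-app.flatpak org.example.MyApp

# Install the bundle. If the publisher embedded repository information
# in it, later updates can be pulled from that origin automatically.
flatpak install --bundle my-app.flatpak
```

So the single file is just a convenient transport: once installed, the application behaves like one pulled from a remote, including for updates, provided the origin repository was set up correctly.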
doesn't make quite as much sense for a workstation product that's using OSTree, but that's a good discussion that we can have later as well. So how does this improve things for users and developers compared to a current system? One of the cool things is that you can do an atomic upgrade of the whole file system. What does that mean? It means that if you're doing, say, a distribution upgrade and you have however many thousand packages to install, all of those packages, all the post-install scripts, everything, has to succeed. Say you're updating something like the SELinux policy, and that changes halfway through in a way that causes problems; I know there's been discussion recently about what happens if you update the policy and it forbids something that you're doing as part of the upgrade process. You can run into problems there. The idea with an atomic upgrade is that this is all done on the server side at once, in one single transaction that either succeeds or fails cleanly. It's cool, and it does actually work pretty well. You can still run into problems, and it does also mean that if you have one package that blocks the compose, like LibreOffice is doing at the moment in Rawhide, and has been doing for a week or two, then you can't easily get around that; it's just frustrating. But I will say that it's kind of nice to separate these two processes: the server composing, and the actual update process on the client side. What basically happens is that rpm-ostree sets up a load of hard links, and then on the next reboot it switches over to that new system. Again, this means that it's a bit different from the way the current system works. Currently you can just upgrade one package using dnf, "oh, I really want to do this now", and change a few things; if you're not running that package at the time, it'll probably work and you might get away with it. GNOME Software, I guess since it's been in existence, has mostly forced you to do reboots for
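That client-side switchover can be sketched in plain shell: rather than modifying the running tree, you build a complete new deployment (hard links make this cheap, since unchanged files share storage) and then flip a single symlink, which either succeeds or leaves the old tree untouched. This is only an illustration of the idea, not how rpm-ostree is actually implemented, and all the paths here are invented for the demo:

```shell
#!/bin/sh
# Sketch: atomic "deployment" switch via hard links and a symlink flip.
set -eu

root=$(mktemp -d)

# Current deployment: a tree of files, reachable via the "current" symlink.
mkdir -p "$root/deploy-1"
echo "app v1" > "$root/deploy-1/app"
ln -s deploy-1 "$root/current"

# Compose a new deployment. Unchanged files are hard-linked, not copied,
# so they share storage with the old tree (this is what makes it cheap).
mkdir -p "$root/deploy-2"
ln "$root/deploy-1/app" "$root/deploy-2/app"   # unchanged file: hard link
echo "config v2" > "$root/deploy-2/config"     # new file in the update

# The switch itself: replace the symlink in one atomic rename.
# Readers see either deploy-1 or deploy-2, never a half-updated tree.
ln -s deploy-2 "$root/current.new"
mv -T "$root/current.new" "$root/current"
```

After the `mv`, everything under `$root/current/` is the new tree; rolling back would just be flipping the symlink the other way, which is essentially what booting the previous OSTree deployment does.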
package upgrades, and this is the same: it will force you to do reboots, in the sense that you won't get the new system until you reboot. There's no halfway house. There's also, at the moment, a mostly read-only root file system. There are going to be a lot of pieces of software, and a lot of users, who are very used to doing things in a certain way, and they're going to find out that those things don't work, or at least that they'll have to modify the way they work, if they want to use them on something that's using OSTree for the base OS. That's going to be fun. So that's the low-level perspective; how much of an improvement is it in terms of actually deploying applications? I think it's really good from a developer's point of view, because, depending on what sort of software you maintain, if you're maintaining an application that normal users want to use, you're kind of constrained by distributions and their update processes and their life cycles in terms of getting an application out to users. If you're targeting an enterprise distribution or some long-term-support distribution, then you might have to wait several years to get an application update to all of those users, even if it fixes a bug or has some really important new feature that those users need. Sure, you can have things like Copr or PPAs to get that out there, but you've then got to do the packaging work yourself in multiple places, and those users have to trust all of those places. It's difficult; you can GPG-sign packages and things, but it's a fragmented way of doing things, because distributions are fragmented. And that's just getting fixes out there. If you want to let users test your development version, that's something that distributions are going to be even less likely to want to put in there, unless maybe you're running Rawhide and you expect that in the next life cycle that application is going to be updated. You don't generally put
something that's unequivocally unstable into Rawhide; that's not what it's for. But with a Flatpak application, someone can basically stay on that development channel, always testing the latest version and reporting bugs to you, if that's what they want to do. It gives the user a bit more choice, a bit more power, to actually get what they want. If you've ever used Flatpak to actually bundle an application, it's pretty easy, actually, if your application is a relatively standard thing: mostly configure, make, make install, or the CMake equivalents, or various others. There are some changes that are required, but from the application build point of view it's really very little; it's generally renaming maybe a desktop file, and some prefixes. There's not much complication there. There'll be a workshop about this at GUADEC, which is next week, where I'll be helping Alex to talk people through bundling an application for the first time. You'd be very welcome to come along, or find the material on the website afterwards. There's actually a trust model as well: Flatpak uses GPG signatures of repositories, and of the individual commits, I think. So it's actually kind of easier to place trust in those binaries. Whether that will work once things get a bit bigger is difficult to know, because obviously something like GPG is really a web-of-trust model. It's very easy for the user to say, OK, this thing is signed by this GPG key, I trust this thing; but then if that app uses a runtime from somewhere else, signed with a different GPG key, do they trust that thing as well? Do they implicitly trust a GNOME runtime, because that's what we want to provide as a default? We'll see; that's another fun question to figure out. It's also nice from the developer's point of view, because a lot of distributions will, either intentionally or just to make things work, patch things downstream, and those patches
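The bundling steps described above, done with the flatpak build commands; the application ID, command name, and runtime names are placeholders, and a real project would normally drive this from a flatpak-builder manifest instead:

```shell
# Initialise a build directory against a runtime and its matching SDK.
flatpak build-init build-dir org.example.MyApp org.gnome.Sdk org.gnome.Platform

# Build inside the sandbox: the familiar configure / make / make install,
# with the prefix pointed at the bundle's /app instead of /usr.
flatpak build build-dir ./configure --prefix=/app
flatpak build build-dir make
flatpak build build-dir make install

# Finish (declare permissions) and export to a repository.
flatpak build-finish build-dir --command=my-app
flatpak build-export repo build-dir
```

The prefix change is the "renaming some prefixes" mentioned above: the application installs into /app inside the sandbox, and the runtime supplies /usr.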
may never make their way upstream. Some distributions do this more than others; one example where this was a problem was on Ubuntu, with the overlay scrollbars that they used for GTK+. That was a problem for a long time, and there had to be this huge blacklist and whitelist of applications that wouldn't work with it. If, as an application developer, you want to know that your application is going to work, you try to reduce your testing matrix as much as possible: you want to test with a known configuration and know that it's going to work for other people on that configuration. You can do that with something like Flatpak, because you have your runtime, and you can be sure that if someone else is using the same version of the runtime, they're going to have the same experience. That's got to be helpful for testing and QA, and it's got to be good for the users, because they're getting what they expect to see based on the screenshots you're providing. Obviously there are some slight holes there, because the runtime depends on the host system, and there can be differences in the host system; again, we'll see how that goes. So if you want to do this yourself, how would you go about it? I'm just going to throw some commands on the screen that you can either copy down or completely ignore; that's fine. It's a pretty simple process. Basically, you initialise an OSTree repository, just like a Git repository really, and you use rpm-ostree with a package manifest and a few other configuration directives, and you say: slurp all of these packages and put them into this OSTree repository, please. The link at the bottom is a configuration that I've put up. It currently uses the workstation package set as a base; there isn't really much else, it's very basic at this time. But I'm running a VM on my system using that, and it works the way the workstation works; it's the same package set, basically, so it's very similar.
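The commands in question look roughly like this; the repository path and manifest file name come from my own setup, so treat them as placeholders:

```shell
# Initialise an OSTree repository, much like 'git init'.
mkdir -p repo
ostree --repo=repo init --mode=archive-z2

# Compose a tree: rpm-ostree reads a JSON manifest listing the packages
# (the workstation package set, in this case) and commits the result
# to the repository as one bootable file system tree.
rpm-ostree compose tree --repo=repo fedora-workstation.json
```

The archive-z2 mode stores compressed objects suitable for serving over HTTP, which is what clients then pull from.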
Pretty much everything works, with the caveats of having a mostly read-only file system and things like that. You could go into great depth about all the configuration directives that you have in this package manifest, and there are a lot of them, but it's not so interesting for users, and hopefully this sort of thing is hidden from them. This is something that really should just take the package list from comps, and the user shouldn't have to think about it too much. You can actually convert an existing system to use this, so there are a few different ways to test this stuff out. You can basically initialise your current root file system as something that you can then pull this OSTree repository into, and again, there are some commands that will help you do that, if that's what you want to do. It's pretty ugly at this stage, converting an existing system: it works, but it's pretty low-level at the moment. We don't envisage that people are actually going to be doing this; we hope that they're going to be using an installer in the future, and there has been some work on that, which I'll talk about next. There are quite a few different ways to create an installer. The way it's done at the moment uses Lorax, which is a tool that Anaconda uses to bundle everything up and put it on an ISO image. There's another tool that goes on top of rpm-ostree: you've got OSTree, rpm-ostree, and then rpm-ostree-toolbox, which is another set of scripts on top, and you can do various things with that. One of the things you can do is create ISO images from your OSTree repository: you've got exactly the same package information, exactly the same file system, and you've additionally got that in an installer. rpm-ostree-toolbox isn't, I think, what Koji uses to make the official Fedora Atomic images, but it does use Lorax, so there's going to be some fun there trying to get
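The rough shape of converting an existing system is something like the following; the OS name, remote name, URL and ref are all placeholders, and the exact invocations have changed between OSTree releases, so this is only a sketch of the steps involved:

```shell
# Prepare the existing root for OSTree deployments.
ostree admin init-fs /
ostree admin os-init fedora-workstation

# Point at the repository that was composed on the server side
# (GPG verification disabled here only because it's a local test).
ostree remote add --set=gpg-verify=false local http://example.org/repo

# Pull the tree and deploy it; the new tree is booted on next restart.
ostree pull local fedora-workstation/24/x86_64/standard
ostree admin deploy --os=fedora-workstation \
    local:fedora-workstation/24/x86_64/standard
```

As said above, this is low-level and not something we expect end users to do; the installer work exists precisely so that nobody has to type this.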
those things all to squash together and actually work. He's done most of the preliminary work there; he's got some working installer ISO images that he's producing on a sort of continuous integration basis, and you can download them from the link there. He's also got a fork of the OSTree configuration for the workstation that I linked to earlier, this link at the bottom here. If you go there, you'll find a fork where he's been working on the continuous integration and on the installer. So I'd definitely encourage you, if you're interested in contributing, to look at those and see whether you find anything interesting, and whether you want to help. It's not really clear what method we're definitely going to use. We want to keep Anaconda as the installer, because we don't particularly want to maintain our own installer; Anaconda does what we want, and maybe some stuff we don't want as well, but that's not the point here. So I don't know, maybe there will be some different way to install, but we think that's how it's going to work. So I bet this is going to be great and it's going to be the future, but how is it right now? Well, if you were to use it right now, what you'd quickly find is that some applications are available as Flatpak bundles, and most are not. That isn't too surprising, because as a fully-fledged project it's only been released in the Fedora 24 timeframe, so you wouldn't expect it to have taken over the world quite yet. But there is going to be more support, and as more people install the later versions of Fedora and other distributions, more people are going to be exposed to it and able to use it, so I think the number of bundles is definitely going to increase. GNOME is running a continuous integration project where they're producing bundles for all the applications in the GNOME desktop, and that's very easy because we use this Build
API concept that Colin Walters came up with, which is basically configure, make, make install as a standard that your application is expected to support from the source tarball. So that's something that should be pretty easy for others to jump on as well, because that's what most applications already do. If you went to Christian's talk, you'll have heard a little bit about portals. Those are the bits that sit between the sandbox and the host system, where you can essentially poke holes in the sandbox and make it easier for an application to do certain things. For instance, if it wants to access some hardware like a webcam, normally it wouldn't be able to have access to the device node, because as soon as you give an application access to a device node, it can basically do whatever it wants with that device; and with webcams and joysticks and things, you can do some pretty scary things if given the chance. So we don't really want to give applications that access; we want to have something in between that can proxy it. Those interfaces are being written now, the APIs to actually access various bits of the system. There's a lot of development there; they are most definitely nowhere near finished, but it's coming along quite quickly. We already have basic things like file choosers. I think there's a printing proxy that sits in the middle as well, and I think there's an audio one, for access to the microphone and to sound output. Probably we'll let applications output whatever sound they want, but they can't use the microphone, because the microphone is something you can eavesdrop on users with, whereas just outputting sound is something you'd expect an application to want to do. A lot of that isn't even necessarily just the API or the interface; it's actually figuring out what a sensible default is. Luckily, other distributions and operating systems have really blazed that trail for us. You're kind of used to mobile
On mobile platforms, you know, you install an application and it says it needs these permissions. We're not doing that; we're doing sensible defaults, so that the user can run something for the first time and hopefully not see a permissions dialogue, unless it's something like a webcam, where the application can potentially eavesdrop on the user. But a lot of that is still to be finalised.

The sandboxing features that it uses, like user namespaces, are not heavily tested, in the sense that they haven't necessarily been used in this way before. I think only recently, a month or two ago, there was a critical vulnerability found in user namespaces. I anticipate there will be lots more of those vulnerabilities found, and lots more cases where people say this doesn't work the way it's advertised. Of course, the only way we can actually improve these things is by getting more people to use them, and at this stage we're not hyping the sandboxing capability as something that will solve all your problems overnight; we're making sure that it gets there.

It also depends on systemd user sessions, so if you want to run Flatpaks on RHEL 7, you'll find that you can't do that at this stage. I'm going to be working on that support soon, because basically all it does is depend on setting up cgroups in a certain way, so that your running application is put inside a cgroup; that's really all it needs the systemd user session for, just to confine the application inside a cgroup. So that will also be applicable to other systems that aren't using systemd user sessions. It's coming, we know about it, and it's being worked on. The sandboxing also doesn't rely on user namespaces alone for its security: user namespaces are one of the things it's using, but it also uses PID namespaces; it's whatever bubblewrap is doing, and I can't remember exactly which bits that is. PID namespaces are just one of an array of things. Eventually we're going to add some form of SELinux support; I don't think there's anything there at the moment, so I don't know what's happening with that just yet. And there are probably plenty of other things that are problems we haven't thought of yet.

Going back to the problems with an OSTree workstation, where you might not be expecting these things: you don't have a writable /usr, which means things like alternatives don't work. If you want to switch between vi and vim, or Emacs and XEmacs and stuff like that, that's normally done with basically a symlink that you change, and that doesn't work. A lot of those things suddenly aren't going to work in the same way, so we either need to come up with new ways to do them, or say conclusively, no, they're not going to work, they're not going to be supported.

Currently the package layering is very, very rough, but that's just because the support has only just been added. If you want to add extra packages that are currently RPMs and not Flatpaks to your system, things like the RPM post scripts don't run; there's at least one example of this that was already mentioned. Some people probably won't like the fact that you have to reboot to switch into the new system. I don't see that as a massive problem, because it means the package upgrade process is so much quicker and so much more reproducible, and you can also roll back to the previous version very easily if the new version doesn't work. But some people will say, well, I want to be able to do it live, and I don't think that's going to be supported.

Then there's the SELinux policy. Colin really likes to complain about the SELinux policy breaking things, and I guess it does, because inevitably you get some package updated that doesn't have an updated SELinux policy change available. It's not that SELinux is the problem.
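Stepping back to the package layering and rollback workflow for a moment, it looks roughly like this. These are real rpm-ostree verbs, but treat the whole thing as a sketch; the package name is just an example.

```shell
# Layer an extra RPM on top of the immutable base tree; this stages a new
# deployment rather than modifying the running system ('pkg-add' in earlier
# rpm-ostree releases).
rpm-ostree install vim-enhanced

# Inspect the deployments; the newly staged one becomes active after a reboot.
rpm-ostree status

# If the new deployment misbehaves, boot back into the previous one.
rpm-ostree rollback
```

This is what makes the reboot-to-upgrade model tolerable: every change is a new deployment, and the old one is still there to fall back to.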
It's just that you're constantly playing catch-up with the policy; that's not really an OSTree problem, more just a problem that OSTree exposes. There are going to be plenty more surprises, and we just need more users, more testers and more people contributing to really bring those numbers down. But it's already definitely usable: Patrick is giving a talk tomorrow, I think, where he talks about using it on his home system, so he'll be more concerned with the practical implications.

The problem with actually integrating these things into the Fedora processes is that currently Koji is really, really good at spitting out RPMs and, surprise surprise, with this new technology it's not very good at spitting out Flatpak bundles. Again, that's not a fault of Koji; it's just that no one has written the support to do that yet. Also, we're not quite clear on what Koji spitting out Flatpak bundles is going to mean: do we want it to also spit out runtimes? We're not really sure yet. As I said, there isn't an easy way to produce an installer right now; the stuff that Colin's worked on is a crazy amount of different configuration files to massage a lot of things into actually producing an installer. It's only complicated because that's just the way it is at the moment. We're not really sure how that's going to turn out, but there is an installer that does work, for the timeframe of, I guess, Fedora 26, because we're going to miss 25, I think. Again, I talked about rpm-ostree not taking comps, so not taking our existing way of defining package groups as input for the package manifest; that needs some extra library support or something, I think, and I think Patrick has a workaround for that which he's going to talk about during his talk. There are undoubtedly other things where the current Fedora infrastructure doesn't do what we want. It's not because the Fedora infrastructure sucks; it's just that we haven't written the support for it yet.

So there are some resources there for the bits that we're using. There isn't really any definitive resource for the OSTree workstation effort, but the desktop mailing list is probably the best place to go. I hang out in the Fedora workstation channel on Freenode, as well as the Flatpak channel and the GNOME OS channel on GIMPNet, so you can come and ping me there; I'm amigadave on IRC and everywhere else. Colin Walters has been doing a lot of the underlying OSTree work, obviously; it's kind of his creation, and he also helps me with the workstation work when he has time, but he's doing a million other things, so I don't know whether he'll be heavily involved in the future or whether it's just another thing that he'll do a little bit of work on here and there. So I'm open to the floor.

Question: what about infrastructure? Infrastructure to build the Flatpaks and also to distribute them, because it's kind of crude currently. And also, how to integrate this into the distribution: we want, maybe in Fedora, to be able to build the Flatpaks and distribute them, and for instance to use our spec files and translate them to actual Flatpaks.

It's technically possible. Right now, that's basically what rpm-ostree does: you take some RPMs, some binary RPMs built from the spec files, and you create a commit in an OSTree repository containing that information. In terms of Flatpak bundles, it's not quite clear whether an application will actually provide this itself; whether an application, in its source code, is going to provide as standard a "here's what you need" Flatpak manifest. For GNOME applications we have a repository of these manifests that's separate from the applications themselves, so we're not really sure what other upstreams are going to do. And then you have the problem of the runtimes, which at the moment are quite separate and off to the side, and there isn't a standard way of building those. Flatpak does have a tool called flatpak-builder.
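As a sketch of what such a flatpak-builder manifest looks like: the app id, module name and git URL below are made up for illustration, but the keys are the real manifest keys.

```shell
# Write a minimal flatpak-builder manifest for a hypothetical application.
cat > org.example.App.json <<'EOF'
{
  "app-id": "org.example.App",
  "runtime": "org.gnome.Platform",
  "runtime-version": "3.22",
  "sdk": "org.gnome.Sdk",
  "command": "example-app",
  "modules": [
    {
      "name": "example-app",
      "sources": [ { "type": "git", "url": "https://example.org/example-app.git" } ]
    }
  ]
}
EOF
# flatpak-builder then builds each module with the Build API (configure, make,
# make install) into /app inside a sandbox; commented out here because it needs
# the runtime and SDK installed:
# flatpak-builder --repo=my-repo build-dir org.example.App.json
```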
flatpak-builder makes this a lot easier, which is why you have the manifest format and various other bits of metadata that are somewhat standardised. But it's not clear yet how this is all going to fit together on the server side, and we do need to figure that out. There are various bits that it also pulls together, like some AppStream stuff, which at the moment is mostly done server-side in Fedora, I think; with Flatpak that's all inside the bundle, separate from the rest of the distribution.

There are plans for creating Flatpak bundles out of RPM spec files; that's what we're doing this year, and it's still very much under discussion how we're going to do it. But we certainly know that not every upstream is immediately going to adopt the Flatpak manifest format, and we have spec files and definitions for a whole lot of applications in Fedora, so that's how we can get the content we're going to need: making most of the small applications that exist in Fedora available as Flatpaks. The idea, which is still under discussion, is to have Koji be able to take a spec file and rebuild it as a Flatpak, and to have a process for Flatpaks that we're creating out of Fedora, adapting the spec files enough to work in a special way with Flatpak, so it gives you the same functionality you have now.

There seems to be a lot of crossover between that and the server-side Docker images, which we're also supporting in Fedora, in the infrastructure and so on: the ability to basically take a manifest with a list of RPMs and generate a different artifact from a compose. From an infrastructure, release engineering and process point of view there can be overlap with a lot of that, because it's a container with a bunch of binaries in it. Whether it's wrapped in a Docker thing or a Flatpak thing or any of the other dozen formats that are out there, the process isn't really that different in the creation of the artifact, and there's already been a huge amount of work to integrate that into dist-git and Koji and various other build systems. If you could go down that route, you would save yourself vast amounts of time, and vast amounts of other people's infrastructure and release-engineering time, to produce the same output; it's pretty clear that we can't duplicate all of that release engineering ourselves.

There are some differences with Flatpak, though. One of the main things is that packages are actually going to have to be rebuilt to make Flatpaks out of them, because instead of installing into /usr/bin and so on, a Flatpak application installs into /app/bin. You can relocate the files, but they may not work when relocated, so it's a lot of engineering to make sure everything ends up in the correct location, and it really needs to be built that way in the first place. Patrick's talk is going to discuss more how we can really leverage this.

So that's it: what we're trying to find is the way to move at least some of these applications over. If you have any other questions, come and find me. Thank you very much.
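As a footnote to the infrastructure discussion above, the rpm-ostree side (take a list of binary RPMs, produce an OSTree commit) is driven by a manifest called a treefile. The ref and package set below are illustrative, not the actual Fedora Workstation configuration.

```shell
# Write a minimal rpm-ostree "treefile": the manifest listing which RPMs get
# composed into an OSTree commit that clients can then deploy and boot.
cat > workstation.json <<'EOF'
{
  "ref": "fedora/25/x86_64/workstation",
  "repos": ["fedora"],
  "packages": ["kernel", "systemd", "gnome-shell", "rpm-ostree"]
}
EOF
# The compose step itself needs root, an initialised OSTree repository and the
# referenced yum repo definitions, so it is only shown here:
# rpm-ostree compose tree --repo=/srv/repo workstation.json
```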