Welcome, everyone. We have more than 10 lightning talks scheduled and about an hour for the session, so each speaker has six minutes. The first speaker is Miroslav Suchý, and the next speaker after him is Jun Aruga with the RPM PyPI installer. It's easy. Okay, go ahead. Hi. For those who don't know me, I'm Miroslav Suchý, and I want to show you two new features in Mock. First of all, there is the --forcearch option, which lets you build packages for a different architecture. So, if you want to run mock --forcearch ppc64le, you just need to install one small package, qemu-user-static, and Mock will use QEMU's user-mode emulation. The emulation is slow, roughly by a factor of two, but if you want to do something for alternative architectures, use this; it is much easier than setting up a whole new machine just because you need to test something. The second feature is the bootstrap chroot. It has been in Mock for about a year. It is still disabled by default, because there are some remaining issues, but it is good to know it is there. The problem it solves: if you are on, let's say, RHEL 6 or RHEL 7 and you want to build for Rawhide, you cannot do it directly, because there are some things the target needs that your host does not have. The bootstrap chroot is a nice solution, because Mock first creates a very small chroot that contains only rpm and dnf, and from that chroot it builds the final chroot, using, for example, Fedora 28's own rpm. So the final chroot is built with the target distribution's own tools, not the host's.
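The two Mock features described above, --forcearch and the bootstrap chroot, would be used roughly like this. This is a sketch, assuming the Mock syntax of that era; option and config key names have changed across Mock versions, and the package name is only an example:

```shell
# Cross-arch builds with --forcearch need the qemu-user-static package,
# so foreign-arch binaries inside the chroot can run under emulation.
sudo dnf install mock qemu-user-static

# Build a ppc64le package on an x86_64 host (illustrative src.rpm name):
mock -r fedora-28-ppc64le --forcearch ppc64le --rebuild hello-1.0-1.fc28.src.rpm

# The bootstrap chroot is off by default; enable it for a single run:
mock -r fedora-28-x86_64 --bootstrap-chroot --rebuild hello-1.0-1.fc28.src.rpm

# ...or globally, via a line like this in /etc/mock/site-defaults.cfg:
# config_opts['use_bootstrap_container'] = True
```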
Right now this is really useful, and it is currently the only way to build, for example, Fedora 28 packages on RHEL 7, because some packages use features that the host rpm does not support. One caveat concerns the packages that go into that minimal chroot, the things pulled in by rpm, dnf and their dependencies: if you use some fancy new rpm feature in one of those packages, the bootstrap chroot itself cannot be built, and you will not be able to build, say, Fedora 32 or something like that. So in practice, please try not to use any fancy rpm feature in those minimal rpms; but otherwise, everywhere else, you can use any feature you like, and the bootstrap chroot will be the solution for building that package on any supported platform. That's all, thank you.

So, good morning everyone. My name is Jun Aruga, and I have six minutes for my presentation. Today what I want to share with you is a product I developed recently. It is an installer to install the rpm Python binding into any Python environment. I want to ask: how many people know the rpm Python binding? Raise your hands. Okay, that's cool. So, for example, Koji and some Fedora packaging commands, the rpkg commands, use the rpm Python binding. The main purpose of the binding is to parse rpm spec files. But the challenge is that people sometimes want to use the rpm Python binding in, for example, a virtual environment, to develop a project that uses the rpm Python binding, and this installer solves that problem. And some people want to use the rpm Python binding on, for example, Python 3.5, not 3.6: the rpm Python binding is provided by rpm for Python 3.6, but people want to use it with different versions. And on CentOS, the Python binding for Python 3 is not provided at all, but people still want to use it, and my installer solves this problem too. Oh, we still have three minutes. And the main benefit is this: if your project needs the rpm Python binding as a runtime dependency, your project maybe cannot be uploaded to the PyPI remote server, right?
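The workflow being described can be sketched for a virtualenv as follows. This assumes the rpm C libraries and development headers are available on the host, which the installer needs in order to build the binding:

```shell
# Inside any Python environment, for example a fresh virtualenv:
python3 -m venv venv
. venv/bin/activate

# Installing rpm-py-installer fetches the rpm source matching the
# system rpm and builds only the Python binding part of it.
pip install rpm-py-installer

# Afterwards "import rpm" works inside this environment:
python -c 'import rpm; print(rpm.__version__)'
```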
Because the rpm Python binding is not published as a PyPI package; but using this installer, you can publish your package to the PyPI server. And actually Koji is using this technique, so you can publish your Python application to the PyPI server. The main architecture is: you run, for example, pip install rpm-py-installer, and at that point the installer gets the source code of rpm and builds only the Python binding part, not the entire rpm. That is the main architecture, and that is what makes it possible to build the rpm Python binding on any version of Python. So yeah, you can use it, and I still have two minutes. Okay, and the name is rpm-py-installer. rpm-py-installer, just remember the name. You can search for this name on GitHub or on the PyPI website. So, any questions? Two minutes, I see. Hmm, hmm. No? Okay, so it's done. Thank you very much.

Thank you. The next speaker is Luca Bruno. His talk is Ignition: First-Boot CoreOS Provisioning. I thought that was somebody else before, but okay.

Hi, my name is Luca. I used to work at CoreOS before, and I'm going to talk to you about Ignition, which is something that we did as a new project, let's say, to solve a few issues. And I'm going to let you guess the design. This is the data, let's say. What are the design details, or the design goals, that we are aiming for? So first: we were running cloud-init at some point, and the problem is that we want to configure the machine, the cloud machine or the bare-metal machine, when it starts up for the first time. But we have a problem with cloud-init, and I'm going to let you guess what these problems are. In which language is cloud-init written? Have you ever used cloud-init? How many of you have? A few. In which language is cloud-init written? Okay. What do you need in order to run a Python binary? And? Okay. And then just the interpreter, nothing else? Some libraries, cool. So where are the interpreter and the libraries living on this machine? Ha ha, cool.
So that means that now you have to package and manage Python interpreters and Python libraries and tell people: hey, this is the library that we provide you. Cool. The problem is, in our operating system we don't have a Python interpreter and Python libraries. So, not good. And then the other thing is: what do you configure with cloud-init? Lots of stuff, including, I don't know, network configuration, right? Cool. And where does the user data for cloud-init come from? Somewhere outside, on the network. So how do we do it? Well, you bring up some network, then you get the configuration, then you reconfigure the network, and then you hope that there is nothing else racing with you trying to do something with the network, because then it's like: well, there is a network, now there is not any more, now there is a network but it's configured in a different way. And this is exactly why we wrote Ignition. So Ignition is basically a replacement for cloud-init, and also a bit for Kickstart and Anaconda, the same kind of ideas. And the main difference is that it's not written in Python; it's written in Go, so it's just a single binary. It doesn't allow you to do a lot of things in a non-declarative way, so you cannot run arbitrary commands or whatever. Instead, it is supposed to be a machine interface: you generate some JSON according to a JSON schema, and that is a declarative configuration of what the machine should look like once it is provisioned. And then Ignition just takes it from somewhere, the network or wherever, and it does all the configuration before the machine boots, in the initramfs, before the switch to the real root filesystem. And so at this point we have a deterministic process where the machine boots and it applies the Ignition configuration. If that declarative configuration is actually applied, then the machine boots.
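The declarative JSON being referred to looks roughly like this. A minimal sketch in the spec-2.x format of the Container Linux era, with field names from the Ignition config spec; the SSH key and hostname are placeholders:

```json
{
  "ignition": { "version": "2.2.0" },
  "passwd": {
    "users": [
      { "name": "core", "sshAuthorizedKeys": ["ssh-ed25519 AAAA... user@host"] }
    ]
  },
  "storage": {
    "files": [
      {
        "filesystem": "root",
        "path": "/etc/hostname",
        "mode": 420,
        "contents": { "source": "data:,myhost" }
      }
    ]
  }
}
```

Ignition fetches a config like this in the initramfs and applies it once, before the first boot of the real root filesystem.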
If that declarative configuration is not valid, or cannot be applied for any reason, then the machine doesn't proceed with a normal boot: it is stuck in the initramfs with some errors that you can inspect, and after some time it's going to reboot and try to provision again. And that's the main idea behind Ignition. That's the main reason we did it, and that's the reason for this not-invented-here syndrome, which in this case is more like: we had different design goals and different design details, and we wanted to actually solve the tickets and bug reports that people were filing against us. And that's it. And speaking about Ignition: it is actually one of the components that we want to carry over from Container Linux, and it's going to be part of how you provision Fedora CoreOS in the future. Questions?

So, we don't have any performance goal at all; the goal we have is race-free and correct provisioning. As a side effect it may be faster; it depends on what you're doing. The thing is, with Ignition you can provision basically whatever, so if you want to fetch a 3 GB file before the machine boots, it's going to be slow depending on the network, but that's a detail. Other questions? So... somebody cut me off at six minutes, please, I'm not counting. So... yeah, the question is whether we can port Ignition to other random distributions. The answer is yes and no. The yes part: yes, it's just a binary, a static binary that takes some JSON and does some operations. So that part, yes; packaging it for Fedora was not too hard. The hard part is that it was really designed to fit into the Container Linux bootstrapping process, which assumes a lot of things: everything is read-only, there are no packages, there are not many things, and you can boot, do some stuff, and then continue with the boot. And that's the hard part.
So the integration with Dracut and the initramfs is probably going to look different for every single distribution, depending on the distribution. Apart from that, no, it's just a Go binary that takes JSON and does some stuff. And that's it. Like right now, in Fedora CoreOS, we don't do any... so Ignition can do disk partitioning and disk manipulation, but right now, in Fedora CoreOS, I think we still don't do that, because that assumes the old Container Linux setup, where you have two /usr partitions, a read-write root filesystem, and so on and so forth. So that's going to take a bit more time, I guess. Other questions? Last one? No? I cannot answer that completely. It supports some of the same providers, some of the same methods of fetching, mostly the remote endpoint metadata, and then, depending on the specific provider, some local sources, like a local disk or a local firmware configuration or whatever. I didn't check all the providers in cloud-init, so I don't know. But we support everything from AWS to bare metal to GCP to all the other clouds. One dark corner is VirtualBox, and that's something that we are going to work on. Apart from that, it's mostly okay everywhere. That's it. Thank you very much.

Thank you, Luca. The next speaker is Praveen Kumar, with a Fedora ISO for Minishift. Yep. I have some slides for that. So my name is Praveen Kumar, I work for Red Hat in the developer tools group, and I'm talking about the project I usually work on, Minishift. This is a way to provision an OpenShift cluster locally, as a single-node cluster, and I'll talk about why we are doing it and why we are using Fedora. So Minishift, we say it runs OpenShift locally. This is the basic architecture of how Minishift actually works.
So we use something called libmachine, and libmachine manages the VM lifecycle for Minishift. Using that, Minishift creates a VM using the native hypervisor on the system: on Linux we use KVM, on Windows we use Hyper-V, and on the Mac we use xhyve. Then we use the OpenShift client binary, which internally does "oc cluster up", and "oc cluster up" deploys the single-node OpenShift cluster for us. We already have some existing ISOs to try out in the VM: we have something called the Boot2Docker ISO, and we had a Minikube ISO, which we deprecated because it was not working as we expected it to work. So why do we want to use Fedora? Very simply: there are now a lot of different container technologies coming into the picture, like Podman is there, CRI-O is there, Buildah is there, and we also wanted to use the same things. Right now "oc cluster up" doesn't have an option to select a different container runtime, it still uses Docker as the default; but even then, if a user wants to try out some commands with Podman or some commands with CRI-O, they can log in, SSH into the VM, which is very easy with Minishift because it's already there: you just do "minishift ssh" and then try out all the commands. Okay, somebody will say: yeah, I'm already using Fedora, so what's the use of this? Because I can directly run "oc cluster up" on my own laptop and then I have a local OpenShift cluster up and running. So yes, you can do that.
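The flow just described, libmachine creating a VM with the native hypervisor and "oc cluster up" running inside it, boils down to a couple of commands. A sketch, assuming Minishift's CLI of that era; the driver and version values are only examples:

```shell
# Create the VM with the native hypervisor (KVM on Linux) and run
# "oc cluster up" inside it, yielding a single-node OpenShift cluster:
minishift start --vm-driver kvm --openshift-version v3.6.1

# SSH into the VM to poke at the container runtimes directly
# (e.g. to try podman or CRI-O commands by hand):
minishift ssh
```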
That's good if you are only trying to use a single OpenShift version. If I want to use, say, OpenShift 3.6.1, and tomorrow 3.6.2 is released, with "oc cluster up" I can only provision one single OpenShift cluster on my laptop. But if you are using Minishift, we have something called profiles: each profile can have a different version of OpenShift, so you can actually test your application across whatever versions you want to select. Then, how do we build the ISO we are using? We use a Kickstart file, the old way. We already have a GitHub repository, and I was having a talk with Mohan, and we want to make it part of Fedora; once we have that, we can release the ISO the official way. But I have a locally built ISO myself, and this is how it actually looks: if you SSH in, you can check which Fedora version I'm using. If you don't believe the screenshot, I would have to go to the terminal and try it out, but I don't know how much time we have. And what if you have some issue? We are on Freenode, in the Minishift channel; we have a mailing list, we have an issue tracker, and we think we have very good documentation around what we do. But if you still think there is some confusion around the documentation, let us know; we have a dedicated person for the documentation. So that's it. Are we good on time? Yes, a couple of minutes for questions. Ah, okay, so if you have any questions... yeah, sorry, what? How much in resources would you use for these?
Yeah, so the default we use for the VM is 2 GB of RAM and 20 GB of disk space, but that actually depends on the application. Some of the people using Minishift, if they want to deploy applications which are more resource-hungry, usually use 4 GB or 6 GB of RAM; but for the user who just wants to play around with OpenShift and see how it works, I think the default is the best we have. So this is a live ISO, and for the disk there is a standard provisioned disk: we create 20 GB, or whatever the user specifies, and we mount it in the live ISO. So whatever is written to the disk stays there until you delete that VM: you can stop and start, and your application is up and running. But if you do delete it, nothing will be left. The good thing is that we also have something called host folder mount, which uses SSHFS, so you can actually mount a host folder into the VM and keep everything you need for persistence there. As for making it an Amazon image: I think, because the content is not different from what the default Fedora Amazon image has, it just doesn't make sense. What we want for developers is that, if they want to try out Minishift on their local host, they can use the ISO and try it out. I can talk to you afterwards.

Thank you, thanks. The next one is Paul Frields. Is he here? Nope, okay, let's skip that for now. Jonathan, with "What is zchunk". Yep, thank you.

Hey, can you guys hear me now? Sorry, I'm just going to try to make that a bit taller. Okay, if any of you have been talking to me, you'll know that right now the thing I've been working on is zchunk, and I'm looking out here and I think probably about half of you have had to bear the burden of listening to me go on and on and on about this. So I just thought I'd do it as a quick lightning talk. What is zchunk? Why do you care?
Most of you probably don't, but let's get into it anyway, and maybe some of you will think: hey, this is awesome. zchunk is a new compression format, and you're like: why on earth do we need a new one? We've got about 10 of them, right? Well, the thing is, zchunk is not actually a new compression algorithm; it reuses another compression format, and in fact it can use different compression formats. What it is, is a method of compressing files where you split the file into independent chunks, and the reason you do this is so that, when you want to download a new version of the file, you are not stuck downloading the whole file again. A zchunk file can be created using the zck command, in the same way that you might use the xz or gzip commands. It can be decompressed using the unzck command, like gunzip or unxz. But there is an extra tool there, a zck download tool, and if you give it an older version of the file and point it at the new file on the internet, it will download only the chunks that have changed; depending on how big the differences between the files are, this can get you a rather dramatic reduction in the amount that you download. So why am I talking about this here?
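The idea of downloading only the changed chunks, described above, can be sketched in a few lines of Python. This is only an illustration of the principle, using fixed-size chunks and SHA-256; it is not the real zck on-disk format, which uses its own chunk boundaries and a header of chunk checksums:

```python
import hashlib

def chunk_digests(data: bytes, chunk_size: int = 4):
    """Split data into fixed-size chunks and pair each with its SHA-256 digest."""
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    return [(hashlib.sha256(c).hexdigest(), c) for c in chunks]

def chunks_to_download(old: bytes, new: bytes, chunk_size: int = 4):
    """Return the new-file chunks whose digests are not already present
    locally -- i.e. what a zchunk-style client would actually fetch."""
    have = {digest for digest, _ in chunk_digests(old, chunk_size)}
    return [c for digest, c in chunk_digests(new, chunk_size) if digest not in have]

old = b"aaaabbbbccccdddd"
new = b"aaaabbbbXXXXdddd"   # only the third chunk differs
print(chunks_to_download(old, new))  # -> [b'XXXX']
```

The client compares the digest list from the new file's header against the chunks it already has, and fetches only the missing ones; the unchanged chunks are reused from the local copy.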
We are looking to zchunk Fedora's metadata: the metadata that you download every single time you run "dnf update", when you are looking at that beautifully long bar that moves and moves; and if you are like me, trying to do this over 3G, sometimes it is moving very, very slowly. Using zchunk, you can download only the differences. This was a feature we proposed for Fedora 29. The bad news is that it will not be completely done for Fedora 29. My goal is to have the metadata generated for Fedora 29 and Fedora 30. There will be opt-in testing for people who are interested in risking their lives and their ability to update, and, assuming that everything works out well, we can make it all-in for Fedora 30. I have implementation details, but honestly, I think at this point I will open it up for questions.

I ran some tests. Basically, the results are: a zchunk file has two parts, a header that stores the checksums of the chunks, and then the body, which is the actual chunks themselves. You have to download the header every single time, so on primary.xml you are looking at a minimum of maybe 50-100 kB that you are going to download every single time. On top of that, there is the difference. If you update every single day, you might be looking at an additional 50 kB, whereas the normal primary.xml is maybe 3-4 MB; so that drops down to 100-200 kB, including the header. If you wait longer between updates, you are going to have to download a whole lot more. In the back? The chunk size varies; we are using a couple of different ways of generating the chunk size. With zchunk you can manually end a chunk wherever the heck you want to; the key thing is that you always do it in a consistent manner. So in createrepo, when we are ending a chunk, we end it at the end of every single package. Well, actually, for every source: we try to combine packages that have the same source RPM together into the same chunk, because there is not much point in putting them in separate chunks. We do use compression dictionaries, which means the compression we get is still very, very good; it normally beats gzip by about 10%. What does this mean? Okay, right now it means absolutely nothing. But I have a vision; I don't know if I have told you this, but I have a vision. Is there anybody who works on RPM here? Okay, please don't shoot me. My vision, right here, is to kill delta RPMs with fire, and instead make zchunk a payload format for RPMs. There are some issues there, because the information on the file system is uncompressed; but zchunk has the ability to do feature flags, so I have some ideas on how you can still validate that the information you're getting off the file system is the same. You're looking very suspiciously at me right now, and I don't blame you. Yeah, I do have a vision for how this could completely destroy the need for ever using delta RPMs again; and I was the one who worked on getting them in originally, and I love them, except I hate them. So it's kind of a... yeah.

All it is is, you know, for createrepo_c, when you're compressing the file, you're just compressing the file; in the whole eight-and-a-half-hour compose cycle, at the compression level we're using, it might take a little bit longer, another minute, another two minutes. Yeah, yeah, yeah. And if we could get rid of delta RPMs, the increase in the time it takes to create would actually move to Koji, because it's when Koji creates the RPM that compressing it would take longer. But the beauty of zchunk is that we don't have to know the old version: with delta RPMs we have to know the old version and the new one, and delta between them; with zchunk, it's up to the client to work out what it wants to grab. Thank you.

The next one is Dan Horák. Perfect, and after that is Florian, then Marie, and then Adam.

Hello, I guess you know me. I'm Dan Horák, working on the alternative architectures, primarily on the IBM ones, and the
lightning talk is about using an OpenPower workstation or system as a workstation for daily use. Yeah, the answer is yes, you can use a non-mainstream architecture as a daily workstation. I personally use it with Xfce; there are still some issues with the 3D graphics needed for GNOME, but the community is collecting the issues, trying to find the fixes and get everything upstream, so yeah, it should work fine. Actually, the workstation, the Talos workstation that's based on the IBM OpenPower POWER9 CPUs, is the result of the OpenPower Foundation effort. It's based on a reference design improved by the Raptor Computing Systems company. One of the goals they had: it's not only to provide some non-mainstream architecture, that was not the primary goal; the primary goal was rather to have an owner-controllable system. Which means that for the OpenPower system you have all the sources on GitHub, so you can compile it yourself, really from the beginning. For the Talos workstation, or the Talos systems, you also have the sources for the system FPGA, which does some power-on stuff, really to power up the system. There is also a second computer in the system, the management one, the BMC; that also runs a fully open-source operating system, OpenBMC, also on GitHub. So yeah, it's here, it's usable. There were some benchmarks done by the Phoronix guys; they discovered some issues, or performance issues, I think primarily in the multimedia stuff, and immediately after they published the results, there were some other guys and teams who picked up the challenge and started to work on improving the stuff for POWER, so we should get, even in those areas, performance at least comparable to Intel or AMD systems. Yeah, unfortunately my workshop wasn't accepted, so I didn't bring my system with me, so I cannot show it to you. That's probably all from me. So, any questions? For what parts, you mean? For what parts of the architecture? It's IBM POWER, it's the PowerPC server architecture. You mean for your project, or any
project? Yeah, there are some projects in progress. We are working with the CentOS guys to start some infrastructure inside the CentOS CI to allow upstreams to run their CI on the infrastructure we have; we also have some other virtual machines that we can give developers access to, and they can run their own instances of whatever CI systems they want. So yeah, it's definitely possible: just talk to me, or talk to the CentOS guys, and we can definitely figure out some solution. These architectures should be possible with the CentOS CI. The other architecture we care about in Fedora is a bit more difficult, but we also have a solution, or we had a solution until about two weeks ago, and hopefully some time next week it will be back, because the machine we used, the hypervisor, somehow died and they are reinstalling it; so it should be back again. Any more questions? So, thank you.

The next one is Florian Festi.

So, the question of what to do with the changelog has come up a couple of times. People have been annoyed at writing the changelog in the spec file and then having to type it again into git; and every time you have some changes and put them in another branch, it also gives a huge merge conflict. It's a pain, and it was always in the back of my mind: what to do with that? But I didn't really figure out what could be done. I mean, yeah, someone could just write a script to create a changelog, then use %include, and not bug the RPM developers with your stupid details. But it kept sitting in my mind, and then I was under the shower, and my brain cooled back to a working state, and it made click: I realized the problem is actually not the changelog itself, the problem is all those merge conflicts. And those merge conflicts are actually not really created by the changelog; they're actually created by the version, actually the release number, which annoyingly gets changed every time, in a different way. And so I figured the only way to actually move the changelog out of the spec file in a way that actually reduces
all those merge conflicts is to move the release number out with it; that's basically the only way to actually get rid of that. And so I was wondering: can this be done? So I wrote a small Python script that basically does this, and the idea is basically this: you need a couple of commands that you can put into the commit message. The default, of course, is: take this message, use the user, use the date, and create a changelog entry out of it. The next thing is: where do we actually get the version number from? What you need is to know when the version changed, and then you just count the release numbers up from that point. So the packager doesn't even have to do anything; you just commit, and it magically updates your release number. This has the great property that if you have a change, grab it from here and put it there, it will just increase the number, and you don't have to care that they're actually not the same, because they're not even in there. So you need a way, basically, to tell the script when the version changes, so it knows where to start counting. And I basically said: whenever the changelog message says "update to", we just grab that; that's one solution. You could, of course, also just parse the spec file; but then you're parsing a spec file which has parts missing that you're trying to generate, which probably works if you're nice to it, but I'm not quite sure yet what's the right thing to do there. Those are basically the two options; one is to leave the actual version in there and parse it out, and probably the epoch too, if you think about it a bit more. There are probably two more features that are needed. One is: ignore this message, we really don't want to see it, because it's just a rebuild, or it's something where we put the wrong thing in there. And then you need another thing that basically says: start generating the changelog from this point on into the future, so you can basically keep all the other stuff. The nice thing
is, if you're using %include, you can basically keep the changelog you already have, put an %include line on top, and so basically only use the regenerated changelog from that point on. So I had this working; it kind of looks like it could be a thing. There's one more problem, and the problem is history. Basically, yeah, someone at some point will fuck up something, and you don't want to go back and basically change all the history. So the question is what to do with that, and the only obvious solution is: well, we misuse git tags to basically carry the same type of commands, which can be applied to changes later on. We just put a tag on that says: well, ignore this message; or: well, actually we changed the version here. So it could even be done for older history that we want to convert, if we really want to, which we probably don't. So that's the idea. It's not really there yet; I mean, it has a couple of problems, obviously, because you end up with spec files which are no longer self-contained, which will probably break the build system everywhere; which I don't care about, because I'm an RPM developer and I just don't care. But that's one way it can be done, and it probably can be done without even needing new features in RPM right now. There may be some things where we can help; talk to me. That's basically it.

The next one is Marie's badges micro-brainstorming; she will tell you. I think we... I don't need to plug in.

Hi, I'm Marie Nordin, and I am the Fedora Badges design maintainer. So, really quick, I just want to have a discussion with you guys. I'm going to give you some prompts, and then we can talk and get some new ideas. What is something you do all the time for Fedora that you want to get a badge for? What is something you might be working on that you need more help or contributions for? What are some areas or projects of Fedora where you think we need more people working in general? Are you working on anything new that we can create badges for? And then any other general ideas are also welcome, but I
ask you to be as specific as possible, because I'm going to open the issues later, and I might not know as much about that topic as you do. So go ahead; I'm going to take notes. I thought we had a lot of tickets open for that, but it wasn't possible; is it possible now? Bugzilla, hooking it up with fedmsg? Well, okay, so anything that was previously marked as not possible for Bugzilla is now possible. Okay, so I will revisit some of those issues. Anyone else? Oh, it's okay, I will note that, but that is not my job. That's fair enough; I'm going to take a note. Go ahead. Okay, what about module builds? So, a series of badges for building packages for module builds; did I get that right? A series of badges, okay. Okay, so that's not specific, that's very general, okay. Dusty? Okay, the Rust group, the Go group, sure; do we have one for that? I think there's a ticket open regarding Silverblue, but I don't know if it's for membership or for being part of the group. I'm sorry, I didn't quite hear you, can you speak up? I'm just going to take a note; I'm going to have to look into that. So, converting a Python package from 2 to 3, okay. So you ran a release party, okay, that's a pretty good one. All right, Discourse? Yeah, so we have three new open tickets; we made them yesterday. All right, anything else? Dusty? Okay. No, but you could still open a ticket on Discourse. Oh man, that could get abused a little bit; I mean, it's a cool idea. Yeah, I'm just going to mark that one as "this is a bad idea". So where would we put that, like, how would that... okay, yeah, I like that: you're in the running for the XYZ badge, you need this many more. Anything else? I'm sensing that this is about to be over. All right, thanks.

Okay, and last but not least, Adam Šamalík. Yes: I'm using macOS to make Fedora better. Hey. Yeah, I know, that's a horrible title, and I apologize, and feel free to just shout boo. But yes, sometimes I use a Mac, and sometimes I'm just stuck with my Mac, and I want to work on Fedora, and I don't want to, for example, carry two laptops or whatever; so I just use it to contribute
and I'm also using, on my Lenovo laptop, Fedora Atomic Workstation, which is now called Silverblue, and I've noticed something very similar: on Silverblue I do everything in containers, and on my Mac I do everything in containers, and I somehow noticed that I don't care which one I use, because it's the same. I use the same commands, the same workflows, the same everything. And then I was wondering: we want to attract more contributors, and the first thing we want to say, we don't want to say, is "just reinstall your laptop". I feel like that's a bit of a high bar for new people. So if we can make it possible for people using Macs, because there are many developers on them, to contribute to Fedora, and maybe onboard them into packaging or whatever, because they can be contributing to Server or anything else, and then maybe switch them over: we can say, hey, if you switch, the containers will run natively, so it will be faster for you, and you can customize it; but if you don't want to, you don't have to. So we could, for example, as part of Silverblue, build a consistent experience for both; but that's just an idea we could maybe work with. And by the way, funny thing: if I'm doing graphical design in Inkscape, I always choose Fedora, which is ironic, because on the Mac it's so slow it's almost unusable, and Macs are usually seen as the graphical laptops. So no, Fedora is the go-to graphical laptop for me. And I think that's all. Any questions, or boos? If we build it really nicely, for example build containers based on Fedora, they might just choose Fedora, just for this, instead of the other distributions available. I have a question there. So, what I'm possibly proposing is kind of hand-wavy, but the project Silverblue is about building container experiences for developers using Fedora, so you do everything in containers; that's an OS. But can we build the exact same experience on macOS and, Matthew, don't hate me, call it Fedora? So it might be just... this was mostly
about the command-line tooling. I don't think you need Flatpak on the Mac, because, for example, when I use my editor, and that's VS Code by the way, I use just the native installation on both, or a Flatpak on Fedora and the native installation on the Mac, and that just works fine. And also, sometimes it's not about building new code, but just making sure the code that exists works. For example, I work on the Fedora documentation build pipeline, and we made sure, I made sure, that it builds on both Fedora and macOS, and that was just a little tweak to make it work on both; and now we can have people contributing from a much larger pool of users. So maybe I don't think it's a workflow; it's just the mindset of doing everything in containers. But yeah, I could start maybe a blog series or something; yeah, that's a good point, yeah, cool. Yeah, so if it was six minutes, just cut me off, because I don't want to hold you. Okay, we're good, cool. If we have a few people interested in this, then just, I don't know, work on it, think about it, and just make something happen; that would be great. Thank you.