All right, so now it's easy, right? We just need the project to exist somewhere. We need to tell people about it. We have to start making releases, maybe. We're probably going to get bug reports. I don't want them, but people are probably going to want to send them. And more importantly, users need a way to complain, because everyone does that. So this gets into the harder part of a software project, which is the community aspect. When you're making an open source project, yeah, you can just post code out there, but if you're not working to get a community on board, it might go unnoticed, and this does happen. So when you're working on this, you need to have an actual structure around the project. In this case it's software, so your typical directory layout and things like that; pick a license. I chose GitHub to host it just because we had other projects there and were established, so that helps with visibility. Also, start demonstrating the workflow you want contributors to follow. It's really easy, if you create a project on GitHub and you own it, to just push things to it directly. But try to get out of that habit. Instead, fork the repo, send yourself pull requests, review them, and kind of play by your own rules. It might seem silly, but it helps paint the picture of how you want collaboration to work. I set up Copr repos so that we could get automated builds, so all those tests would run when people sent things. So that was all the mechanics to get things happening, but I still needed people to actually start contributing. And this is where you have to go out and start talking about the project. Any time someone was talking about rpmdiff, or asking a question like "what is this rpminspect thing, I've never heard of it," it's easy to get frustrated when you're working on a project. You think, how can someone not know about this? I've been talking about it. Well, the fact of the matter is we all get a lot of email, there's a lot going on, and we can't be aware of every single project. So instead, you can view those as opportunities to explain what's going on and how what you're doing is trying to save people time and make their life easier. That's what I did for a couple of years: every time something would come up, I would say, well, here's what I'm doing here. Oftentimes people would be really receptive to that, thank you for explaining it, and then that would lead to them contributing either ideas or patches, and it just kind of grew from there. My manager did ask me to reach out to SUSE early on to see if they were interested in collaborating on rpminspect, since they use RPM. And that was a very short conversation; they were absolutely, 100% not interested. So I thought, okay, well, I tried. It was an odd conversation. I did try again about a year later, and they were still 100% not interested. So maybe that will change at some point in the future. But yeah, just being aware and communicating: you start to sound like you're repeating yourself, and you really are, because not everyone is going to see all of those conversations. You're going to respond in bug reports, you're going to respond on mailing lists, you're going to use events like this to communicate and give basically the same presentation over and over again. But that's how we learn about projects, and that's how you start getting people interested and participating and contributing. And it takes a lot of work.
I say it's harder than the code, and it really is. And like I said, eventually people become aware and they start sending pull requests. There were some surprises for me. The Java maintainers at Red Hat were really interested in improving things in rpminspect, and I started getting pull requests from them, which was really kind of cool. The annocheck and libabigail maintainers on the tools team were also really interested in, and receptive to, bug reports and things like that. They even turned annocheck into libannocheck and gave it a public API, which was really nice. So you don't know what's going to happen until you keep communicating. And going hand in hand with that is marketing. This is something I'm really bad at, and it ties into that community aspect. To me, this is really the hardest part. I know I could be doing a better job of marketing the software and making the project known to more people; I'm just really bad at that. I would rather stare at a core dump than go figure out how to make people aware of software. So this is always an area where I'm looking for help and for ideas on how to improve. Like I said, I monitor mailing lists and bug reports and just have those individual conversations. Those are easier for me, but other projects may find it easier to do presentations, YouTube recordings, something like that. I don't know, it really varies by project. Now, I've been working on this for a long time and it's largely in maintenance mode. It's stable, but there's still work to do. Documentation is number one on the list. There's been a request, this actually came up in 2019, to be able to extend rpminspect with Python plugins, which I think is really cool. I have a branch that I'm working on for that, but it's not required. And then there's just going through the other RFEs and things like that. So at this point I think it's going to evolve and kind of live in maintenance mode until something else comes along and replaces it. All right, so, summary. What did I do that I count as a win? I gave these little subtitles here because I thought I would get some laughs, but it was a tough crowd. The first win is that it's a library. I said it's a command line program, but I shifted all of the core functionality into a library. The reason is that I feel like down the road there might be a need, or a desire, to use that functionality with a different execution front end. By splitting it out into a library, I'm trying to prepare for that possibility. The second is the vendor data packages; this was separating the data from the code. There's rpminspect-data-fedora, rpminspect-data-centos, and rpminspect-data-redhat, and those packages contain the rules. When you run rpminspect, you run it according to a set of vendor rules. The biggest thing there is that all of that is out of the code, and more importantly, I don't have to own it. That's information that can be owned by the vendor and maintained separately from the program. We're doing that now for Red Hat, Fedora, and CentOS. I still kind of have a hand in maintaining those packages, but there are other people who also help with that. The last one here: I chose C as the implementation language. This is where I thought people would laugh, but there's a reason for it, which is that the core functionality relies on librpm, which is in C.
The development and debugging tools for C are also more mature than for other languages. While I would have loved to explore something like Rust or another language for this project, the fact of the matter is I needed it to be done fast and be reliable and stable, and I know C well and I don't know those languages. So I didn't want to tack on an "oh, I should also play around with another language" project. I get the advantage of being able to use all of these other libraries that exist in C, and I can also keep the program itself really small. All right, so, fails. Documentation: like everyone, I am saving the documentation until the project is done and stable, which means it will never be written. What I should have been doing is the same thing I was requiring for code commits, where I would also write the test cases that went along with the change. I should have been writing the documentation, the inline documentation, all along too. I made the mistake of saying, well, I might refactor the API a bit, so why would I write documentation for something that's just going to change? But you know what? Who cares. You wrote the code, you're going to change it; change the documentation too. YAML: this is the thing I hate the most. I really wish I hadn't chosen YAML; it's a terrible config format. Initially I was actually using INI-style files, but there's no standard for those and they're not really flexible. I'm kind of stuck with YAML now, honestly; it's established, it's in dist-git with the rpminspect YAML files, so I do regret that. And then I also had this idea of profiles in the config, which really is just too complicated for people to understand. It was one of those "oh, and I could also do this" moments, so it's there, but no one really uses it, and it's just unnecessary code. I did mention the INI-style config files, and yeah, I changed that to YAML, and yeah, that was the only thing, nothing else was wrong. Okay, so, things that I still need to work on. Command line option handling: if you're going to make a command line program, try to put some thought into the option handling; don't just keep adding options. It's easy for that to get out of hand. Look at tar, for instance, or GNU ls now, where it's nearly impossible to remember all the options. I could have better debugging output and logging. And the -k option: rpminspect will go and fetch builds out of Koji, all the RPMs, and the -k option was to keep those builds, but the way that's implemented is kind of backwards from what a user would expect. To change it now, though, I need to do some better marketing and communication before I just drop that change on everyone. The fix itself is easy, but I need to think about the communication around it, so that's why I haven't done it yet. Successful collaboration: so, yeah, annocheck. annocheck is a command line tool from the annobin project, if you've used it at all, and it's actually really kind of cool, but it was just a command line tool; Nick had not implemented a library. I said it would be really useful if I had this as a library, and he said, oh, I can do that real quick, that's super easy. So over a weekend he turned it into a library, but it needed a lot of work. The API was a bit haphazard and lacking, and I said, you don't have this, you don't have this, you redefined zero here, and all of that. It was kind of funny, but he was really receptive to the feedback. So now annocheck has a library, which is really kind of cool.
libabigail does have a library, but it's written in C++, so there are problems there, but it was nice that this collaboration on annocheck was well received. People also started submitting lots of GitHub Actions for CI jobs. So if you go and look at the CI jobs for rpminspect, they cover many distributions: all the current Fedoras, CentOS, CentOS Stream, RHEL is in there, Debian, Ubuntu, openSUSE, Alpine, Arch, Gentoo, AlmaLinux, and I added FreeBSD just for fun; I was like, can we do that? I tried NetBSD and failed at that, and macOS I haven't gotten working. But people saw that it was possible to start doing that and have been contributing jobs for it, so more coverage is always welcome. Also, a project should be fun. I had one developer submit a pull request that really fixed up the parser, refactored it a whole lot, because I hated the YAML parsing code that I had. And he extended it to support JSON and DSON; if you're familiar with DogeScript, DSON is the JSON equivalent of that. So rpminspect does support that, and we can get config files that look like this now, which is really kind of fun. And this does work; this one is from grub2, by the way. I don't know, I just think it's fun. Someone's going to come along and read this and think, what is going on? But this is the kind of fun stuff that can come out of contributions when you communicate the project out. So I kind of ran through everything really quickly because I wanted to leave time for questions, because questions are always fun. So, does anyone have any questions? Please have some questions. You mentioned various distros; are they actually using rpminspect? I'm sorry, who? You said all these CI jobs for different distros, but are they actually using rpminspect for some of their processes? No, those projects, to my knowledge, are not using rpminspect. They were just added to see if we could run rpminspect in those environments and whether it would pass the test suite. To my knowledge, no one other than Fedora, CentOS, and RHEL is using rpminspect right now. As someone who had to use rpmdiff in the past, the one thing rpmdiff gave you was the interactive ability to react to the results, which is missing in rpminspect because rpminspect is basically a batch run. Do you foresee any improvements there, other than basically disabling individual checks? When you talk about the rpmdiff reactions, do you mean the ability to waive a result? Right, yeah, so, good question. The intent with rpminspect is to handle the waive actions that you would do in rpmdiff through a local project configuration file. So yes, in this example here, you can see entire inspections are turned off because they don't really apply to grub2. But in an instance where the control given to you in the config file is insufficient, I want to treat that as a bug and say, okay, we need more fine-grained control for the rules in rpminspect config files. Now, in some cases that waive action actually needs to be handled through the vendor data package with a global rule, and that's something that isn't totally obvious; I want to improve how that, at least, is documented and communicated. But the intent with rpminspect is that you see the result, you modify the config file, and then it runs again.
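Since the slide with the actual file isn't reproduced here, the following is a minimal sketch of what a local rpminspect YAML configuration along those lines could look like. The inspection names and settings are illustrative assumptions, not the real grub2 file; the exact schema is defined by rpminspect and its vendor data packages.

    # Hypothetical local rpminspect config shipped alongside a package.
    # Each entry turns a whole inspection on or off for this package;
    # the inspection names below are examples only.
    inspections:
        javabytecode: off   # no Java content in this package
        kmod: off           # not a kernel module package
        upstream: on        # keep checking upstream source changes

Anything that can't be expressed at this level would, as described above, either be filed as a bug asking for finer-grained control or end up as a global rule in the vendor data package.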
There's no plan right now to implement a reaction interface type of thing where you waive it that way; that hasn't been brought up. Yeah, perfect. Well, thank you very much, we appreciate it. Thank you, everyone, thanks for coming to my talk, and I hope to see you around. The next talks are available and starting right now, so if you are switching rooms or going anywhere... but we'll get started here in just a few moments for those of you who are here, or continuing to be here, for the "We Build on the Success of Fedora" track. Up next we are going to do a round table discussion of packaging issues for modern language ecosystems, and I'm happy to present Jens Petersen, who is an engineering manager at Red Hat. So I'll hand it over to you. Thank you. Yeah, welcome. So actually the idea was that this should be more like a round table kind of discussion; I guess we'll have to see how that works out. This is an interesting talk in the sense that it depends a lot on you what comes out of it. I don't have all the questions or all the answers or anything; I just prepared a few slides to set some context, so I hope that helps. Also, the title is probably a little bit off. The scope isn't just packaging issues; it's more about how the SIGs operate, and workflows and processes, and, well, packaging issues too. Right, so I'll use this. I just wrote down these very rough numbers; you should take them with a grain of salt. I just generated these with the Pagure tooling, so they're just rough sizes of some of the different language ecosystems packaged in Fedora, and they're probably a bit off. I suspect there are probably more Python packages than Perl, but some of the Python packages are a bit inconsistently named. Anyway, you can see that at the top are Perl and Python, then there's Rust and Golang, and then there's some PHP and Haskell and Ruby. And this is fine; I guess the thing is, all these numbers look very big, but some of these ecosystems are absolutely huge, so in a sense we're only capturing a very small fraction. Obviously we can't package every single little package in Fedora, but there's a pretty big gap there in some sense. Maybe this is good enough; at least it's a good starting point. I think we'll come back to this issue a little bit later, because... yeah, this time, I don't know how to use this. Okay, yeah, okay. But you can also notice that there are some pretty glaring omissions: for example, JavaScript is not there. I think we know that a lot of the Node packages disappeared from Fedora a few years back, for better or worse. Java is another very big ecosystem which is almost completely absent; obviously we have Java in Fedora, but not much is really packaged. But I'm a bit curious about who's here. Is anyone particularly involved in any of the language SIGs, to a lesser or greater degree, or not really? I'm fairly involved in the Golang SIG. Okay, great. I am involved in the Rust SIG, but I used to be more involved in the past; not so much in the last months, but a bit. Cool, great, thanks. Anyone else?
I'm not involved in any of the SIGs, at least the language SIGs, but I do maintain a number of packages written in different languages, in particular in JavaScript, because I maintain a couple of Firefox extensions. And, well, I don't have the time, but I'd love to have a best-practices document for packaging JavaScript like we do for Python and Golang and so on. So somebody help me, I guess, is what I wanted to say. I'll just echo that. I'm not involved in any of the language SIGs, but we have an infra SIG for infrastructure packages, and I maintain, I don't know, 250 packages or something like that. So any of the kinds of solutions or broad tools for languages could very well apply to that sort of thing, and that's what I'm hoping for: just better ways to maintain stuff or handle it. Absolutely, yeah. Since everyone is present, please. I'm here since I recently tried to package my first Golang package and realized the complexities; it got sidetracked, so it's on hold, but I'll get back to it. Yeah, and I'm pretty involved in the Haskell SIG; well, it's a very small SIG, it's mostly me and one or two other people. All right, let's keep moving, because I wanted to get more into the discussion. So I just noted down a few of the current changes for Fedora 39. There are probably more things happening, but these are the changes I could see, like Perl and Python. I think we all saw the big Python 3.12 rebuild, which I guess went reasonably well, from my distant perspective. And there's this change to remove all the Golang leaf packages; I don't know, does anyone know anything about that? And I have a change for Haskell, which is getting ready. I just put this slide in because I thought it was kind of interesting to think about the priorities of SIGs and how they fit in with the Fedora foundations. I mean, one thing I really need in the Haskell SIG is more people, so, Friends, and I've struggled with this over time. We have a real lack of manpower, so it's really hard even to get package reviews done. Maybe some of the new ideas, like the review swaps on Discourse, may help. Then there's First: I think Fedora was one of the first distros to adopt Python 3.12, for example. And Freedom, maybe tied in with licensing. All right, does anyone have any comments on this? I don't know. Oh yeah. I was working on compatibility with Python 3.12, and for three different patches I had replies from upstream along the lines of: where did you even get Python 3.12 working with NumPy? NumPy doesn't work with 3.12 yet. Yeah. I mean, this is a little interlude; I wanted to probe people about what kinds of problems they're seeing, what kinds of pain points, things to think about. So, from the Go perspective, I see a very big process issue, or tooling issue, whatever you want to call it. The problem is that we have roughly 2,000 packages for about 200 applications that we really care about, and then 1,800 packages that exist just because of how RPM works and how we decided to package Go in Fedora. Which means that if we could change that process, or change the tooling to allow a different process, we could immediately have one tenth of the packages and therefore much more manpower per package. So in our case, I think it's more a process or tooling issue than a manpower issue.
So one way to approach this would be to automate the management of those packages, right? Keep the packages as they are and build automation on top of them. Does this sound feasible? Well, we do have go2rpm, which really helps in creating RPM packages. There are a couple of issues, though. The first one is that even with go2rpm we don't get a perfect spec file. It's a very good starting point, don't get me wrong, but it's not perfect, so it's not something you can completely automate. You should still do the reviews; you should first reread your spec file, fix any issues, submit it for review, go through the review, and everything else. The other big problem is that Go and Rust basically create static binaries. So for roughly 90% of the packages we only use the sources, not any intermediate artifacts, which means that if we change one source package, one of those library packages, it has no real effect on the binaries that have already been built. So we should keep re-kicking builds and builds and builds, which, let's be frank, we don't do, and that means a lot of the time we have bugs that should be fixed but are not fixed in our binaries, because we have not done the rebuild. The big problem is that we have some packages with 2,000 packages depending on them, and if there's a new version of that core library and we retrigger 2,000 packages every few days, I don't think Koji would be happy with that. Aside from that, there's also a discoverability issue and that kind of thing, as well as conflicting library versions. A lot of the time we end up having multiple source packages for the same library, for different versions, just because the applications we care about, which are the only things we really care about, depend on different versions. Now, we can work around all of this, and we have done so for the last, I don't know, five or six years, but I don't think it's a very good way of handling this kind of thing. And I think that if we are able to change the processes and the tools, we can be much more efficient at doing this. Yeah, that's interesting. Maybe I could add, and this is probably a bit special, but in Haskell we have kind of a good situation where we actually have an upstream source distribution called Stackage. So what I'm doing in Fedora is basically just pulling down packages from Stackage, so we only ship one version of a library, basically, and all those packages are supposed to be compatible with each other. Well, there are some exceptions; there are some packages in Fedora which are not in Stackage, but largely that works fairly well. But I don't know of any other language ecosystems which have such a distribution. Maybe there are some. LaTeX, I mean, it's not a language, but they do have a big distribution, and we can repackage the whole thing. So yeah, but I think the interesting part is that some of those languages have certain kinds of issues and others have very different kinds of issues. For instance, Python has issues because obviously there are a billion Python packages, but the good part for them is that since everything is compiled at runtime, or at least interpreted, the current model kind of works for Python; maybe better tooling can help with a bunch of things.
Obviously they do have problems with supporting multiple versions of Python and other things like that, but languages like Go and Rust have very specific issues due to their static nature, while the others have different kinds of issues. Right, you mentioned the need to rebuild, like you need to rebuild your binaries to get updated library fixes in. Yeah, so the issue is that in Go, for instance, let's say you have a binary that depends on 10 libraries. It will be statically compiled, so basically you have one binary that at runtime depends on zero libraries, except libc and a couple of very basic libraries. That means that even though those libraries are split into different RPM packages, because we package every library in a separate RPM package, in reality, in the binary RPMs, we only have one binary file in the leaf package. All the other packages are there just to make Koji, the builder, happy; they are not there for the user. Right, but yeah. But I mean, you can go to the extreme like Rust has done in Fedora, where they only ship sources: all the libraries are only available as sources, and then you can build something using those sources. Maybe it's pragmatic, I guess. I don't know, I can't say I like it, but my dream is that users should actually be using these packages. Maybe it's an unrealistic dream these days, I don't know. Yeah. Just to comment on that: for users, those packages are completely useless; they are only useful for building other packages. So, I mean, it's a complex problem, but I wanted to make a comment earlier about what you mentioned, that the initially generated spec file needs adjustments. For Rust, I think we're very close to having 99% of packages generated either as-is, or generated in a way where the changes made by the packager after the fact can be propagated, either as an explicit patch or as metadata that gets applied when the generation happens. So it's like you apply some switches when generating the spec file, those switches are saved to a config file, that config is in dist-git, and when you regenerate the spec file from scratch you don't have to do anything new. And I think this is a good model because it allows automation to happen. I can imagine a situation where, if this is automated and can happen fully automatically, we could for example have pull requests in dist-git, in Pagure, that do the whole thing, and then it's a small step to automate, or to allow anybody, to do rebuilds in some fashion that doesn't require provenpackager privileges. Because I think that part of the problem is that doing stuff in Fedora in those ecosystems, if you're not a provenpackager, is just impossible. In other ecosystems it's okay, but there you need write access to a hundred packages at any given time. And we could solve this; I'm not sure if this is the solution, but we could adjust our permission models to allow it. So the way we solved the permission problem was by creating the Go SIG, and we also proposed, and it got accepted, a rule that every golang package has to have the Go SIG as a committer. So we are working around this, but my frustration is that, as you were saying, all those source packages have zero value for the users.
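As a rough illustration of the static-linking point being made here (the binary and package names are made up for the example, not taken from the discussion), a Go application binary carries its Go dependencies inside itself, while the corresponding library packages only ship sources that matter at build time:

    # At runtime the Go binary needs only the very basic system libraries:
    $ ldd /usr/bin/some-go-app
            linux-vdso.so.1
            libc.so.6 => /lib64/libc.so.6
            ...

    # The golang-*-devel library packages contain only source files under
    # /usr/share/gocode; they matter to Koji at build time, not to users:
    $ rpm -ql golang-github-example-somelib-devel
    /usr/share/gocode/src/github.com/example/somelib/...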
Between Rust and Go we are shipping, I don't know, three or four thousand packages that have zero value for the user. We are just cluttering the repos, the metadata of the repos, everything else, just because we want to apply a process that does not fit these kinds of things. Or take the permissions: the Go SIG having commit access on everything is basically provenpackager, because if you have access to, I don't know, 10% of the whole repository, that's basically provenpackager level. And that, by the way, is granted by just being a packager and adding a comment to a ticket; you get access to a couple of thousand packages straight away, which is not ideal either. But all of these are workarounds, because the system does not fit those packages. That's why I'm saying that I think we should try to think about a different process for those languages, because at the moment it's Go and Rust, but I foresee many other languages having very similar issues in the future. Because I think that if we decide on ways, or have tools, to introspect packages after the fact, in a way where we don't have to have all those source packages but only the leaf ones, then everything becomes easier. And another issue that I see is that, let's say a new contributor wants to package whatever interesting tool they are using; they might discover that they need to package 50 packages and go through 50 reviews, which then becomes a huge burden on the reviewers. Obviously it's much easier to review Go things than maybe other kinds of packages, but it's still a lot of work, just because we want to apply a process that does not fit. So I think we should really think through the process and see if we can just ship binaries with vendored dependencies, basically, because I think that would be the optimal situation, and then have tools to be able to discover those vendored libraries and then do, for instance, rebuilds and that kind of thing based on that metadata. So I think that in the case of the 50 dependencies you mentioned, there are two parts to the review of the dependencies, right? One is the mechanistic packaging of the dependencies so that they get dropped into the buildroot and you can then use them, and I think that's the most visible part. But there is also the review of licensing and, I don't know, just a general review of the stuff, and that second part is actually useful and I think we want to keep it. The first part is just a technical detail that we could get rid of. So I think the question needs to be: how can we keep the quality control over the dependencies that we have right now, without this extra process that is complicating life for people? And I think we shouldn't concentrate on the packaging part, because with automation this could be simplified quite a bit. Like, I can imagine a script where you're in the Go SIG, you do some invocation, and you get 50 different packages in a form that you can review and push at once; we could make that happen. I think the important thing to figure out is how we deal with the licensing issues and the introspection of the dependencies if we change the process. So, at least in Go, determining and understanding the license is deterministic, in the sense that, for instance, the Go documentation site gives you the license of every package you look up the documentation for.
So effectively there are ways to extract this kind of information, and I totally agree with your point that there is value in the process, but I think we can also get that value outside the process. So for instance, let's say we add one step to the CI/CD pipeline of golang packages, a step that checks all the dependencies, all the libraries that get vendored in, and if those are within a certain list of acceptable licenses then it gets shipped, otherwise it gets blocked. I'm thinking of something like this. And I guess that in Rust you also have ways to discover the license, because I guess rust2rpm, or whatever it's called, does the same kind of thing. So you could have a different step from the golang one, because obviously you would have a different way of discovering the license, but still use the same idea behind it, with different steps for different languages. Yeah, but, I only partly agree with that. Purely automating the license checking is a bit tricky. I mean, I agree that for many purposes it would probably work, but there are often cases where there are mistakes in packages, like the wrong license tag has been put in a package, or things like that, so it's on a little bit of thin ice, I think, if it's completely automated. But it might be something that could be explored. Could you disable the screen lock? Yeah. Also, I just wanted to add a couple of things. I mean, I agree that in some sense all these library packages are kind of useless, in the sense that users don't care about them, but that still makes me sad in a way, because I feel like, as a distro, we should be providing binaries. There's actually a lot of waste, I mean in terms of global warming and so on; there's so much wastage in rebuilding and rebuilding and rebuilding binaries. There are things like Nix and so on, and Cachix, where there are caches of binaries. So I don't know, I feel ideally we should actually be making those binaries useful so users would use them. That would be the ideal; maybe it's ambitious or unrealistic, I don't know, but that would be my desire: to actually have meaningful binaries users could use. Like, if I build something in Haskell using the libraries which are packaged in Fedora, I can build it really fast, whereas if I build it with the upstream tools or whatever, then it takes much longer. So, yeah, you look puzzled. Why doesn't it apply?
It does not apply to Go, at least, and I believe Rust is the same way. Due to how the Go compiler works, it will always try to compile from sources; you cannot pre-ship pre-built artifacts that will be reused. It will always start from all the sources of your application and all the dependencies, and dependencies of dependencies, and so forth, because in doing one big-bang compilation it does optimization across code paths and things like that, and excludes all the parts of the libraries that will not be hit by your application. So effectively, due to how the compiler works, what you are describing does not apply to Go. Now, we can argue about the design of Go and the compiler itself, but that is how the language works, so either we fork the Go compiler, which I don't think we want to do, or we accept that Go does not work that way. I don't know, I'm not so experienced with Go or Rust, but even Rust doesn't cache builds locally, or... No, no, it's exactly like this: you have sources and you build from scratch, doing optimization of the whole thing at once, essentially compile time and link time optimization. Because in Haskell there are two tools, Cabal and Stack, and both of them basically keep a build cache, so if you build some library and then you build it again, it will use the same binaries to link two separate packages; I mean, it can do so. Anyway, let me see if I can move on. Well, this is kind of what we've already discussed in some sense, but there are various issues about... okay, this was actually a different slide, but most users tend to use upstream; they might use the upstream binaries or toolchain, even for Rust, Cargo, and so on. I kind of see this as well, and from a distro point of view it seems a bit problematic. I mean, it's sort of the next logical step: if you're resigned to not providing binaries, then why even bother using the distro compiler? So I don't know, I still feel it's a slippery slope in some sense. But we want to have some things packaged in Fedora, so maybe we have to do the minimal work, or, well, streamlining: I agree with you completely about streamlining the processes, and if we could even get it down to more or less just a license check or something like that, that would be great, because at the moment it's still quite a big hurdle to get new packages in. Any thoughts on this?
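To make the "just a license check" idea a bit more concrete, here is a rough sketch of what such a gate could look like as a CI job for a Go package. This is a hypothetical GitHub Actions-style workflow, not an existing Fedora pipeline; the choice of google/go-licenses, its CSV output handling, and the allowlist are all assumptions made for illustration.

    # Hypothetical CI gate: fail if any Go dependency carries a license
    # outside an agreed allowlist. Tool and allowlist are illustrative.
    name: license-gate
    on: [pull_request]
    jobs:
      check-licenses:
        runs-on: ubuntu-latest
        steps:
          - uses: actions/checkout@v4
          - run: go install github.com/google/go-licenses@latest
          - run: |
              # go-licenses csv emits lines of: module,license-URL,license-name
              "$(go env GOPATH)/bin/go-licenses" csv ./... > licenses.csv
              # Print and reject anything whose license is not allowlisted
              if grep -vE ',(MIT|Apache-2\.0|BSD-2-Clause|BSD-3-Clause)$' licenses.csv; then
                echo "dependency with a non-allowlisted license found"
                exit 1
              fi

The same shape could work for Rust with a different discovery step, as suggested above, since the point is the per-language check, not the specific tool.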
So, I mean, if users really prefer the upstream binaries, then that is probably because they are better for the users. And I think that if we are providing binaries which we think are good, but which actually, I don't know, for example don't have certain features enabled because we haven't packaged some dependency, then it doesn't benefit anybody; the users are getting a worse experience if they use the package. I mean, for me, when I can use a package it's great, because first of all I have a reliable delivery method, second I have a reliable cleanup method, and I get updates. It makes sense to do packages when the packages are at least as good as the upstream stuff. And in particular for us, if we do the whole process correctly, the code delivered by the distribution is going to be exactly the same as the upstream one, right, because it's the same compiler and the same sources. It's a bit different in traditional ecosystems, where you have compilation flags and link flags and maybe some patches and a different version of the compiler, and all of that means that by the time you get to an end-user program that links to 200 packages, the way you build each of those packages matters, and the result can be quite different. Here, in particular for us, you just end up with something that is maybe not binary identical, but functionally should be exactly the same. But I'm not sure I was clear; I don't think I wrote it very clearly. What I meant is that people are using the upstream toolchain, not the end packages. Yeah, but so, you're using the upstream toolchain, and, let's say you are a user and you get a new Fedora, and then, okay, now I want to use this program, I have to install Cargo and do a cargo build, and then a week later I have to remember to update it. That's a terrible user experience, right? The good user experience is that every few days you click update and everything updates, and you want to have the same thing here. For me, this is the value provided by the distribution, and I think that's what we should try to deliver: you make stuff compiled in the way that upstream would compile it, or maybe slightly better, just nicely delivered as packages so that you get the automatic updates. We should not miss the distinction between Go and Rust and, for example, other ecosystems like Python, where people do a root pip install of something and break dnf; that's a prime example of where it goes wrong, because you actually use the dependencies on the system and can break one program because you wanted to install another one, which is not the case when you're statically linking. So there are different aspects to this particular problem; that's my point. But I think it's actually a good example, because we had this issue that users were doing pip install and breaking their systems, and we actually fixed it at the root, right, because now pip install does not break the system. We changed the way that we do things so that it's nicer for the users to use the upstream packaging if they want to, and I think we need to do the same in other cases. Yeah, I totally agree. And if we pick Go, for instance, the Fedora 39 change where we dropped a bunch of leaf packages: those were the packages tracked as golang leaves, basically all the packages that were source-only and were not strictly required to compile our binaries. Basically, we are saying we don't care about the users, because the users will not care about all those packages. The reason why we have 2,000 golang packages within Fedora is just to have 200 binaries; that's the only thing we care about. We care about Kubernetes, we care about etcd, gopass; all the others, golang-google-x-sys or whatever, we do not care about. Because the reality is that for Go, and Rust is the same, these new languages are designed in a very different way than C and the others were. There it was basically: oh, we now have a compiler, we now have a standard library, we now have stuff, and the ownership of putting everything together is given to the user. In that world, distributions were great, because they solved that issue for the user; we were able, as a distribution, to help the user. In these new tools, the compiler also downloads all the dependencies automatically and compiles them for you, so the distribution has no space there, right? So we either work with them and change the way they work, or we adapt and accept the fact that users will not care about those packages. What you said is of course true, but I would not agree that we shouldn't care about all the library packages that are dependencies for the few hundred packages that we actually care about, because that's what people use. I still think there is some added value that the distribution can give. For example, you know, with automation this becomes tricky; there should still be some gating. Yeah, that's one of the things, we've got gating, so any updates that break other stuff should get caught. Okay, but, you know, when there is an upstream update, and actually I'm not that familiar with the Golang or Rust ecosystems, but I know you can pin dependencies to a particular version; does that always happen, or can you just say I depend on version 1.5 up to whatever, but not newer than 2.0, for example? Right, yeah, you can specify them exactly, but also with a range. Yeah, sorry. So, in Go you have the specific version pinned, like 1.7.4-1, and then we do a little bit of trickery to make it work with slightly different versions, otherwise everything would break. But the way it works upstream, outside Fedora, it would be with very specific versions pinned. So what we're doing in Fedora is actually a bit different from what upstream is doing. But actually, upstream benefits from what we are doing, I think, because they know that if we encounter problems, they know that things will break when they do an update, right, so they need to react. I know you want to reply, but another benefit is that... yeah, okay, I'll let you off, thank you. So, yes and no. First, because of how the compilation works upstream: let's say Hugo, for instance. Hugo is written in Go, and they deliver a binary. Upstream will only support issues on their binary, and if you go there and say, oh, I have this specific issue, they'll say, well, that does not apply to the binary with the right version of that library; if you are on the wrong library, that's your problem. And they have all their CI/CD for exactly those very tight versions, and we are trying to loosen a system that is very tight upstream, where everything works; we loosen it, we break stuff, and now it's our problem. And the second issue is that often we don't even have libraries at a higher version than upstream. At least in Go, a lot of upstreams simply update every single library they have once a week, so they are way more bleeding edge than we are; we are just lagging behind. We are doing a huge amount of work for, I would argue, very limited benefit. If it were zero cost, okay, whatever, who cares; but since it has an impact, it has a cost, it costs tons of hours from contributors who then get demotivated by this. Do we really want to have this? Okay, so you're obviously working with different upstreams than I am, because, well, I maintain a very limited set of golang packages, but when I get notifications from release-monitoring.org and I check what the changes were, I don't always see the dependencies updated; they're usually pinned to whatever version they were at when they were added, for a very long time. So I think that's probably the disconnect between what you're seeing and what I am seeing, and what I think the Fedora value is here. So, yeah, maybe I'll continue a bit, and we can see; there are only 10 minutes left and there are a few other topics I wanted to cover. I don't know how well it fits in with our current discussion, but one is about packaging workflow. We talked a bit about the high barrier to entry for packages, so I'm wondering, what would it take to really streamline this process of introducing new dependencies? It would require some significant changes to the package review process, or maybe it should be done in a SIG-specific way, because I'm not sure we can open up the floodgates to any package just coming into Fedora on the basis of license alone, so maybe SIGs would have to be involved in some kind of process around that. I think that surely we can delegate stuff to SIGs; that could be an idea, though there are many SIGs that will have the same issues. Personally, I think the way that would work best is if the Fedora project says, look, we have analyzed multiple options and we have seen that there are, let's say, two, three, four, five, whatever number of possible models that can apply, and a SIG can choose which one of those flows best fits its model, so that we don't have 20 different SIGs all doing different or slightly different things, but we still have a bit more freedom from a SIG perspective. Another thing that I think we should really fix is that SIGs should be able to be the owner of a package, not only individual contributors. So I think that essentially we are using dist-git as a list of allowed dependencies for packages that are compiled from source including all their dependencies, right? That's how it works for us in Golang, and I think we could switch to a model where the same information is kept in a different way. It would require a discussion of how to do it, but essentially I can imagine some model where we have a list, and we don't actually package the dependencies; we just say, okay, you have the dependency foo, and we allow, I don't know, either all versions of foo, or versions of foo between this and that, and at compilation time the package that says it wants foo at a specific version or in a specific range gets some version of the dependency delivered.
Then the compilation happens in exactly the same way it happens right now, because you get some version of the dependency delivered and you compile against that. If we do this through some mechanism other than dist-git, then I think many things become simpler. In particular, pruning obsolete packages could mean that we don't prune anything; things just stop being used and they don't bother anybody. That's one thing, and this also solves the problem of different packages requiring slightly, or not even slightly but majorly, different versions of the dependencies. I mean, it would really simplify the life of those ecosystems if you could just use what upstream says by default, and maybe allow overriding it. Very good ideas, yeah, I like where you are going with this. I mean, if we really could have something like this in the future, that would be pretty exciting, and it would open up a lot of possibilities, I think. All right, I think we're running a bit short on time, but another topic I wanted to touch on, and I'm not sure it's a good topic, is RPM macros. It's pretty hard to change RPM now because it's so ubiquitous in our operating system, but I also feel that the RPM macro language is pretty awful in many ways. I guess it's sort of a worse-is-better kind of thing. I kind of wish it was a more modern, declarative language, but I guess I'm dreaming, and maybe I don't really need this to move forward; I think what we were just talking about is probably the biggest problem that needs to be solved. But there are also things like tooling and automation; I was sort of hoping we could have some knowledge sharing about different tooling and automation around packaging, but we're also running short on time, so I don't know, does anyone have anything they want to add? I know Golang is using dynamic build requires, which is interesting. Yes, we are using both spec generators and dynamic build requires, but I really feel like we are patching something just to make it work. Dynamic build requires, I'm not saying they invalidate everything, because they don't, but they're such a workaround around a process that is very static. The RPM process is very static by nature, and to make it kind of workable we put dynamic stuff into it so that it becomes kind of acceptable, and, yes, it works, but we have changed the nature of that process itself. So at this point I think we should really think about what was proposed, like different discrete builds or that kind of thing, so that basically only the part we really care about flows through. Yeah, it does feel like a bit of a hack, I don't know. But yeah, I just managed to share my personal gripes here. The other thing I noticed is the misalignment between upstream and distro; I guess we've sort of talked about it a bit, but for example I think Haskell is surprisingly well matched, maybe because some distro packaging people were involved in the packaging system design originally, so it maps pretty well, whereas for some other languages it seems more tricky. Maybe Python is almost the worst in some ways, I don't know, I'm not sure. But anyway, we should probably start wrapping up. I had a few other notes here, but I think someone brought up this idea about cascading rebuilds, like automatic rebuilding, which I think is what Nix does, more or less. Another thing is that I'm seeing a lot of new languages which almost can't be packaged because they use such weird packaging; it's a real problem. It seems like a lot of new projects don't really have this idea of being packaged into a distro; it's an afterthought or something, which makes it a bit sad too. One other thing that's interesting, I think, is cross-distro collaboration. In Haskell we actually had some collaboration with openSUSE, which has been useful. We used to share a bit more tooling; now they've slightly diverged, and they're actually more bleeding edge. So I think that's most of what I was going to cover, but does anyone have any last ideas or thoughts or other things that we should think about in the future? So I think that, I mean, we didn't touch on this at all, but I think we need to reinvigorate the packaging guidelines and packager documentation on the wiki and in the docs, because some parts are being updated regularly, but many parts are just full of obsolete stuff. You had fbrnch on one of the previous slides, and if you're a new packager, how would you find out about those tools, right? Yeah, I'm really good about publicizing it. You know, I'm not sure why this has happened, but we should really put work into updating the docs to just have the current stuff, and get rid of the old stuff, or put it off to the side somewhere where it doesn't confuse new packagers. Hello there, can I just add something on that? A good way to get engagement in the community on that: if you can spot problems that need to be updated, can you create a ticket and mark it as a good first bug or something? Because it might encourage people who don't know anything about this to jump in and try to fix it. Well, I mean, you don't need to do that; you open any page and you start reading it, and you see, okay, this isn't formatted correctly, this is obsolete. If you do packaging, on essentially any page you will find stuff like that. I could open, I don't know, a hundred tickets if I wanted to; I don't think it would make sense, I would just overwhelm the pipeline. Also, I do believe that changes to the RPM packaging guidelines actually have to be approved by FESCo, so I'm not entirely sure that would be a good first change for someone. I mean, if it's documentation where it's easy to get your change merged, then okay, but if it's stuff that has to go through FESCo, then I have to wait on the FPC... okay, but still. All right, well, thank you very much, it was a good discussion. Enjoyed the session, thanks for coming, and we'll break for lunch. If you want to join, there's a lunch room just down the hall, so go right out and to the left, and the talks resume, I believe, at 1:30. Thank you all very much. Okay, I'm David Duncan, and I work with a lot of other people on the Cloud distribution, or edition. The focus of the Cloud edition has been one of those things that in the past was kind of slipping behind, because we had Atomic, I have the jacket over here, we were Atomic for a long time, and then we thought that OSTree would just really kind of replace everything, and I think our five-year plan includes the immutable OS as a big part of that cloud initiative.
that cloud initiative obviously that's the foundation for open shift there's no way to get around that that said i talk i work for amazon if you didn't know and my role is to talk to customers who are using partner linux and linux based solutions on top of cloud cloud architecture and so i hear about what's going on what their what their pain points are what the things are that they they want to have at their at their fingertips beyond just the their modernization models right though a lot of them have solutions like sap that run for you know upwards of 30 years and their their expectation is that they'll have a consistent experience for the duration of that time modernization doesn't really we don't we don't talk about modernization in terms of decades right not at this point we talk about servers as a as a you know in terms of decades and we're just you know we're just starting to talk about us our our cloud solutions in ways that operate predictably and so i i pride myself and and i pride the work that my team does on building solid support for for linux and linux based configurations on top of the cloud right um or as peter would say maybe somebody else's computer the the the fun thing about that is that i have learned a lot of the paradigms and learning a lot of those paradigms made it just a made me a ready fit for the fedora experience and the cloud as it was and the cloud team as dusty maib was turning his focus towards fedora coro s after the acquisition of coro s obviously all bets were off in terms of atomic atomic was was retired and and that cloud experience started to accelerate around the coro s experience and i have a great relationship with coro s i love i love to work with the team and i also enjoy running it for specific solutions one of my favorites to use coro s4 is running agent-based step functions on on in cloud environments and that means you can create an environment that is basically throw away you know just arrives for minutes produces the artifact and then and then is destroyed as a result of some sort of state function very much enjoy that but today i'm here to tell you that we put a lot of time and effort into reigniting cloud as as an addition and as a as a community this was a really um unifying experience it also brought us to a place where we recognized that there were some different goals around cloud than there ever had been before one of the things that we talked about a lot was um was the the concepts around what is a cloud image right if it's not just exactly the same thing as as a as a serve you know as the server image or maybe the work station image then what really is it and some of those questions are answered in the way that we leverage the the cloud configuration or the the images themselves because they become extremely versatile we don't just create a raw disk file we create a solution that is kind of a minimalized version where we know that we're using utilities that are consistent with the expectations that customers have or users have really um users have um in their uh just in in the breadth of their experience right so it doesn't boot differently than other you know than on one uh one um platform as it does another the expectations are you know we try to maintain those as consistent now um we are also trying to work with with uh to increase the documentation and that's another thing that you know i peers in the room and i'm kind of excited about it because we because like i have all these initiatives that that probably land in things 
that he's governing at this point around cloud. And one of the things we desperately need is help with our documentation, and integrating with the team to do better documentation. I know, you are. I had someone who committed to it, but he is a volunteer, so we work on his time, as he's available, and give him as much help as we possibly can. Excellent. I'm grateful. That also puts us in touch with the websites group and infrastructure on a fairly frequent basis, and it gives us a lot of initiatives around the cloud providers.

For me, the goal with Fedora is to extend that functionality across any provider that is willing to work with us, and right now we're putting a lot of time and effort into focusing on Azure and ensuring that Azure works. That has been kind of an up-and-down experience, but in our last release we've had to make a lot of security modifications. So if you haven't noticed, several of our change proposals that have gone through have been around things like disabling support for non-tokenized communications with the metadata services, ensuring that we have support for faster network interfaces, and ensuring that we're getting the right kind of support for that in the configuration of the image creation. That said, our image creation is done using a terrible tool; okay, a wonderful tool, a tool that was wonderful in the time that it was heavily maintained. But we have other options now, and I think that as we look to the future, we'll start to use more Ansible rather than additional code development for our devops. We decided that we would try to build our own collection, so we're working through building a collection that will support our requirements for upload into Oracle, into Azure, into EC2, and into GCP, and we find that that's really where we want to be.

So when you think about where Cloud belongs, Cloud kind of extends out into the experience of the Vagrant images. We have an expectation that we're going to extend that cloud image into WSL. The work that we've done around leveraging Kiwi in the context of Koji has given us the option of creating our own WSL image that's consistent with the images that we're creating for the other cloud providers, and that will give us the option of kind of refilling that space. There are other people who are doing a great job of re-rolling the cloud packages and placing that into the WSL directory, but we'd like to have our own. So I guess here we have a packaging strategy that we're just forming, and it's similar to the way that the NeuroFedora team has settled into their packaging model. We want to have several packages that are associated with things that are central to the cloud providers, all sort of centrally located inside the cloud team. And the reason we want to do this is because we don't want any one person to have to be responsible for the packaging model, or for making things work all by themselves. We'd like this to be a collective, where those packages that are necessary, say the Google Compute CLI, the AWS CLI, and other types of tools like the cloud development kits that
come from these groups, and the cloud shell integrations in the desktops, all work and function in a way that is consistent, and where we have a unified voice in the way that we communicate back with those providers. If you haven't looked at some of those CLIs, well, anybody who's worked with cloud will probably know that they've been bitten by the experience of a cloud provider's utilities being based around an earlier version of Python, like Python 2.7, in the modern world. That gives us a lot of pain points when trying to integrate, and we want to have a unified voice on that. Other things that we run into: we have cloud providers, or developers, who will constrain the requirements. So if they're using a Python-based CLI, that CLI may be leveraging some version of Pygments or some other package that is consistently lower than what we're producing for Fedora, and we have to go back in and relax those requirements. This is a common thing in a lot of software, but it's something that we really don't want to do; we want to have consistency across the board. I try to do this across the board in my professional life too, but here in Fedora this is the unified voice that I want us to have, and to make sure that we have it as a group. So that gives you a sense of where we see our strategy and where we see our position.

Moving on from that, we also see the cloud as very much integral to the workstation experience. Obviously a lot of our testing goes on in a server environment, or in openQA, in ways that are just sort of basic, and we want more advanced configurations. We want to look more at what the boot requirements are. If something fails in one of our cloud environments, we really want to be able to take it back to a server that we own, or a workstation that we own, and take it through its paces to determine whether there's something we can locate there, rather than trying to debug with no serial console. And I see us as sort of in line with the Fedora Server model. The Server model obviously has a lot of things that we don't need, so we don't require DNS; that's a service that's expected to be provided. We would maybe provide some sort of intermediate, faster-response resolver, but we won't provide full-fledged DNS support. We don't need DHCP, obviously, because there is already a DHCP provider; there's no way to get an IP address on any of the cloud providers without having your own DHCP assignment. Let's not talk about IPv6, though. And so we try to keep that image minimal. But that said, there are lots of things that we can do. The idea that I put on here, and this is an idea that I feel like we can flesh out more, is that almost every cloud provider has some IDE model that they're using that is cloud-based, or, I'm sorry, web-based, right? It's a web-based API or interface, like Cloud9, and the Cloud9 API sits on top of some instance somewhere. And today, the team that's responsible for it,
they publish instructions on how to make your own, but then they don't actually make one that is associated with Fedora. So I wholeheartedly believe that that is something we want to provide for our development engineers who are working in that space, and who are interested in continuing to work in that space, so that they have a segue or an entryway into the Fedora community and our projects without having to change what they were doing previously; they just introduce Fedora into their workflow.

Kiwi with Koji is one of our big paths forward, and the reason for that is this: we like osbuild, and we like the way it works. We love the way it functions as a service; it has a very clear position inside of the community and our efforts. But right now there are a lot of things about it that limit what we can produce, including the WSL images, including container-based images, including images that are associated with Azure in the way that's expected. So we want to ease our way into what's being done inside the Fedora community, but we also want to introduce Kiwi, because we think it's a very effective tool. It has a great way of layering the configs, and we can have a composable configuration that we can break down without being responsible for everything. That is one of the reasons we've chosen it as part of our process. And all of our current tools are in a maintenance phase. We're producing architectures for ARM and for x86_64, and we have special friends inside the Red Hat teams who are producing s390x machine images for us and then reporting back if there's any inconsistency in the way those are functioning. So those are the architectures that we're supporting. I keep hounding the internal folks at EC2 to give us access to the Mac instances so that we can try out this Asahi stuff.

And we want to see as many customized images as we possibly can. If there are things that people need in the context of a specific cloud provider, we want to make sure that we're capable of producing them, and that we have, in the context of these Kiwi configurations, a composable component that will allow us to make whatever minor or major tweak needs to be done for that specific environment. So let's say you have an arm64 instance, and that arm64 instance requires that the IOMMU be disabled, because that caching layer is not actually there in the ARM architecture, it's only there in the Intel architecture, and accessing that cache can add incredible latency to your communications with the processor. We want to make sure that those customized configurations are in fact included, and that the agents associated with Google Compute Platform versus the agents associated with Azure are specifically addressed. But then we also want to make sure that we have a generic experience that our users can take advantage of in their process as well, because we don't want them to lose track, again, we don't want them to lose track of the things that they've already developed and have to move to something different just because
they've decided to use Fedora. So that's one of the reasons we think Ignition is a great idea, but we don't want to forfeit cloud-init, right?

The package dashboard integration is something that we're working on right now. A few of the packages that we're including in that are the hibernation agents for the EC2 instances, and then the Windows agent actually includes that same kind of hibernation. That one tends to be something that you wouldn't want to put anywhere else, because they modify the sleep.sh files, and modifying sleep.sh is kind of a no-no anywhere else; Peter's not going to do that on Server. So we have to make a conscious decision that this is something that's going to be beneficial outside of our main distribution objectives. But that said, making that decision doesn't mean we immediately let the cloud provider off the hook. It means that we work with them upstream in the way that Fedora is supposed to, which is: we go back to those service teams and say, guess what, you're modifying something that you should be pushing upstream, your modifications should be pushed upstream, and people should consider these to be consistent with the way you're supposed to handle that. So we want to make sure that we have that parity.

So here are a few things that I am asking for. We need test plan updates; we have lots of things that are not being tested as well as we want them to be. We need support for our Fedora Cloud test days. If you're not familiar with the cloud test days, they're wonderful things for us to get into, and a great place for us to have more automation. Some of you in here from the Ansible group may know a bit more about automation and automation platforms; we'd love your help in making sure those are ready for use and ready for testing, so we can get a lot of feedback very quickly. Btrfs, if you don't know, is an important part of our process, and this is something that separates us: you'll see the work on Enterprise Linux Next and how Enterprise Linux Next kind of deviates from how the Fedora cloud image looks. We have this base of Btrfs, similar to the way that Workstation does, and one of the things we wanted to do here was create an environment where we could really separate out that experimentation and provide a lot more feedback on things that might be beneficial far in the future, or even closer. It gives us an opportunity to do some of that exploration, and one of the things we want to do, directly related to some of the objectives in the Red Hat experience, is to provide some microkernel models that people can experiment with. And we need lots of documentation; I'm short on time, so I'm just going to say that right out loud. These are some of the things that I think would be great for us to have as companion guides, and as more friendly documentation for each one of the cloud providers. And these are some of the things that we're
doing: the Cloud9 integration; we're looking at doing NeuroFedora as a connected image with the NVIDIA controller drivers already established (a lot of the cloud providers already have distribution rights, and we can produce images that have the associated NVIDIA controller drivers integrated back in); and then we're also working on the workstation model, so VDI is another thing that we think is really important. We'd like to do more integration with the HPC technologies like ParallelCluster. We just really want to make things more flexible, more agile, and bring more of this opportunity. We have another talk on Thursday about building your own images and how we build those images, and we really want to take this all the way out: this IoT experience can definitely be brought all the way back into your cloud experience, whether you're running it on top of your own server or on top of a cloud provider. So with that, I'll take any questions you have. I appreciate your attention; there's a lot of attention in the room, and I'm grateful. Any questions? Well, if you find that you have questions, you can find us here, I'm here the whole day, and I always love to talk about it. The Cloud SIG meets every other Thursday, and we meet in the early morning, so it's not a very APAC-friendly time frame, but I'm happy to move that so we can increase the amount of participation if we need to.

Twenty seconds left. Oh no, I was just going to ask: how did Amazon take your suggestions on how to fix the EC2 hibernation agent, to make it more upstream-friendly? Honestly, they took it really well. There's a specific engineer who made his first kernel commits because I was collaborating with Dave Chinner on how we could make it work. We had run into a bug, an NVMe bug associated with XFS, and Dave and I were talking back and forth, and I said, you know, this guy Xiao Yi is working on this bug, can you help him get this kernel commit done? And the two of us together coached him on making the commit and doing the work inside the LKML, and then Dave took his patch. Obviously this was his first patch, so he took it with some spacing issues and tab-versus-space mixes, which probably wouldn't have happened if they hadn't already had that kind of behind-the-scenes conversation about what their goals were and how to attack it. But it made him a first-time kernel committer, and that to me was the experience of a lifetime: to say that this is the EC2 hibernation agent, and the reason that Xiao Yi is working with confidence on the kernel is because we had this conversation in the context of Fedora, and we had it with the owners of that code, who also just so happened to work for Red Hat. Cool. Yeah, well, I know I'm out of time, so I'll say that I appreciate everyone coming and listening, and if you want to participate, please let me know, and if you want to talk further about what we can do to streamline how I interact with the teams you work on, I'm really looking forward to that.

And for years, the way that we've done
our uploads has been Image Factory, but Image Factory really only supports one cloud, and if you look at it, the code is built around libcloud. If you don't have any cloud experience that goes back to Joyent, those are the people who brought us npm and Node.js, they brought us all of that. Joyent was a company that was first on the scene, and Eucalyptus was there, and then libcloud was built; I think it was built by Mitch Garnaat, who was probably part of the libcloud team. And Mitch went on to do the boto libraries for Amazon, and he did those independently, and then Amazon said, you know, we'll take over the maintenance for that, we'll bring a whole team to it, and that's how the boto library became boto3 and botocore and the AWS CLI. But the reason we wanted to do an Ansible collection is because libcloud was one of those things that people used out of a desire for a generic experience; they wanted to create a generic experience around it, and libcloud was built around Eucalyptus. Red Hat had an initiative back when Joyent was nascent and the concepts around Eucalyptus were still just called distributed computing: a program called Deltacloud, which still exists, and libcloud is a component part of that Deltacloud initiative. All of that is based on Python 2.7, and of course it fell out of use; it went into the Apache project and did what things do when they fall into the Apache project. There are still people maintaining it, which is great, they're just not making advances on it, and they don't fix a whole lot of the bugs, just the ones that are critical to whatever it is they're doing. And our use of it never got beyond the AWS configuration, so we don't have any advanced support; there's clearly no support for OCI, Oracle Cloud Infrastructure, and there never will be.

So we know that we have a mission there, which is to create that consistency, and the way we thought was best integrated with how we work is to leverage Ansible in the same way that infrastructure leverages Ansible. That way we have the ability to just push whatever into that Ansible playbook, and then the Ansible playbook can be responsible for the image creation. There are things inside of image registration that wouldn't necessarily lend themselves to that, like in the Red Hat images, because Red Hat images have assigned billing codes and things like that that are kind of hidden behind the scenes, and we probably don't want to deal with those; they're only useful to maybe three actual users of the image registration, so that would never be a prioritized function. But it's something that we can use to build, very specifically, images that leverage the same kind of components that we have both in the kickstarts and also in the cloud utility, in the Image Factory that we use today. They integrate really well with Koji; it makes it super easy for us to do an event-driven architecture. Once we have a consistent image and that image goes through tests, once the test promotion happens, we can automate the deployment based on that collection.
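To make that upload-and-register step a little more concrete, here is a minimal Python sketch of what a task in such a collection might wrap, assuming boto3 and a raw image that has already been pushed to an S3 bucket. The bucket, key, image name, and region are illustrative, not what the Cloud SIG actually uses:

```python
import time
import boto3

# Illustrative values only; a real collection would derive these from the compose.
BUCKET, KEY = "fedora-cloud-staging", "Fedora-Cloud-Base.x86_64.raw"
ec2 = boto3.client("ec2", region_name="us-east-1")

# 1. Turn the raw disk sitting in S3 into an EBS snapshot.
task = ec2.import_snapshot(
    DiskContainer={"Format": "RAW",
                   "UserBucket": {"S3Bucket": BUCKET, "S3Key": KEY}})
task_id = task["ImportTaskId"]

# 2. Poll until the import finishes (there is no built-in waiter for this task type).
while True:
    status = ec2.describe_import_snapshot_tasks(ImportTaskIds=[task_id])
    detail = status["ImportSnapshotTasks"][0]["SnapshotTaskDetail"]
    if detail.get("Status") == "completed":
        snapshot_id = detail["SnapshotId"]
        break
    time.sleep(30)

# 3. Register an AMI backed by that snapshot.
image = ec2.register_image(
    Name="Fedora-Cloud-Base-example",          # illustrative name
    Architecture="x86_64",
    VirtualizationType="hvm",
    EnaSupport=True,
    RootDeviceName="/dev/xvda",
    BlockDeviceMappings=[{"DeviceName": "/dev/xvda",
                          "Ebs": {"SnapshotId": snapshot_id,
                                  "VolumeType": "gp3"}}])
print("Registered", image["ImageId"])
```

In a collection this would live behind an Ansible module or role, so the event-driven pipeline only has to say "publish this compose" and the per-provider details stay in one place.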
The collection itself does some things that are not standard, so people would kind of say, yeah, that's a nice idea, but you're still taking on a lot of requirements from other collections to make this happen. But I think we have a fairly good foundation for why we're supporting additional collections in there, and for leveraging collections that already have some support today. Google Compute is a supported configuration for Ansible automation, and the same goes for AWS, and we have some other things coming up on the Azure side. So that gets us our primaries, and then we can use some basic commands for some of the smaller providers, more what they call managed clouds rather than the public clouds, where we want to make sure we still have images too. And this makes it possible for us to have more of a collaborative experience. So if you're asking what I think is really important, for me it's making sure that we have this kind of distributed architecture that's easy to drive with event-driven experiences around the QA team's results.

What else would you like to know about the cloud? I'll get more into the cycle, I guess, now that I realize I have more time; this is going to be more fun, actually. I've got it, fine, I'll do it from here. Okay, I'm going back here to this one. So this is an exciting part for me: the Fedora cloud IDE. The reason I think this is kind of amazing is that it represents a lot of things, Btrfs being one of them. This is totally deviant from what you would expect a cloud image, I'm sorry, an image, to do. If you were pulling together a spin right now, you probably wouldn't pull together a Cloud9 spin, right? And the reason we're able to do that is because we can do a whole lot of this in post. So I can do this in ways that are consistent with whoever has the contract. If I look at what happens in a service team at Amazon, they're required to use EC2 Image Builder, and I can build a document that is just as easily leveraged inside EC2 Image Builder with Ansible as one that's used inside Ansible Automation Hub. That way, if we have an event-driven architecture, or a requirement for building something that has, say, a zero-day exploit, we can literally roll that into a golden image pipeline, and the golden image pipeline can be associated with producing all of those images. That's something that's really exciting. If you're looking at that, you can think about it in terms of the characteristics of other service workloads, and my goal is to become consistent with the requirements that Stef Walter has around software as a service, basically supporting a lot of the work that's being done on top of OpenShift and OKD in the community to identify how we can build software as a service. In fact, one of the guys on my team at Amazon has kind of adopted the concept of doing Pagure as a service, and leveraging that is one of our ways of producing visible support for an application on top of both the Fedora Cloud
experience and also the OpenShift experience. We don't want to muddy things; from a cloud perspective, I don't want to muddy the experience that users have around OpenShift and around container-based workloads. I want to enhance that, and then, where they have specific techniques or expertise, ensure that they have supplemental models for that, and then provide things like the cloud IDE, where they would not have normally chosen to go to, say, Eclipse Che to do their work. They would have been tinkering around with Amazon Linux under the hood, and they think that's a great fit, but they're really looking toward a RHEL architecture or infrastructure, and they want to know what that RHEL infrastructure is going to look like when they're doing that configuration. And then, because we have this package maintenance group, we can take the package maintenance for things like the cloud development kits, bring those cloud development kits back into those IDEs immediately, and have a point-in-time support model, so we know where our support exists, and our customers, I'm sorry, I keep saying customers, they're not customers, they are users, our users are capable of making decisions that qualify their workloads in ways they were already familiar with. So the strategy is: meet them where they're living and where their technology exists.

Btrfs again; I'm jumping around a little bit, sorry about that. The Btrfs decision we made supports a lot of send/receive models for snapshots and snapshotting. Obviously we could have chosen a much larger file system, but that's already covered by a lot of the larger cloud providers, like the EFS utilities for the Elastic File System, which uses NFSv4 to create parallel file storage across multiple availability zones, and we can take advantage of those EFS utilities to just mount that on the instances themselves. That's actually in a lot of the OpenShift configurations, based on some of the experimental work we were doing around Fedora. So, Fedora Cloud and Btrfs: Kiwi gives us some extra flexibility around Btrfs. It makes it possible for us to leverage techniques that you would previously only have gotten with LVM as a foundation. But LVM is terrible in the context of the public cloud, not because it's not a sound technology, but because the drives themselves are already striped: anything you're actually using on the NVMe drive, you're not using one drive, you're using multiples underneath, and the LVM structure doesn't add anything. The increase and decrease of the volume size is already there; you just have to grow or shrink the file system, and if you need another partition, that partition can take advantage of another EBS volume, and then you don't decrease the I/O performance on the volumes where you need it. So say you're doing something like /var/log or /var and you're creating a database, and that database is living inside that EBS volume: you don't have to sacrifice your operating system's performance to use it. One of the things we can take advantage of with Btrfs is snapshot send and receive: we can send the contents of the current /var to the new EBS volume, with whatever IOPS reservation you have, and then that can go back into place on the instance. So now I can mount this new EBS volume, with all the content I had in the original /var, and it's just a snapshot, so there's no waiting period: make the send, quiesce the database, push it out, and you're done. Minimal downtime.
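As a rough illustration of that flow (not the exact commands from the talk), here is a sketch that drives Btrfs send and receive from Python, assuming /var is a Btrfs subvolume and the new EBS volume is already attached, formatted as Btrfs, and mounted at /mnt/newvar. The paths and names are illustrative:

```python
import subprocess

SNAP = "/var/.migrate-snapshot"   # read-only snapshot of the live /var (illustrative path)
TARGET = "/mnt/newvar"            # new EBS volume, already Btrfs-formatted and mounted

def run(cmd, **kw):
    # Thin wrapper so a failed step raises instead of being silently ignored.
    return subprocess.run(cmd, check=True, **kw)

# 1. Take a read-only snapshot so the source stays consistent while we copy.
run(["btrfs", "subvolume", "snapshot", "-r", "/var", SNAP])

# 2. Stream the snapshot to the new volume: btrfs send | btrfs receive.
send = subprocess.Popen(["btrfs", "send", SNAP], stdout=subprocess.PIPE)
run(["btrfs", "receive", TARGET], stdin=send.stdout)
send.wait()

# 3. At cutover you would quiesce the database, send a final incremental
#    update (btrfs send -p), then remount the new volume at /var.
```

The point of the sketch is just the shape of the operation: the copy happens against a snapshot, so the running system keeps going, and the cutover window shrinks to the last incremental send.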
We can take advantage of a lot of that flexibility inside the operating system. We can also shrink the operating system, which is one of the other things we really like, and it's a big deal, because a lot of people, and I do this, I'm totally guilty of it, will stand up an instance, build CentOS or other images for the marketplace, and then shrink the volume. That means I can have a 60 gig volume for a few minutes while I create my artifacts, then decrease it down to 20 and make a snapshot. That AMI is exactly what I need; it has all the content required to just make an improvement to the base disk, and then I shoot it back up to the marketplace. It is now, yeah.

Oh, okay, so we're confused here. Well, I work with the Amazon Linux team as well. A long time ago, the idea of Amazon Linux, what at that point was going to be 2022 (and then it slipped), was envisioned by Max, a former Fedora project lead. Max wrote the documents and created the vision for what this would look like and how we would do the implementation, and then he got another really great opportunity; he's working here now. But what he left us was that legacy of how Amazon Linux could be associated with the upstream experience in Fedora, and originally the goal was to branch at F35 and use F35 as the foundation for the first version of Amazon Linux 2022. The concepts there are twofold. Are you familiar with Amazon Linux? This is a story that will make Peter's hair curl a little bit. Amazon Linux was originally created because senior leadership wanted to accelerate their support for new hardware, and originally they built on RHEL 5; if you're familiar with that, RHEL 4 and RHEL 5 were the original foundation for that whole cloud, and they had Matt Wilson and Cristian Gafton and a bunch of other people like that. But the vision they had was to move a lot faster, so their options were to build their own distribution or to do something a little bit derivative. Well, they started to take a lot of what was being done in the CentOS team and moved that into what they called Amazon Linux: removing the trademarks, doing all the things, focusing on the hardware they had at hand and the modifications they made to their hardware. There are lots of things, there are security chips and all sorts of things
that you can keep from landing in the actual flash RAM, things like that. So they wanted to make sure they had those improvements in place, they pretty much needed them, and then they thought, you know what, we can make this available for customers, customers can use it too, we'll find out more about it, we'll get more bugs. And we all know this story, right, like with the new CentOS. So the goal there was to keep things consistent for customers, and one year turned into two, turned into three, turned into four. And then there was PHP: there's PHP 4, then there's PHP 5, then PHP 6, and there are all these customers who are using it. We had the business discussion earlier; here's the data-driven part, and the complexity of the data-driven part is: we have all these customers running PHP, but they're running PHP 4, and these customers need PHP 5. So you started to see this hodgepodge of things, something very similar to what was happening inside the Red Hat community, and that's the birth of modularity, right there, the same experience. We all had the same experience trying to maintain multiple versions, and software collections, and all that. So Amazon Linux became this really complicated, hard-to-maintain thing, way divergent from where it was originally, and compatibility was not an option. But they still maintained roughly the same structure as the CentOS builds, which made it easy to use a lot of the RHEL 5 stuff originally, then the RHEL 6 stuff, then RHEL 7, and then some of 7 and 8, around different kernel versions. And so Amazon Linux became Amazon Linux 2, and Amazon Linux 2 tried to push GCC much farther, and then Amazon Linux 2022 was coming up, and then the restructure, the reorg, COVID hit, things happened that impacted lots of people's expectations, and it slipped into Amazon Linux 2023. Amazon Linux 2023 ended up, I mean, we call it working, but branching from F36, and there was never any goal, and this is one of the things that Max set up: Max never had a goal of being 100% API compatible with Red Hat. His goal was to be 100% focused on customer problems and customer experience, and to work through that in the context of what was inside the distribution. That said, they have very specific goals around support: they want to support the machines they have in their data centers, and the configurations that are important to the services that are running. So that makes it much different from what our goals are in Fedora. My goal for Fedora is to ensure that if there is a workflow we can help a user integrate, then we help them integrate it in the context of the Fedora experience, and lead them into the workflows that we are perfecting. So where we might have very specific goals around ensuring that we have, say, baseline support for Btrfs and for other
component parts, the Amazon Linux team will be focused on what their performance requirements are for the EBS volume, or what the Lambda service needs in order to have a good foundation for container-based workflows. And they don't care if there's an EPEL package associated with it; that's not important, that's more like: you can go bring that, compile it, and let us know if there's a problem. And they do a public PFR just like we do, and they work off of that based on the data-driven approach: they prioritize based on customer demand; we prioritize based on where our expertise is, who's working on a project and motivated to do it, who's dedicated. Yeah. And I think that if you look at the work that Microsoft gave to Flatpak, or the standard program, you'd find you may have the same kind of experience. We've had lots of really interesting conversations, and I can talk about a lot of the ones around Amazon just because I'm so deeply involved in them, like when we were talking to the SDK team and helping them understand what our problems were and how we could help each other. One of the great moments, this is another of those great moments, like having someone make their first commit: we were working with Kyle Knapp, who is responsible for most of the work done on AWS CLI development, and Tomáš Tomeček asked if he could be part of that conversation, and when he started down that road we began a very serious discussion around Packit, and the Packit implementation inside AWS CLI v2 is, at this point, the gold standard that I'm leveraging to talk to other teams about how they can integrate back into that. And not just Amazon: talking to Zach and his team at Google, and talking to David Duffin and his team at Microsoft, to make sure they understand that we have this consistency model inside the Fedora experience that gives us what we need in terms of flexibility and rapid integration. Because the AWS CLI, literally, they release every day; if you're paying attention, boto3 is out, and then out, and then out, and then out again. It is painful if you don't have a good automated process for dealing with it. And Nikola Copa, on Tomáš's team, is the co-maintainer, really the primary maintainer as far as I'm concerned, with me, on that in the project. Okay, great. I'm out of voice too, so it's a good time for me to take a rest. Any other questions? Specific technology support, something you're working on; if you would like to bend the rules on that, we can maybe help you bend them. Versus Emacs? You know, I'm an Emacs man, so my life is all about Elisp, but if nano works, then it works; they're all just an install away. The other thing is that the infrastructure team creates the packaging for the updates, and the repos exist in each one of our individual standard regions, so whenever you're pulling your updates for Fedora
you're in fact not creating egress charges in the availability zones; you're still pulling from the same locations, and that's the foundation we use. Hey, that's your computer, not mine. You could do whatever you want to your computer; it's not mine, it's not my computer, it's somebody else's computer. Sure.

So, hello everyone. My name is Neal Gompa, and this is David Duncan, and we're here to talk to you about Fedora Cloud KDE. This is a bit about all the stuff we do. The most important thing is: I do a lot in Fedora, I do a lot in KDE, I do a lot in CentOS and other places; David here does a lot in Fedora and he does stuff for the cloud, and we're here to show you what we're trying to do together. He buddies up with me a lot, honestly, but that's because he's a great mentor and a super helper, so I've learned a whole lot, partly in the context of the things we're going to talk about today, and also just generally about Fedora, and it's been a great experience for us to work together on many things. Many things, exactly.

So we first want to talk a little bit about desktops in the cloud. The thing about virtual desktop infrastructure is that it's been a kind of holy grail for, gosh, decades at this point. People have wanted to eliminate hardware at the edge for as long as it's been possible to have computers connected to each other over a network, because in the very beginning you didn't have hardware at the edge: you had these thin terminals, dumb clients, whatever you want to call them, and they would go and speak to some kind of mainframe or supercomputer or whatever. In the current era, computers do stuff, but the problem of people breaking stuff is also still true, and so people would like a way to do desktop computing without the computer part at the desktop. And part of the reason this came up as something we thought was super interesting to do is, you might want to talk about that. Oh yeah, sorry. One of the things that made this super interesting was that I was having a conversation, just a casual conversation, about the use of desktops in graphics-intensive workloads, like film and animation. I was talking with an architect who's responsible for a fairly major Red Hat customer, and it was pretty animated, and I got somebody from our media and entertainment group at Amazon to just talk to them, and I just listened, and I thought, you know, there's a bunch of free stuff out there that we could use to do exactly what's going on in this multimillion-dollar solution. The first opportunity I had to talk about it was in the context of RHEL Workstation, and so RHEL Workstation became one of those things where I was like, we've got to beat down the doors and have this available for anybody who's doing graphics-intensive workloads and wants to see how this works. And it's a thing now. It is a thing now, yeah, and that's very exciting to me. Obviously, crafting something like that was really interesting, because we had to craft it in a way that didn't create a cannibalized workload for Server just
generally. In the cloud, if you can run anything anywhere, you just run the cheapest one, right? So we wanted to make sure that people were using this for a specific type of workload, which was single-purpose: you want one or two users for a RHEL Workstation, you want them to have task-specific, user-centric workloads, and other than that, the functionality bits are pretty much the same. So we wanted to make sure we had the right kind of business objectives around that, and it led to building a big picture of what our market model would be and how we would price it. And again, with all of this: where's the open source product? Where's the open source product to go with it, and how do you create an upstream for that kind of experience? Neal and I thought about it, and we thought, this is the way we can actually make that happen.

Yeah, and then from a community perspective, a number of people had approached me over the years asking about doing some of these kinds of things for much more prosaic use cases, thinking about things like: in libraries, they want thin client terminals, because the computers need to be cheap because the kids are going to break them, and they'd like to not have to spend thousands of dollars replacing broken computers; and in schools, where they want to do something a little more powerful but don't want to expose the hardware. In labs they're usually using Chromebooks, and Chromebooks are not powerful enough to do certain types of things, but you still want to teach in real environments. And then we get to environments that are a little more interesting and special, and that's developing in the cloud, for the cloud. In the past 10 years we've started to see a larger shift in developer experiences, away from Windows and macOS, surprisingly, towards Linux. As of last year, the Stack Overflow survey shows macOS and Linux basically neck and neck, trading blows year over year, and I think this year Linux actually surpassed macOS in terms of developer preference. One of the bigger challenges around doing a lot of the developer-type work for cloud applications is that the best developer experience always requires a whole bunch of tools that you run locally in your environment, integrated with your IDE and so on. Even the best web IDEs, and there are a lot of good ones, especially things like Dev Workspaces on OpenShift, and Cloud9 on AWS, provide a lot of capabilities and interesting possibilities, but in the community space nobody is really building tools for that; they're building them for your computer, because that's what they're interacting with. And we've got to bring that kind of experience into the cloud as well, especially if you're in an emergency situation, or far away, or, god forbid, you have to work from your phone, or you're an intern, or a contractor. There's a litany of reasons why you'd want to be able to do this, and we wanted to be able to do it with Linux, because there isn't really anybody doing it with Linux, and it seems
like a big opportunity. Which kind of leads into why we are doing Fedora KDE for the cloud. Well, because the Fedora KDE SIG, which is the SIG I chair, maintains the KDE stack, and we have a great relationship with the upstream KDE community, so we're able to provide an excellent experience with that stuff. As part of providing that excellent experience, we provide fresh KDE software as upstream releases it, and we collaborate with them to enable features and capabilities that we feel our users, across our various targeted audiences, would actually benefit from. And that kind of leads to, you want to talk about, well, I wanted to, yeah. So our excitement today was that we wanted to demo this, but there was a rollback that eliminated the Wayland support, so we are without Wayland support, and with a requirement for the xorg utils, which also don't exist in Fedora. But what we did do was create a series of playbooks and configuration that provide a locked-down desktop. So now you can actually create the KDE configuration on top of a Fedora Cloud base image, and that Fedora Cloud base image can then be used with a security group. There's a security group that gets created, and that security group locks you to wherever it is that you started that instance from. I started one today, created it in my account, and I have a single IP address associated with the hotel, or whatever the hotel network looks like, for real, I don't know. And that prevents you from having something that's just wide open to the rest of the world; it locks us down to a specific client.
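To give a feel for what that lockdown amounts to (a sketch only, not the actual playbook task; the group name is made up, and 8443 is NICE DCV's default listening port, so substitute whatever your client actually uses), the rule is roughly this with boto3:

```python
import urllib.request
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Figure out the public address we are launching from (one illustrative approach).
my_ip = urllib.request.urlopen(
    "https://checkip.amazonaws.com").read().decode().strip()

# Create a security group that only admits that one address.
sg = ec2.create_security_group(
    GroupName="cloud-desktop-lockdown",              # illustrative name
    Description="Allow only the launching host")
ec2.authorize_security_group_ingress(
    GroupId=sg["GroupId"],
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 8443,                            # NICE DCV default port
        "ToPort": 8443,
        "IpRanges": [{"CidrIp": f"{my_ip}/32"}],     # just this one /32
    }])
```

The instance launched with that group is then reachable only from the network you started it from, which is the whole point of the prototype's lockdown.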
Now, the client that we use is the one that I know: NICE DCV is the client, and that's what we were excited about using. NICE DCV happens to be free for use on top of AWS, and the reason we wanted that experience is because it's free, and the exciting part for us is that it works in the context of the GPU-based instances, which we can support on Fedora, and it also works on just general hardware. So if you have a smaller workload, or you want to do an experiment, maybe you want to roll something in KDE, I don't know. Yeah, like one of those things could be: maybe you're prototyping a new lab environment for some kids in a school, and you want an instance up to see how that's going to work and make sure everything is good. You could do it in a small general-purpose instance, make sure everything's going to be great, save it, and then when you're rolling it out for real you can roll it out in the right kind of configuration needed to support it for the kids. Exactly. And I know we're all constrained by lots of security requirements; anybody who's on a security team of sorts here, you have my condolences, I don't know how hard your compliance requirements are, and everybody wants you to produce results even when there's not a security bug, especially when there's not one. I run into this a lot in my day job, where I have an instance, it's running, it's not optimized, everybody knows it's not optimized, and they know I'm running it at whatever, three percent of its actual optimized use. So just very quickly creating a machine image from that, and snapping it, making it possible for me to start it over again from an S3 snapshot, makes it a lot cheaper and decreases the security profile to something people can live with, hopefully. If I encrypt the snapshot? Right, exactly. So the goal here was to create something that was super simple, and then we could just get to the next step. We're doing the install of NICE DCV, but we'll have to rev that once we get the code for the Wayland support. And the idea here is to demonstrate, this was intended to be a prototype to show, that you can take the Fedora Cloud base, stack on a desktop, do some minor configuration tweaks, and suddenly you have a cloud desktop, which we almost got there. Yeah, and then, well, Neal's working on something else, though, right? And that's the future features part of this.

Some of the things I've been discussing in the background with KDE folks upstream is the idea of a headless KDE Plasma. Now, you may not have seen my talk at Akademy, so I'll quickly recap something I covered there about Fedora KDE: for KDE 6 in Fedora, we're only shipping a Wayland environment. No X11 session; all that stuff's gone. X11 applications will of course work in the Plasma Wayland session, but we're not doing X11 for KDE Plasma 6.0. Upstream KDE can be built to support an X11 session, but we're not shipping that in Fedora. On top of that, there have been efforts upstream in KDE to develop a way for KWin, the compositor, to have a headless mode that can then be backed by RDP, to use as the ultimate head for running the Plasma desktop. So then we can use RDP and have a fully free stack for doing a virtual desktop from the cloud to anywhere. This will give us all kinds of other capabilities, but the most important part is that because it's fully open and generic, we can have this in any and every cloud, rather than being specific, as the prototype is, to AWS's NICE DCV. I mean, you can use DCV anywhere, but the thing is, it's only free when you use it on the platform. More importantly, because it's all open source, you can see how everything works, you can figure it out, improve it, learn from it, and build on top of it, and that's the key aspect we want for Fedora Cloud KDE. We're targeting this as something we can start really building out when we land Plasma 6 in Fedora, and start working on that headless Plasma mode. Yeah, and this dovetails with some of the other things I think are really amazing, like the Linux System Roles, making it possible for you to apply this in a way that's kind of generic. And so if you're interested in this kind of stuff: in addition to the Fedora Cloud SIG and working group that's working on this, the Fedora KDE SIG is actively working with the Cloud working group on this front. You can see our lovely members in there, and all the avenues where you can come talk to us: we're on Matrix, we have mailing lists, we have an issue tracker. Come in and join the fun if you're interested
in this topic. So, any questions? Awesome, and this is why we finish with five minutes to spare.

Thank you very much for the presentation. I'm an avid KDE user in Fedora, but as an avid KDE user, my question really was, from the start: why KDE in this case? I love KDE, but it's one of the most resource-heavy desktop environments out there, and in the cloud you pay for resources, so if it's AWS, that means you pay more. With this scenario I'd be really interested in deploying on a VPS where you have, like, four gigabytes of RAM; what KDE can you think of there? So my question is, how much of this work can be applied to a lightweight environment, like the plain window managers we used to use 10 years ago, and what's the rationale for using a heavyweight environment like KDE in the cloud?

So, let me pick apart a couple of things here. The first is the idea that KDE Plasma is the heaviest environment, or one of the heaviest: no, it actually isn't anymore, especially when you're running in Wayland mode. The minimum KDE setup is just the desktop shell and compositor, and you don't need the KDE PIM services in a cloud environment anyway, so if you take all those out, your KDE Plasma desktop runs in about 120 megs of RAM, which is basically comparable to LXQt, Xfce, and MATE, with the added advantage of being fully, actively maintained, having Wayland support, and supporting all these new capabilities, like optionally being hardware accelerated through things like cloud-based GPUs. The second aspect is that most of what we're trying to do will probably not work in your average X11-based environment, because the idea is to leverage modern protocols that are network-efficient, so that it's responsive over the internet. The use of RDP as an efficient transport and communications mechanism for the desktop has basically never been done in a reasonably performant fashion with anything backed by an X11 server, but with a Wayland server we're able to cut out a lot of that fat and have a very optimized way of handling this at the compositor level, at the renderer level. That's why this is possible, and you'd be surprised how many fewer resources you need when you also don't have to deal with hardware quirks. Yeah. And remember that one of the things we're taking advantage of is that you're not going to use this all day: you use it for task-specific workloads, then you put it away and go do something on another system. So you can start and stop it all day; that means you're paying the monthly storage for the volume, but the compute power, which is your heaviest cost, you're using opportunistically. There's nothing that would stop it, once we have the final version, which is generic, from being used in a traditional virtual provider cloud like, say, Linode or similar; those are totally possible. The main constraint is making sure that your I/O and network performance are high enough in the environment you're running in that this doesn't choke. Yeah, and I think the thing is,
in the context of what we've done here, we have one deployment script for the instance that's created, and you can make some modifications in your extra arguments, but effectively there's just one super simple way to do it; we can make many. So then the question becomes how we do that, what our easiest path forward is for doing the provisioning, and the provisioning itself is the one thing we would focus on. Yeah, okay.

I have a question about the keyboard. I always have a problem with things like key shortcuts, you know, Ctrl-Alt-F1 or F2 or whatever, and your session is gone; how does that work here? So I can give you that answer, or do you want to give it? Yeah, go for it. There are going to be two parts. One is based on the client, and the client itself will allow you to grab the whole keyboard; the client that we're using in this context gives you the whole keyboard. KRDP will also give you that: if you're using the KRDC client, which is the client side of RDP from KDE, there's an option in it to grab the whole keyboard and intercept all actions before passing them on to the host. So you can use control sequences, special characters, all the fun stuff, AltGr, third, fourth, fifth-level modifiers, whatever crazy thing you need to do; it will pass it forward to the remote session. Yeah, you were in my previous talk where I talked about my love of Emacs, but I am a keychord maker. He really is, yeah.

Okay, and why did you use Ansible instead of Terraform? I ask because Terraform is, like, the most basic tool, and changing from one provider to another is quite simple. Well, I think it is super simple, but Ansible just falls into the family, right. There's also the other part of it: the only thing Terraform would actually do is make the cloud-based instance. You still need to do everything else somewhere else, and that would be done in Ansible. So if you're only doing one step in Terraform and all the other steps in Ansible, and the one step is trivial, you might as well do the one step in Ansible too. And on top of that, one of the things I mentioned was the concept of the Linux System Roles. That concept has entertained me for a long time, and thinking about how we can have those in the context of Fedora Cloud is something I think is beneficial.

I have two questions. I was really looking forward to the demo, so I'm pretty sad. Me too. So can you later post something on Mastodon? I would love to watch that, because honestly I will probably not get around to the playbook myself, I will have no time for that. I understand. So the answer is: I talked to Palo, the GM for the product, and said, hey, we don't see the Wayland support, and he said, oh, I'll have to get you the review code, and so as soon as we can post it out, we'll just make a video on the repository. And the second question: what's the future where I can expect something super easy, a ready-made variant or something that I just click and get in the cloud, or an AMI I can share? Yeah, so the goal is
that once we have the generic version, we're actually going to produce a layered image (a layered product, or variant, or spin, whatever you want to call it) on top of the Fedora Cloud Base edition that we can ship, with AMIs and launch buttons, and give people steps for how to connect to it automatically and all that other fun stuff.
So when, approximately?
I would be very hesitant to give dates, because I don't actually know when Plasma 6 is going to release. My guess is: let's wait until Palo tells me the Wayland code is ready for the client, and then, if KRdp is ready, that's the one we'll choose, because we want one that's just functional everywhere. The KDE community has been estimating that we're looking at the first Plasma 6 release by the end of the year or early next year. So if everything optimistically works out, I'd like to have a preview build that we start making available in Fedora as a generic image in the summer, maybe, or maybe in the fall, and go from there. Summer of next year.
Adam Williamson: we don't make forward-looking statements.
I'm sorry, yeah.
Quick question: KRdp is an RDP server, right?
That's correct, yeah. KRdp is a new library that one of the KDE folks has been making to encapsulate all the functions of creating an RDP server, to plug in as part of the back end for the Plasma desktop. It's not as lightweight as, you know...
So it's a protocol that we can adhere to.
Yeah. But here's the question: an architecture diagram of how it plugs into SSSD for authentication, for example, would be kind of awesome.
Yeah, I think we're going to have to have a conversation with Ab about that. Now, there's some other stuff we have to figure out: having the ability to render through RDP is only the first step, and we also have to figure out authentication and all this other stuff. That's why I'm saying I'm crossing my fingers for being able to do a preview next summer, but I genuinely don't know what it's going to take to get us to a point where I'd be comfortable saying this is a generally useful thing that you can kind of, sort of, maybe rely on, building your own versions of it to then roll out for your own thingies, and then somebody might want to do something more interesting with it down the road.
No, no, yeah, that's right. Wait, but at the moment, so you spawn a VM, it runs KRdp, but how does the actual authentication of the user session happen?
Well, right now we would be auto-login. We don't have a better way of dealing with this right now.
So there is simply no way to protect the VM from unauthorized connections?
Okay, so NICE DCV actually contains a client; we can create basic logins on the box, and RDP itself also supports handling user and password logins as well as Kerberos logins, but we have to hook all that stuff up.
Yeah, the plumbing for that would be, like, second generation.
Yeah, none of this stuff is simple, but I like where you're going. There are other things that are awesome about that, like an SSSD implementation with instance-connect models from different public clouds; that would be really cool, and it would actually be super nice, because then we can plug it in through PAM back into SDDM and then verify and
authenticate the login session automatically through your local credentials.
I think I've got a 700-day-old Bugzilla with Amy Farley about some of the things we could do there. That's actually Amy's problem domain, yeah.
Yeah, I don't know who would be doing it now, but that's the way it still is now. Yeah. Cool.
All right, people came! Okay, actually, I hope we have enough seats. Okay, so I'll just go ahead and get started. Oh, is the camera... we good? We good? Are we streaming? Oh god. Okay, all right, there are seats up here if you need, there are two seats left. So today I'm going to talk to you about Podman Desktop. Now, I made this proposal and the CFP review committee was like, hey Mo, this sounds great, but what does this have to do with Fedora? So I'm going to tell you what this has to do with Fedora. I've never given this talk before, and it includes a demo that may not work because I'm having computer issues, so I wanted to caveat with that too. I'm Mo Duffy, I'm a UX designer at Red Hat, I also work on the Fedora design team, and I work on the community design team. So this is sort of my manifesto for Fedora. If you were in Matthew's session this morning, I asked him about strategy and stuff, and this is sort of what I was getting at. So in Fedora (and can I walk to the screen, am I on camera if I do that?) I believe our largest user base uses Workstation or one of the desktop spins, but you can do so much more with Fedora as a technology. I have a blog post if you want to read it at some point, the QR code is there, but this is what I call the Fedora F model. The idea is, yeah, plenty of people can just sit here, use Fedora as a desktop, and be perfectly happy; they don't have to do anything else. But if you want to explore the latest of the OS bits we're building and what you can do with them, you sort of proceed up the F. Maybe we'll proceed down the middle fork, the orange one; that is what this talk is going to be about. The pink and purple one at the top is more oriented towards Internet of Things: we have Fedora IoT and edge stuff. I'm not going to talk about that, but it definitely comes into play. I'm going to show you Podman Desktop, and that actually has a MicroShift plugin, so you can deploy images to a MicroShift and play with that. So we could fill that in a little more, but we're just going to focus on this orange line, and basically the use case is developers who are using containers to write and then deploy their apps. So this is how it all relates to Fedora. Does this make sense? When I was talking to Matthew, I was saying we should focus towards some of these kinds of use cases too, because when we were doing the Fedora website redesign and I was talking to different Fedora users, a lot of them said, you know, I use Fedora Workstation and it's great, but what is Fedora CoreOS? I don't get it. And it's hard to explain. But if we talk about Fedora and model it for people more as this broader F, rather than just this bit down here, I think we can teach people, educate them, get them excited about it, and help them understand what it is, because I think we all understand it quite well; it's onboarding people where we have the problem. Anyway, I put in the talk description that this is beginner level, so I see a bunch of you in here that I think are
going to be quite bored by this, but this is how I explain to people what containers are. You move from bare metal: a computer is a computer, it's a physical thing, and you run software on it, the apps at the top. Then you get into virtualization: you have the operating system on your actual physical computer, then you have a hypervisor on top, and the hypervisor is basically "let's make many fake computers and run them on one real computer." So that's virtualization. Containerization is: let's create operating system installs that aren't even the full operating system, just components, and wall them off using cgroups, so one computer seems like many, and it's totally fake. One way to put it is that you're abstracting things up the stack; another way is that it's just making lots of fake stuff and layering and layering and layering. So this is how I like to explain containers. So, Podman: we ship Podman in Fedora, and it is the main way that you work with containers in Fedora. Podman itself, if you're running it on Windows or a Mac (here we have the host operating system, Apple emoji or Windows emoji), is based on virtualization, because Apple and Windows don't support Podman, or containers in general, natively. Containers are Linux, right? So they need virtualization to be able to run Linux, to be able to run these container engines. The VM that provides Podman on Macs is Fedora CoreOS; on Windows it's Fedora right now, and that's because of a WSL issue, but when we come out with Hyper-V support it will likely use Fedora CoreOS as well. So when somebody talks about using Podman on Windows or Mac, they're a Fedora user. Just remember that. Again, what does this talk have to do with Fedora? Well, let's onboard Fedora users that use Mac and Windows, why not? You don't have to use Workstation to be a Fedora user, and you don't have to use Linux on the desktop to be able to use Linux. We should embrace these users, help them, and bring them into our community. Anyway, at the top here where I have the purple delineation, this shows that Podman is native to Linux, containers are native to Linux, it's a simpler model; where you have the purple, that's basically what Podman is. The apps on the top are your individual containers, and Podman supports everything you need to run them. Oh, and I do have a little cultural note: the three seals in the Podman logo are selkies, an Irish myth about seals that turn into women. Their names are Katlyn, Margaret, and Rose. So, cultural relevancy. Okay, now we'll talk a little bit more about Podman machine here, but actually I already said all that, so I'm just going to skip this slide; I've not given this presentation before, so I'm not super smooth here. Okay, so now Podman Desktop. Podman Desktop is a developer-oriented user interface that makes all this stuff approachable. For many years there was no GUI to work with Podman, so you had to work with the command line, and that's fine, that's great: you can script things, you can play around with things. Someone like me, I'm more visual, I'm less deep into the lower-level tech; I still want to play with containers and build apps and deploy them, but I maybe don't want to be sitting in the command line all the time. So Podman Desktop makes it very approachable, and Podman Desktop, just like Podman, runs on Linux, Windows, and Mac.
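To make the "Podman machine" idea from a moment ago concrete, here is a minimal sketch of what bootstrapping that backing VM looks like from a script. It is not taken from the talk; the CPU and memory values and the image are just examples, and it assumes a recent Podman install on macOS or Windows:

```python
#!/usr/bin/env python3
"""Minimal sketch: on macOS/Windows, Podman runs containers inside a small VM
("podman machine"). These are ordinary CLI calls wrapped in Python; the CPU,
memory, and image values below are arbitrary examples, not the talk's setup."""
import subprocess

def run(*args: str) -> None:
    print("$", " ".join(args))
    subprocess.run(args, check=True)

# Create and start the backing VM (a one-time step on macOS/Windows;
# on Linux, Podman talks to the kernel directly and no machine is needed).
run("podman", "machine", "init", "--cpus", "2", "--memory", "2048")
run("podman", "machine", "start")

# From here on, normal container commands work and are forwarded to the VM.
run("podman", "run", "--rm", "registry.fedoraproject.org/fedora:latest",
    "cat", "/etc/os-release")
```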
So we can also get Fedora users via Podman Desktop. Again, just relating it back; you can tell I was a little spicy about being asked what this has to do with Fedora. The other relationship between Fedora and Podman Desktop and Podman is that Podman is run in our openQA for Fedora, and even though I don't think it's one of the core packages, it's still considered a package that will stop a release if it's broken. It's really important that we have a functioning Podman when we do a release, because so many things depend on Fedora CoreOS having a functioning Podman. So we're actually looking right now, and I've been talking with Adam Williamson about this, at adding some tests for Podman Desktop to the Podman tests in openQA, because when Podman Desktop has a problem, it might surface a bug that the standard testing they do right now wouldn't catch. Okay, so now the next level of abstraction; these are the layers of abstraction we talked about before. And yes, I would caveat that I'm not an expert, I'm still learning, especially the Kubernetes stuff. So Kubernetes is "let's make a fake supercomputer." I don't know how accurate that is, but that's how I think about it. Basically you're taking each one of those blocks (the host operating system, the container runtime, the containers), multiplying them, and calling each one a node. So you can scale an app up or down based on demand, you get all sorts of different services from Kubernetes, you can cluster different things together, but it's basically building a fake supercomputer out of fake computers that somewhere sit on hardware; you don't know where the hardware is, and you don't think about the hardware. So again, you just think about going up the layer cake; it's the thing on top. Okay, so what does this Kubernetes thing have to do with Podman Desktop and Podman? The idea is you start with a container: you're building an app, you're running Podman Desktop locally, you're playing around. You might not even be building an app; my main use case for Podman and Podman Desktop is running Penpot, which is a UI design tool, and I like running it locally because I have all the files locally, I'm not relying on a hosting provider or anything, and I own my materials. I really like that. I didn't write the app, I'm just running it. When you start getting more complex apps, you might have multiple containers in an app: a container for the front end, a container for the back end, a container for the database. Docker has something called Docker Compose that lets you group containers together into larger applications, but it's not really compatible with Kubernetes. What Podman does is it has the concept of creating a pod, using a format that is compatible with Kubernetes. So there are two options here, the blue box at the top and the purple box at the bottom. If you go the purple way, it's easier to get to Kubernetes, because it's already formatted that way. If you're working with an app that uses Docker Compose (and Penpot would be an example, they ship with a Compose file), Podman and Podman Desktop can convert it to the Kubernetes format, which you can then deploy to Kubernetes. And once you go into Kubernetes, you have different levels of things you can do: OpenShift
Local, minikube, k3s, kind. These are examples of really scaled-down versions of Kubernetes that don't have most of the services you would have in a full-blown cluster, but they let you have a local Kubernetes environment you can play around with, to see how your application would work in that sort of environment. And then you have things like OpenShift, and of course I have OpenShift on here, I work for Red Hat, I mean, come on. But you can do things like managed services: you can push out to Red Hat OpenShift and say, hey Red Hat, I'm just working on this app, I don't want to do the sysadmin thing, I don't want to do cluster admin, you keep it running and I'll pay you. So you have that option, just pay somebody to do that stuff while you focus on the app. Once you build containers in a way that you can deploy out, you get the benefits of being able to scale, and the benefits of "I have this awesome app, I want to focus on the app; the whole scaling and multiple-locations thing, I just don't want to work on the complex stuff, I want that provided to me as a service." If you run it on Kubernetes you get that kind of stuff, and a lot of hosting providers are Kubernetes providers, so there's a rich marketplace where you can find a provider to do that. So that's the whole ecosystem, and Podman Desktop has plugins that will let you push to Kubernetes, and that is the thing I'm going to attempt to demo, but to a local kind. So let me get to... okay. And it's funny: Podman, and I think other container engines too, has this thing where it comes up with a name. I've been having trouble with my machine because of assorted issues, and it came up with the name "hopeful panini," and it actually ended up working when it came up with that name, so I don't know what's up with that. When you start Podman Desktop you get (I have the OpenShift Local plugin loaded, so it talks about that) this is the Podman that I have, this is the dashboard. I actually have a lot of things running on here; I can show you real quick. Where is it... I can go in, so this is a compose, a bunch of different containers that were generated by a compose. It's not podified, because I tried to podify it and there was some network issue and I just didn't want to play with it. But I can open up the front-end container, and you can see: localhost, I'm not cheating, right? This is actually running on the system using Podman Desktop, and I can go in (I have the Fedora character library in here) and just sit here and work in Penpot, and it's all on my local machine. I do this a lot on trips like this, because I can work on stuff when I'm on the plane, which is awesome, and then once I'm off the plane I can upload it somewhere. It's very nice. But anyway, let's get back to our hopeful guy. So, hopeful panini: I'm going to hit play, and that's going to start the container. It's just a... it worked. It's a very simple web thing. It works, yay. Okay, now this is just a container; it's not a pod or anything like that, it's just a plain container. So what I'm going to do is select it, and we have a feature called Podify: it takes the container, generates Kube YAML for it, and makes a pod. So I'm going to hit it here, it's going to create a pod with this, and I'll call it hopeful-panini-pod, which is a nice alliteration. All right, now I have hopeful-panini-pod.
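For reference, the rough command-line equivalent of what the Podify button and the kind deployment do is sketched below. This is only a sketch with made-up names; the container name, cluster context, and file paths are assumptions, not what is on screen in the demo:

```python
#!/usr/bin/env python3
"""Sketch of the CLI path behind the GUI demo: turn a running container into
Kubernetes YAML, then either replay it with Podman or apply it to a local
kind cluster. All names here are placeholders."""
import subprocess

def run(*args: str) -> None:
    print("$", " ".join(args))
    subprocess.run(args, check=True)

# Generate a Kubernetes manifest from an existing container (or pod).
run("podman", "kube", "generate", "hopeful_panini", "-f", "hopeful-panini.yaml")

# Option A: replay that manifest locally with Podman itself.
run("podman", "kube", "play", "hopeful-panini.yaml")

# Option B: apply the same manifest to a local kind cluster
# (kind's default kubeconfig context is usually named "kind-kind").
run("kubectl", "--context", "kind-kind", "apply", "-f", "hopeful-panini.yaml")
run("kubectl", "--context", "kind-kind", "get", "pods")
```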
You can see here (oh sorry, I didn't delete my other demo stuff, that's all right): you have hopeful-panini-pod, and you can see it's running on Podman. If I go to this list and type "hopeful," you'll see what it did: it took the original container, stopped it, made a copy of it, put it inside a pod, and called it "hopeful panini podified." So now if I go to it and open the browser: yeah, it works. Okay, so the last trick I'm going to do, hopefully, is actually deploy it to Kubernetes. Let me show you what I mean by Kubernetes here. There's something called kind, which stands for Kubernetes in Docker (but, you know, it's running on Podman right now), and it's basically a very small version of Kubernetes that is running locally on my machine; you can see where it's running here. So I'm going to deploy to that. Again I'm going to type "hopeful," and I'm very hopeful this will work, so I'm going to call it hopeful-panini-pod, deploy to kind, and it's on the kind cluster. All right, I'm going to hit deploy. Is it working? Yep, oh, it's already running. Okay, so then I go to my list of pods and... where did it go, is it this one? I think it's one of these, but I don't remember what I named it. Anyway, you can see that it's running, with the tag, in the kind cluster, and if I do, I think it's "kubectl get pods," you can see that it's running, just to verify. So yeah, that is just one example of what you can do with Podman Desktop. I have a bunch of stuff in here. One of the things I wanted to demo, which unfortunately I broke: my blog is on DreamHost, and I'm not happy with them right now, and it's a WordPress. So what I've been working on, and I'm hoping to do a tutorial on it soon, is I basically containerized WordPress and made it generate a static site. I'm going to run a WordPress container locally, use the nice WordPress UI to write my blog and upload the images and all that, then generate static HTML from it, and then have a git commit thing, like a post-commit hook, where it pushes out to a GitHub repo, and do my blog that way, as a static site. Because, number one, WordPress gets hacked a lot, and number two, I'm not happy with DreamHost and I don't like being stuck with them. It's a fun project, but I did something to it and it broke and I can't remember how to fix it. But you can do little projects like that, and that's one of the things I have in here. And you can grab images; I can show pulling an image. Let's pull a really good one, I hear it's nice: you can just type "fedora," it'll look for it, pull it down, and add it to your library. Yeah, I don't know, I guess... what else? Does anybody have questions? Was anybody hoping to hear something today that I didn't cover? What do you think about my big F? Somebody's got to have opinions on the F.
You just pronounced it... okay, in this city, with the ports... all right. So: Q-U-A-Y, "quay," is "key," as in K-E-Y, and Paul Karmier will dispute that fact, but I will dispute it right back: there's no Q in Irish. It's "key." It's not "quay dot io," it's "key." So tell all the Red Hatters it's "key," not "quay."
But how do you spell it in Gaeilge? Because there's no Q.
Oh, I don't know what it is in Gaeilge.
So see, it's not really a Q.
But it's a "key," yeah. Yeah, tell Paul Karmier that. Paul, and the founder of Quay.io, keep insisting it's "quay," and it's "key."
Just pointing that out there.
I love the F as well. Can we get other Fs for other parts of the journey in Fedora, like Server?
So where would you see Server fitting in? Server, Cloud, and all the other things: where do you see them fitting in? What do you think?
Definitely not on this F, at least not in the middle, but on the upper one. Or I guess we could start with Server instead of Desktop and go the same way.
That makes sense. Maybe have a separate F. But you can't force the F; this was originally an upside-down L and then it became an F, and you have to let it grow naturally, you can't force it. Anyway, sorry. Come on, somebody has opinions about this. Are we apathetic about it? Are we agreeing with it?
I'm not a developer, I'm a technical writer, but for my Podman use I have a text file with commands and little explanations, so this is certainly more helpful than that.
Yeah. I'm a visual thinker, so I just find it nice to have all my stuff there, because I'll do a little project, then not have time, then come back to it, and having it right there is very nice.
No, and your demo was great, and I'm certainly going to look it up now. Like I said, it's orders of magnitude better than my text file of copy-paste commands.
I should put this up: we have podman-desktop.io and we also have podman.io. I do want to mention that Ashlyn Knocks, who was also one of the lead developers on the new fedoraproject.org, did the work for podman.io as well, and we followed a very similar process. And it's github.com/containers/podman-desktop, so if you want to join in, we're an open community, it's an open source project, we would love to have you, and if you have any feedback we'd love to hear it. Any other questions, ideas, jokes? No? Oh, here we go, Christen.
This might be a little left field, but we'll see. I don't use Kubernetes, but I do use a lot of Podman at home for self-hosting, and obviously I'm just doing that by logging into the machine running Podman. The deployment that you showed... I really like that idea of being able to just push things from the machine I'm working on to the rest of the house, but it's not Kubernetes, right? What are the other deployment options I might have?
So I can't speak to the support we have right now for this in Podman Desktop, but there is a Podman remote that you could use. For example, you could set up a home server (this is what I was telling Matthew today after his session, like, we've got to do this and make this a thing): you can set up CoreOS on a system, it has Podman on it, and from your local system you can do podman remote and push containers to that system. Now, I don't want to say that we can definitely support it in Podman Desktop yet; I think I saw somebody had a hack to do it, but it's definitely something I want to try to get on the roadmap. But generally, just from the command line, you can do that.
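A minimal sketch of that remote setup from the command line, wrapped in Python so it can be run as a script: the hostname, user, and socket path here are made-up examples, and it assumes SSH access to a machine that already runs Podman.

```python
#!/usr/bin/env python3
"""Sketch of 'podman remote' against a home server: register an SSH connection
once, then run ordinary podman commands against it with --remote.
The hostname, user, and socket path below are placeholders, not real systems."""
import subprocess

def run(*args: str) -> None:
    print("$", " ".join(args))
    subprocess.run(args, check=True)

# One-time: add a named connection to the remote Podman socket over SSH.
run("podman", "system", "connection", "add", "homelab",
    "ssh://core@homeserver.lan/run/user/1000/podman/podman.sock")

# After that, --remote with --connection sends the command to that machine.
run("podman", "--remote", "--connection", "homelab", "pull",
    "registry.fedoraproject.org/fedora:latest")
run("podman", "--remote", "--connection", "homelab", "run", "-d",
    "--name", "web", "-p", "8080:80", "docker.io/library/nginx:alpine")
run("podman", "--remote", "--connection", "homelab", "ps")
```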
Okay. And I think it's a good middle ground, you know; if you're doing some kind of home network thing, you don't need to do Kubernetes unless you really want to.
Yeah, but there are definitely options. The other thing is, we have something called Developer Sandbox, Red Hat Developer Sandbox, and it's free, it doesn't cost anything; it's an online Kubernetes cluster. It's not super powerful or anything, I mean, it's a free account, but you can deploy stuff to that too, and Podman Desktop has a plugin for it. So I could have demoed that today; I chose not to because, I don't know, I just wanted to do kind. But that is another option if you just wanted to play with Kubernetes but weren't really sure; it's something where you don't have to deploy your own thing.
So yeah, I wouldn't mind playing with Kubernetes a bit more, but in this case my use case is: can I make it easy to manage the home stuff? That doesn't quite fit there, but it's still useful information.
Yeah, no, that's a fantastic question, thank you. We good? Time for one more? Yeah.
This is actually more of a comment. First of all, thank you, great presentation, very informative. Just about the supercomputer thing: I don't mean to be, like... "supercomputer" is a term I know, and it just got stuck in my head. Maybe "distributed operating system"?
Yes! You know what, that is much better, and if I give this talk again I'm going to use that phrase, because that's really what it is. Awesome, I love it. Okay, no, that's brilliant. All right, thanks everybody; I want to take one more picture too.
So today we'll be talking about Sway, the Sway spin, and Sericea. First of all, I've been in the Linux space for quite some time: a Fedora contributor since 2013, and since 2016 the Sway packager, and a user of it as well. So let's start from the very basics: what's Sway, and why are we talking about it? It's a drop-in replacement for i3wm. "Drop-in" is the project's own definition; that does not mean it actually is drop-in. Nowadays it's much closer to that idea, but it's still not perfectly drop-in in some very edge cases. Why have it at all? Because it's for Wayland. i3 did not want to migrate from X11 to Wayland; I'm not entirely sure if they changed their mind since, but back in 2016 they said: nope, we are X11, we are targeting that platform, we have a lot of code specific to X11, therefore we don't care about this new thing that hopefully will change everything, but maybe it won't. It's based on wlroots, so effectively the Sway developers also wrote a Wayland compositor library. At the time there was the GNOME one (I don't remember the name), but they did not want to use that one because they believed it was bloated and not up to the level they were expecting, so they wrote one from scratch with the idea of creating a very small implementation. At the moment I think it's around 60,000 lines, which is not that much for a Wayland compositor that is fully compliant with all the extensions Wayland has, and had at the time. The first commit was in August 2015, so we are at the eight-year mark for Sway now. Interestingly enough, we have had it packaged in Fedora since February 2016, and the first upstream release was in March 2016, so effectively we can say that Fedora is probably the distribution that has been shipping Sway for the longest period of time. And I have to say I have been using Sway for quite some time, because I packaged it a couple of weeks after I moved to it. It was not very stable at the time; I used to also keep i3 installed because, well, version 0.0 tended to crash. But it grew very well and over time became very, very stable. Now, the history between Fedora and Sway has, I would say, three big parts. The first one is
what happened before Fedora 38, then 38, and then 39 and the future. Before 38, there were multiple people who had their own spins or remixes based on specific versions. I had mine as well, and I think I published it officially: it was based on classic Fedora since 2016, then in 2017 or so I moved it to rpm-ostree. I published it on a git repo but never said too much about it, because it was basically my version of Fedora. Then a couple of years ago I published it with a blog post, telling people a little bit about what I was actually trying to do, and in the same way other people had tried to solve the same issue. So in May 2022 a bunch of us decided to create an official spin, initially for Fedora 37, but that did not happen. It did not happen because we were not very clear with ourselves on exactly what we wanted to put in that spin: whether it should be a very basic spin or a more complete one, who our target user was, and so on. So we used a little more time than the Fedora 37 release cycle allowed, and we ended up with Fedora 38. In Fedora 38 we proposed the Fedora Sway spin change (that's also a link, it does not look like one, but you can find the slides on Sched, I'll upload them there, and you can click a lot of things in the slides). This change was about two artifacts. The first was the creation of the Fedora Sway spin, and to be specific, the biggest difference between Fedora Workstation and the Sway spin is around the packages. The packages we ended up with are, obviously, the Sway packages: sway, swaybg, swayidle, swaylock, which are the very basic Sway packages; dunst for the notification daemon; foot for the terminal (that was kind of a long discussion, because we have a lot of different terminals, but in the end you only have to pick one); slurp and grim to do screenshots (the first one lets you select an area on Wayland, the second one screenshots that area); imv for viewing images; kanshi as a dynamic output configurator; mpv as a media player; Thunar as a file manager; and SDDM, running on X11, as the login manager. We'll talk a little more about that last one, but as you can see from the list, we opted for a fairly complete version of the distribution, in the sense that we provide the majority of tools a person might be using. There is also Firefox; it's not in the list, but it's there as well. And all of them, or all except the last one, are Wayland-specific: for instance, foot is a Wayland terminal emulator, imv the same. So wherever we could choose between an X11-native program and a Wayland-native program, we opted for the Wayland one. The other thing we proposed in the same change was Fedora Sericea. Now, a couple of comments on this. First of all, the name and its pronunciation: I've seen many people struggle with it. It can be pronounced two ways, depending on the kind of Latin you want to use: classical Latin gives you one, ecclesiastical Latin the other. I use the latter because I'm Italian, and in Italian schools we used to study ecclesiastical Latin, not classical, but either is correct. The reason for the name is that Fedora Sericea comes from Terminalia sericea, which is a plant, and plants' names are pronounced in Latin. So why Terminalia sericea? First of all, because "sericea" starts with S, the same way Sway does,
and so it was a nice touch. The second aspect is that Terminalia sericea is a tree, and Sway's logo is a tree, so we found that similarity as well. Also, its common name would be silver leaf, which was one of the naming options for what then became Silverblue, so there was a nice reference there too. And if you look at Terminalia sericea you will find a lot of similarities with the Sway logo: the first image is the Sway logo and the second is the Wikipedia photo of a Terminalia sericea, just mirrored because it was photographed from the other side, but as you can see there are a lot of similarities. So we thought that was a good thing and opted for this name. Also, speaking of logos, we got a logo, and this is thanks to Emma Kinney, who is part of the Fedora design team. She was very patient with us and with all the changes we asked her for, multiple times, so we also have a logo that is very similar to the previous two images, but obviously in the Fedora way. We also worked a little on website things in the Fedora 38 time frame. First of all, we managed to create the spin page. It's a little bit odd, because until Fedora 37 all spins had a page on spins.fedoraproject.org; in Fedora 38, some spins moved, or the new spins started to have pages, on the new website. We went directly down that path, and I think every spin will now move there as well, so we tried to be on the new website from the start just to avoid creating pages twice. Then we also had the Fedora 38 page on the website; I'm not sure how many derivatives or editions have, or had, that page at the time, but for the same reason as before we tried to adopt the new standard right away, because it was also easier for us. And we created some documentation around this. There are still a lot of things to document, mostly because Sericea, and Sway specifically, are Wayland-only, which means that if you have an NVIDIA card, for instance, things might be a little more complex. Hopefully in the next few releases everything will become smoother and even NVIDIA drivers will work perfectly (I really hope so), but in the meantime it can be a little rougher there. So, for Fedora 39 we proposed one change and we inherited one, which is always good. The change we proposed got approved and implemented, and I think it has been live in Rawhide builds since three or four days ago, so it should work; I have not yet tried it on my laptop because I did not want it to break two days before Flock. It's "Sericea Xorg-less." The idea is that, looking at our tree of dependencies, we were still pulling in Xorg, which is not great for Sway, which in theory should be Xorg-less. The reason is that we were using SDDM on X11, and by moving to sddm-wayland-sway we can drop the Xorg dependency completely. So this is what we did, and now we have builds without Xorg, which is also better because there were a bunch of weird bugs on the login page due to some imperfect Xorg configuration that nobody really cared about, since everything else already worked without Xorg. So we also fixed up the user experience there a bit. The other change, which we got but did not develop ourselves (which is always nice, because you get it for free), is
the ostree native container change, which is, I think, driven by Colin Walters and other Fedora people. So now you can actually download Fedora Sericea directly from a container image registry. Now, for Fedora 40 we are starting to have some ideas. These are not written-in-stone kinds of ideas; it's more like, yeah, we have thought about these things, I've discussed them with a couple of people, but nothing is set in stone. The first one is unified core mode. I think in Fedora 39, Silverblue and Kinoite are going to move to unified core mode, which is basically the new suggested way to create rpm-ostree images; the classic way is now deprecated. We are still using the classic way, so it would make sense to move to the non-deprecated way sooner rather than later, and it should not be a huge change, mostly because Silverblue and Kinoite are already doing it in Fedora 39: if they are successful, and I really hope they are, we will just copy whatever they have done in the previous release, so it should be reasonably easy to do. The other thing we have been discussing, even though there is no clear consensus on anything yet, is more Flatpaks, and this is more on the Sericea side than the Sway spin side. The reason is that, for instance, I'm not very happy about having Firefox in the base image. I don't like it being there, also because of how Firefox depends on FFmpeg, a very specific version of FFmpeg, which has some limits, and because of how rpm-ostree behaves with FFmpeg package substitution, which makes everything a little more complex. So I personally hope to be able to move Firefox, and maybe other applications, to Flatpaks. I think it would make more sense, but I also understand it's probably not going to happen for Fedora 40; maybe it will happen later, but it's still something I'm thinking about, and even here I would expect Silverblue to be the first to move many applications to Flatpaks before we do it. We are a very small group of people doing this, we are also active in other parts of Fedora, and we have daily jobs as well, so in the end we tend to trail a little on changes: we see a very interesting change in Silverblue, we see how they implement it and how it goes for them, and if it's successful for them, we do the same thing, so we carry a little less burden. So: are there questions?
I was just wondering, what would you say was your biggest challenge in bringing up a new spin? The change proposal part, I suppose, is planning and sorting out what needs to be done, but in actually implementing it, were there any big challenges you faced?
Yeah. So, first of all, the change itself is mostly three things, effectively three groups of things. One is comps, one is the kickstart for the Sway spin, and the third is the rpm-ostree part of the build. Of the three, the most problematic has been the rpm-ostree part, and the reason is that at the moment we don't have that many rpm-ostree editions of Fedora, so the process is not really well documented and it's also not very straightforward. I think we made changes in three or four different git repos to actually get those images properly built and
tagged and shipped as you would expect them to be. Whereas the kickstart, for instance, was just one file in one repo, done; well, plus one reference to that file, but in the same repo, and that's it. So that part was a little more complex, also because at the moment there are, I think, five artifacts delivered as rpm-ostree, and some of them are built one way and others a different way, so it was not that easy to just say, oh, let's pick how IoT is done and do the same, because IoT is actually slightly different. We ended up picking Kinoite, I think, copying what they were doing, and then trying a couple of times to make things work properly.
Cool. Maybe a slightly simpler question: I have used Sway on Fedora, but not from the spin. Does it ship with some basic key mappings, like to pull up different applications, or just the default ones?
Just the default ones, in the sense that we ship the Sway and Waybar configurations and they are very, very close to the upstream ones. We thought about changing them much and then decided not to, mostly because we did not have good ideas on how to improve them. The feeling was: if we get someone from the Fedora UX team who tells us "you should do this and that," okay, we can do that, but we are engineers, not the best kind of people to make those decisions. And I think the reality is that 99-point-something percent of Sway users will have their own configurations.
That was going to be my follow-up; I think most people just pull their own in.
Yeah, because it's highly personal to each developer. And that is also why, at the beginning, we were thinking about providing a version, mostly on the Sericea rpm-ostree side, without any applications or very close to that. The idea was: since everyone wants a different terminal, a different image viewer, and so on, why should we make a choice? Can't we just ship something where the base part works for everyone, and then everyone builds on top of it? Then we decided to be a little more user-friendly than that, but it's still fairly easy to add your own applications. For instance, imv was a new thing for me; I used to use feh, but in reality they are so similar that I moved to imv fairly quickly.
Okay, thank you.
Thank you. And there are also a bunch of links here, and I will upload the slides on Sched so you can click things.
Oh, I didn't know; I was waiting for somebody to do an introduction. Or do I start? All right, okay, that makes it easier. I mean, ideally people can join from outside, but since this was just approved on Friday and I just announced it today, I think the odds of someone actually watching are relatively low. Like I said, I poked a few specific people and I announced it, but that was a couple of hours ago, so it's still early enough in North America... no rush. All right. So this is going to be incredibly informal. I have a couple of slides, but the idea behind this is mostly to get some like-minded people in the same room, talk about what we're trying to do, and hopefully move on from there. Some of this is also around SIGs: we've been trying a combination of reviving some old SIGs and creating some new ones, because a lot of the old AI and machine learning SIGs were
basically inactive, so we were trying to revive some of that. There has been a renewed effort recently to try to get more of the heterogeneous compute stuff into Fedora; in particular, there's momentum around AMD's ROCm stack. There's a question of whether that should be the same SIG or not. Sorry, because this was just accepted on Friday, I did not have a whole lot of time to prepare. So, like I said, what's going on with ROCm: we've got several packages approved. I don't think we're going to finish before Fedora 39 branches, but I'm always up for being surprised in that area. One of the targets is to have enough of ROCm that we can have accelerated PyTorch running on Fedora (there's a small sketch of what that would look like for a user a little further down), and I think there's some other work being done, particularly on Blender, to try to get Blender's accelerator functionality working with AMD hardware. There are some very early plans around trying to get PyTorch packaged in Fedora; it's early enough that the few things that need to be done are mostly being discussed on the Fedora Discourse instance. One thing I wanted feedback on, if anyone here has any: there's been some discussion on how to do communication. I requested a Matrix channel and an IRC channel; given the number of people who have chatted in them, they're either not known or nobody wanted them in the first place. I'm unclear: does anyone have any thoughts on whether Matrix is even something people still want?
I mean, there's no one in IRC either; it's just dead and has been.
It's being recorded, but I'll call out Red Hatters anyway: there are people at Red Hat who are involved in Fedora, and there's an internal Slack AI channel that people post on, where they could just post this stuff on the public one instead, in either the chat or... I have been gently nudging for a while.
I don't know, I think more gentle nudging would help. But that's for the internal stuff.
I don't mind there being an internal thing, but a lot of that internal stuff could be brought out, and I think that will help; it will make it not feel dead, and then people will feel like it's a place. But other people who are not Red Hatters... I'm not blaming you, if you are a Red Hatter who should be called out here.
Yeah. Other than the two of us, I don't think I see anyone who's been a regular on the AI Slack channel. But yeah, I will now drop my mic and let somebody else respond.
It seems like a lot of silence, so either people don't care a whole lot, or, I mean, there's no violent objection. One of the things (honestly, I didn't come up with it, and I don't remember who did) is Fedora being a do-ocracy: it's only going to happen if you put boots on the ground and actually do it, and if no one stops you, then it gets done.
Yeah, that sounds like something Robyn would say.
Yeah, I guess I remember hearing it from somewhere, but it's been long enough that I don't remember where. So I guess we'll just keep going with that. The other topic, and this is part of a larger discussion, is mailing list versus Discourse. I think the current plan is to get rid of the mailing list that isn't being used and just keep things on Discourse.
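Here is that sketch: a minimal, hypothetical check of what "accelerated PyTorch on Fedora" would look like from the user side, assuming a ROCm-enabled PyTorch build is installed (whether from a future Fedora package or from upstream wheels). It is not something shown in the session.

```python
#!/usr/bin/env python3
"""Hypothetical smoke test for a ROCm-backed PyTorch install.
On ROCm builds, PyTorch reports the HIP runtime via torch.version.hip and
exposes the AMD GPU through the usual torch.cuda API."""
import torch

print("PyTorch:", torch.__version__)
print("HIP runtime:", getattr(torch.version, "hip", None))  # None on CPU/CUDA builds

if torch.cuda.is_available():
    print("Accelerator:", torch.cuda.get_device_name(0))
    x = torch.rand(2048, 2048, device="cuda")  # lands on the AMD GPU via HIP
    print("matmul checksum:", (x @ x).sum().item())
else:
    print("No supported GPU detected; running on CPU only.")
```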
That seems to be the direction things are heading in, and regardless of how anyone feels about mailing lists, there are practicalities that have really pushed me that way: the maintenance burden of Mailman on the infra folks. So, if there are no other thoughts: there seems to be more activity on Discourse than in Matrix, at least for the AI/ML stuff. One of the other things, and I didn't make a slide for it: how many folks are interested more in the AI/ML stuff versus the heterogeneous compute? And anyone the other way around, more interested in the other side, the accelerated Blender kind of stuff, and not just ROCm; things like oneAPI would fit into that as well. There has been some question around whether they should be the same SIG or different SIGs, so I'm just trying to gather information for that.
I don't know who would lead... who's going to...
Well, that's always the question: who's going to put the time into running all of it. I don't think we have an answer for that, but the voices who most wanted to see them separate are not able to be here, so the silence in this room does not necessarily make consensus.
Yeah. Sorry, I just don't really care; I just don't want there to be two small groups that are never able to find a time when they can both get together and never find enough people to do anything, when if everybody made one combined group it would be better.
Yeah. So there's just a little more discussion to be had, because there was one person in particular who wanted to see them separate, and I don't think anyone else cared, so honestly I think it's going to be up to him: if he wants to lead a separate group, I'm not going to stop him. There's also the question of whether there are other things folks are interested in; these are the things that came to my mind first, in terms of both the AI/machine learning side and the heterogeneous compute side. Some of the stuff is closer than the rest: oneAPI is, quite frankly, a ways away from being packageable in Fedora; there's stuff that needs to happen upstream before it could even go into the repos, so that's further out. Like I said, for ROCm, hopefully we will have enough to run PyTorch on by the time Fedora 40 releases, at least at the rate we're going. And PyTorch is an ongoing discussion; there is quite a bit of work to be done there in terms of dependency packaging and in terms of questions around how we can do it and what we can support. Whether we like it or not, NVIDIA is the 800-pound gorilla when it comes to scientific computing, but the way they license their software, it's not something we can distribute in the Fedora repos. So: can we build things in such a way that you could install part of it from, say, RPM Fusion, or have directions so that you could download it and do it yourself? There are a lot of these questions that have not been answered yet; they're kind of on the list of stuff to do for PyTorch.
Do you know how far off their open-source driver is? The driver they released was supposed to be GPU-compute focused.
Yeah, it depends on which part. In my experience working with the NVIDIA stack for AI/ML, the driver is not the problem. The biggest hurdle I've always had is cuDNN, the neural-network-specific stuff that's built on
top of CUDA. That has the most stringent requirements: you have to have some range of versions of GCC, some range of versions of glibc, and all that kind of stuff, and the times I've looked at it, I don't think you could find those in a currently supported Fedora release, if that makes any sense. So the problem with PyTorch and whatnot on NVIDIA, at least in Fedora, is not the driver part; it's CUDA to a certain extent, but mostly the cuDNN, neural-network-specific stuff built on top of CUDA. Does that answer part of your question?
Okay, thank you. Some of this stuff is possible; you could ship it as a container, or a container for somewhere someone had drivers... probably harder to deal with that way.
I mean, there are always ways. There are relatively simple ways for people to install the closed-source drivers on Fedora. Someone responded to one of the Discourse posts recently that there used to be a way to do this: something called nvidia-docker, or docker-nvidia, I don't remember which way around it was, but it was a mechanism through which you could expose the GPU to the container, and then you could have all the proprietary stuff, with all the pinned versions of things you need, inside the container, and run that on a system that just had the NVIDIA binary driver installed. We can't really do Docker in Fedora anymore, so that stopped working, but there is a new project that is more generic and should be able to work with Podman, as far as I know. Someone is working on it, but I don't know all the details, so hopefully that's coming; that would be a way to do it. (There's a rough sketch of what that container approach looks like a little further down.) Are there other things that fit into AI/ML or the heterogeneous compute that people are interested in, other than the stuff I've been talking about? These are the things I know of; just because I don't know about something doesn't mean it doesn't exist.
Wait for the... can you wait for the mic, for the room. Do we have an objective defined for the SIG, or is this the discovery phase of it?
More the discovery phase. Like I said, it's a combination of revival and creating: the old SIGs were dead, and we're trying to create something new that either replaces or repurposes the existing SIGs. So effectively, no, other than what I've talked about: trying to get PyTorch to the point where it can at least be accelerated on Fedora without having to go to binary blobs, starting off, at least, with ROCm.
I think it will really help if we have an objective that can evolve over a period of time. If we have an objective, then you can attract more people; it need not be set in stone, it can evolve. That's one thing. The other thing I was interested in, and this goes back to the offline conversation we had, is the infrastructure for testing some of this stuff. Do you want to bring that up?
Yeah. I mean, there are still a lot of questions. One of the things that has become pretty obvious as we work to package ROCm is that it needs to be tested in an automated way; it's going to be fragile enough that trying to do all of it manually is not going to end well, at least not for my sanity. So there is an open question of how we can test that stuff. There is, technically, an AMD GPU available in Amazon's cloud.
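Here is that rough sketch of the container approach for the NVIDIA stack. It is hypothetical: it assumes the NVIDIA Container Toolkit with its CDI support is installed alongside the binary driver on the host, and the image name is a placeholder, not a recommendation from the session.

```python
#!/usr/bin/env python3
"""Sketch of exposing an NVIDIA GPU to a Podman container via CDI, so the
pinned CUDA/cuDNN userspace lives in the image while only the binary driver
lives on the host. Assumes the NVIDIA Container Toolkit is installed; the
image below is a placeholder, not a real published image."""
import subprocess

def run(*args: str) -> None:
    print("$", " ".join(args))
    subprocess.run(args, check=True)

# One-time (as root): generate a CDI spec describing the host's GPUs.
run("sudo", "nvidia-ctk", "cdi", "generate", "--output=/etc/cdi/nvidia.yaml")

# Then any container can request the GPU as a CDI device.
run("podman", "run", "--rm", "--device", "nvidia.com/gpu=all",
    "registry.example.com/cuda-ml-stack:latest",  # placeholder image
    "nvidia-smi")
```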
They may or may not allow us access to it; I had problems getting to it personally. Like, I was very irritated with how difficult it was, on my personal account, to get access to the instance with the AMD GPUs, and there's still an open question of whether it's even good enough for ROCm, because it's several generations old and isn't on AMD's official support list for ROCm.
I thought I saw one that was much newer.
Just AMD? They have brand new NVIDIA stuff...
No, there was a newer one.
Oh, really? I can't remember which one it was offhand, and I'm too tired. I thought it was the V520 that they have.
No, it was much larger numbers than that, but I don't remember the details.
Okay. But David Duncan is here from Amazon, and if anybody can hook us up, it's him. I mean, if we can do this in the cloud, that would be much easier than anything else I can think of, because otherwise, like I said, there are a lot of open questions: what system would we use to test it (my first thought is openQA), and if we do that, is there room in the racks they use in Virginia for Fedora infra? Is there funding for the machines the GPUs would go in? Can we get the GPUs to test with? Is this even a good route to go? There are a lot of open questions I don't have answers to.
I think the other thing, for connecting this to your list of what other people are interested in: I note that Matt Hicks is interested in the OpenShift Data Science, Kubernetes, GPU-computing thing, so having Fedora provide a good experience that ties in with Kubernetes and the cloud for doing computing would probably be helpful in terms of getting positive attention from our large sponsor. Not that everything's about Red Hat, just that it's a nicely aligned thing right there.
Yeah. Well, speaking purely from a pragmatic point of view (yes, I do work for Red Hat, and I don't know anything more than what we're talking about): if you want to get funding for something, appeal to a sponsor that has money.
Going completely the other direction, a thing I think is interesting is the micro-Torch thing that Peter Almanso was talking about, which is basically building models that will then run on an ESP32 or a really tiny microprocessor. I think having Fedora be an interesting development environment for that would be cool, because I can see cases where I would like it. Like, I would like to be able to recognize whether it's a cat or a human going up and down the stairs at night, and not turn the light on for the cat, because the cat doesn't need it but the humans do. That would be a fun little project that could probably fit in there, with some sensor data. It's probably also of zero interest to Red Hat, but it would be cool.
I mean, there are some other interesting things that Peter brought up in the discussion on Discourse, about some of the OpenCL stuff and trying to get it to run on new hardware, especially some of the aarch64 things. But for the moment, like I said, I think the immediate focus is ROCm, because in terms of stuff we can realistically do in a short period of time, that is it. It would be great if the OpenCL stuff works in the future
for acceleration, it would be great if we can get NVIDIA stuff to work, it would be great if we can get Intel stuff to work, but in terms of the stuff that is probably okay license-wise with Fedora, that probably works, and that we can get done in a reasonable amount of time, we're looking at ROCm for the acceleration and then probably PyTorch on top of that.
About OpenCL: it should already be supported, but I'm not sure how usable that is for AI and acceleration.
I was talking specifically about a conversation on Discourse about an OpenCL back end for PyTorch. There was an experimental one, and there was some other support, but it's not really there: there were performance issues and maintenance issues. So, OpenCL aside, it was that specific back end for PyTorch; I'm sorry I didn't elaborate on what I meant. Well, I think we only have a couple of minutes left. If no one has anything else... yeah, I'm going to pass the microphone over to Jeff.
As far as hardware for ROCm: is it an extensive list of GPUs that are supported?
I wish I had a good answer to that question, so I will answer with what I know. AMD's officially published documentation on the GPUs that support ROCm lists three or four of them, all of which are over two thousand dollars. There are other lists I've seen; I know you don't like that, but that's just their list. I know one of the other people working on packaging just got a 7600, I think it was, one of the lowest-end of the current-generation AMD graphics cards. I know you can run it on other stuff; I don't know what the official position is. Some stuff works, some stuff doesn't. So, long answer: I don't really know, and I wish there was a better list. But from what I understand, AMD fully intends to have it work on the stuff going forward, so the current 7000 series of AMD GPUs, and going forward, I imagine ROCm will work, in addition to what it already works on. Any other questions or comments before we wrap up? Okay. Well, if folks are interested in this, please keep an eye on Discourse, feel free to ask questions in Matrix, and if you have other questions or want to talk about other things, we have a couple more days, so feel free to come talk to me. Other than that, I think that's pretty much it. Thank you all for showing up and adding input.