All right, so if you are switching rooms or anything, we will get started here in just a few moments. So stay here, or continue to be here, for the next talk. Up next we are going to do a round table discussion of packaging issues for modern language ecosystems, and I'm happy today to present Jens Petersen, who is an engineering manager at Red Hat. So I'll hand it over to you. Thank you.

Welcome. So actually the idea was that this should be kind of a round table discussion. I guess we'll have to see how that works out. So this is an interesting talk in the sense that it depends a lot on you what comes out of it. I don't have all the questions or all the answers; I've just prepared a few slides to set some context, so I hope that helps. Also, the title is a little bit off: the scope isn't just packaging issues, it's more about how things operate, and workflows and processes, as well as packaging issues.

So I just wrote down these very rough numbers. You should take them with a grain of salt; I just did some quick queries with the Pagure tooling. These are roughly the sizes of some of the different language ecosystems packaged in Fedora, and they're probably a bit off. I suspect there are probably more Python packages; I think some of the Python packages are a bit inconsistently named. But anyway, you can see that at the top are Perl and Python, then there's Rust and Golang, and then some PHP, Haskell, Ruby, and so on.

This is fine, I guess, but the thing is, all these numbers look very big, while some of these ecosystems are absolutely huge. So in a sense we're only capturing a very small fraction. Obviously we can't package every single little package in Fedora, but there's a pretty big gap there in some sense. Maybe this is good enough; at least it's a good starting point. I think we'll come back to this issue a little later.

You'll also notice that there are some pretty glaring omissions: for example, JavaScript is not there. I think we know that a lot of the Node packages disappeared a few years back from Fedora, for better or worse. Java is another very big ecosystem which is almost completely absent. I mean, obviously we have Java in Fedora, but not much is really packaged.

But yeah, I'm a bit curious about who's here. Is anyone particularly involved in any language SIGs, to a lesser or greater degree?

I'm fairly involved in the Golang one, the Golang SIG.

Okay, great.

I am involved in the Rust SIG, but I used to be more in the past.
Not so much in the last months, but a bit.

Great, thanks. Anyone else?

I'm not involved in any of the SIGs, at least the language SIGs, but I do maintain a number of packages written in different languages, in particular in JavaScript, because I maintain a couple of Firefox extensions. And, well, I don't have the time, but I'd love to have, you know, a best-practices document for packaging JavaScript, like we do for Python and Golang and so on. So, somebody help me.

I'll just echo that. I am not involved in any of the language SIGs, but we have an Infra SIG for infrastructure packages, and I maintain, I don't know, 250 packages or something like that. So any of these kinds of solutions or broader tools for languages could very well apply to that sort of thing, and that's what I'm hoping for: just better ways to maintain stuff or handle it.

Absolutely.

Since everyone is presenting themselves: I'm here since I recently tried to package my first Golang package and realized the complexities. It got sidetracked, so it's on hold, but I'll get back to it.

Yeah, and I'm pretty involved in the Haskell SIG. It's a very small SIG; it's mostly me and one or two other people. All right, let's keep moving, because I want to get more into the discussions.

So I just noted down a few of the current changes for Fedora 39. I think there are probably more things happening, but these are the changes I could see, like Perl and Python. I think we all saw the big rebuild for Python 3.12, which I guess went reasonably well, from my distant perspective. And there's this change to remove all the Golang leaves; I don't know, does anyone know anything about that? And I have a change for Haskell.

I just put this slide in because I thought it's kind of interesting to think about the priorities of SIGs and how they fit in with the Fedora values. One thing I really need in the Fedora Haskell SIG is more people, so: Friends. I've struggled with this over time; we have a real lack of manpower, so it's really hard even to get package reviews done. Maybe some of the newer ideas, like the review swaps on Discourse, could help there. Then there's First: Fedora was maybe one of the first distros to adopt Python 3.12, for example. And Freedom: maybe tightening up the licensing. All right, does anyone have any comments on this?

I was working on compatibility with Python 3.12, and for three different patches I had replies from upstream.

But where did you get Python 3.12 and NumPy working together? NumPy doesn't work with 3.12 yet.

Yeah. An interlude: I wanted to probe people about what kinds of problems they're seeing, what kinds of pain points to think about.

So from the Golang perspective, I see a very big process issue, or tooling issue, whatever you want to call it. The problem is that we have roughly 2,000 packages for the 200 applications that we really care about, and 1,800 packages exist just because of how RPM works and how we decided to package RPM packages in Fedora. If we can change that process, or change the tooling to allow a different process, we can immediately have one-tenth of the packages and therefore way more manpower per package. So in our case, I think it's more a process or tooling issue than a manpower issue.

So one way to approach this would be to automate the management of packages, right? Keep the packages as a base step and build on top of this. Does this sound feasible?

So we do have go2rpm, which really helps in creating RPM packages. There are a couple of issues, though. The first one is that even with go2rpm we don't have a perfect spec file. It's a very good starting point, don't get me wrong, but it's not perfect, so it's not something you can completely automate.
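As an illustration of the workflow being described: go2rpm is given the import path of a Go module and emits a first-draft spec file built around the Fedora Go packaging macros, which the packager then adjusts. A minimal sketch, with a made-up import path; the exact invocation and the generated skeleton vary between go2rpm versions:

    # Generate a first-draft spec file for a Go module (import path is an example)
    go2rpm github.com/example/sometool

    # Roughly the kind of skeleton it produces:
    %global goipath github.com/example/sometool
    Version:        1.2.3
    %gometa

    Name:           %{goname}
    Release:        1%{?dist}
    Summary:        ...
    License:        ...

Even with a generator like this, the license tag, description, and test exclusions usually still need a human pass, which is the point being made next.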
You should still do the review: you should first reread your spec file, fix eventual issues, then apply for the review, do the review, and everything else.

And the other big problem is due to the fact that Go and Rust basically create static binaries. So we use something like 90% of the packages just for the sources, not for any intermediate artifacts. Which means that if we change one source package, one of those library packages, we don't have any real effect on the binaries that have already been built. So we would have to re-kick builds and builds and builds, which we don't do, let's be frank. Which means that a lot of times we do have bugs that should be fixed, but are not fixed in our binaries, because we have not done the rebuild. The big problem is that we have some core libraries with 2,000 packages depending on them, and if we get a new version of such a core library and kick 2,000 builds every few days, I don't think Koji would be happy with that.

But aside from that, there is also a discoverability issue and that kind of thing, as well as conflicting library versions. A lot of times we end up having multiple source packages for the same library, for different versions, just because the applications we care about, which are the only things we really care about, depend on different versions. Obviously we can work around this, and we have done so for the last, I don't know, five or six years, but I don't think it's a very good way of handling this kind of thing. And I think that if we are able to change processes and/or tools, then we can be way more efficient at doing this.

Yeah, that's interesting. Maybe I could add: Haskell is a bit special here, but in Haskell we have kind of a good situation where we actually have an upstream source distribution called Stackage. So what I'm doing in Fedora is basically just pulling down packages from Stackage, so we only ship one version of a library, basically, and all those packages are supposed to be compatible with each other. Well, there are some exceptions; there are some packages in Fedora which are not in Stackage, but largely that works fairly well. I don't know of any other language ecosystems which have such a distribution; maybe there are some.

Which one?

LaTeX. I mean, it's not a language, but they do have a big distribution, so we can repackage the whole thing.
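For context on the Stackage model: upstream Haskell projects pin their whole dependency set to a named snapshot, and every package in that snapshot has exactly one version that is known to build with the others. A sketch of what that looks like in a project's stack.yaml; the snapshot name here is just an example:

    # stack.yaml: pin the entire dependency set to one curated Stackage snapshot
    resolver: lts-21.6

    # Every dependency then resolves to the single version in that snapshot,
    # which is essentially the version set the Fedora Haskell packages track.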
So yeah, but I think the interesting part is that some of those languages have some kinds of issues and others have very different kinds of issues. For instance, Python has issues because obviously there are a billion Python packages, but the good part for them is that everything is compiled at runtime, or at least interpreted, so I think the current model kind of works for Python. Maybe better tooling can help with a bunch of things. Obviously they do have problems with supporting multiple versions of Python and other things like that, but others like Go and Rust have very specific issues due to their static nature, rather than those other kinds of issues.

Right, you mentioned the need to rebuild; like, you need to rebuild your binaries to get updated library fixes in?

Yeah. So the issue is that in Go, for instance, let's say you have a binary that depends on ten libraries. It will be statically compiled, so basically you have one binary that at runtime depends on zero libraries, if not libc and a couple of very basic libraries. That means that those libraries, even though they are split into different packages, because we package every library in a different RPM package, in reality, among the binary RPMs, we only have one binary file, in the leaf package. All the other packages are there just to make Koji or the builder happy; they are not there for the user.

Right. I mean, you can go to the extreme like Rust has done in Fedora, where they only ship source: all the libraries are only available as sources, and then you can build something using those sources. Maybe it's pragmatic, I guess. I can't say I like it. My dream is that users should actually be using these packages, but maybe that's an unrealistic dream these days, I don't know.

A quick comment on that: for users, those packages are completely useless. They are only useful for building other packages.

I mean, it's a complex problem, but I wanted to make a comment earlier about the thing you mentioned, that the initially generated spec file needs adjustments. For us, I think we are very close to having something like 99% of packages generated, either as-is, or generated in a way where the changes made by the packager after the fact can be propagated, either through an explicit patch or through metadata that gets applied when the generation happens. So it's like: you apply some switches when generating the spec file, those switches are saved to a config file, and this lives in dist-git, and when you regenerate the spec file from scratch you don't lose anything. And I think that this is a good model, because it allows automation to happen.
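To make the "switches saved to a config file" idea concrete, here is a purely hypothetical sketch of what such a generator configuration kept in dist-git next to the spec file might look like. The file name and every key below are invented for illustration; this is not the actual rust2rpm or go2rpm format:

    # generator.toml (hypothetical): packager decisions that survive regeneration
    [features]
    enabled = ["tls"]                 # build with this optional feature

    [tests]
    skip = ["network_smoke"]          # tests that cannot run offline in Koji

    [patches]
    apply = ["0001-fix-paths.patch"]  # applied on top of the generated spec

With the decisions stored this way, regenerating the spec for a new upstream version becomes a mechanical step that a bot could perform in a dist-git pull request, which is where the discussion goes next.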
And I can kind of imagine a situation where, if this is automated and can happen fully automatically, we could for example have pull requests in dist-git, in Pagure, that do the whole thing. And then it's a small step to allow anybody to do rebuilds in some fashion that doesn't require provenpackager privileges. Because I think that part of the problem is that doing stuff in Fedora, if you're not a provenpackager, in those ecosystems is just impossible. In other ecosystems it's okay, but there you would need write access to a hundred packages at any given time. And we could solve this. I mean, I'm not sure if this is the solution, but we could adjust our permission models to allow this.

So the way we solved this permission problem was by creating the Golang SIG, and we have also proposed, and it then got accepted, a rule that every Golang package has to have the Golang SIG as committers. So we are working around this. But my frustration with this is that, as you were saying, all those source packages have zero value for the users. Between Rust and Go we are shipping, I don't know, three or four thousand packages that have zero value for the user. We are just cluttering the repos, the metadata of the repos, everything else, just because we want to apply a process that does not fit these kinds of things. Or take the permission workaround: the Golang SIG having permissions on everything is basically provenpackager, because if you have access to, I don't know, 10% of the whole repository, that is basically provenpackager level. Which, by the way, is granted by just being a packager and adding a comment to a ticket; you straight away get access to a couple of thousand packages, which is not ideal either. But all of those are workarounds, because the system does not fit those packages.

So that's why I'm saying that I think we should try to think about a different process that would apply to those languages. At the moment it's Go and Rust, but I foresee many other languages having very similar issues in the future. I think that if we devise ways, or have tools, to introspect packages after the fact, so that we don't have to have all those source packages, but only the leaf ones, then everything becomes easier.

And another issue that I see is that, let's say a new contributor wants to package whatever interesting tool they are using. They might discover that they need to package 50 packages and go through 50 reviews, which then becomes a huge burden on the reviewers.
Obviously it's way easier to review Golang things than maybe other kinds of packages, but it's still a lot of work, just because we want to apply a process that does not fit. So I think that we should really think through the process and see if we can just do binaries with vendored stuff, basically, because that would be, I think, the optimal situation, and then have tools to be able to discover those vendored libraries and then do, for instance, rebuilds and that kind of thing based on that metadata.

I think that in the case of the dependencies you mentioned, there are two parts to the review of the dependencies, right? One is the mechanistic packaging of dependencies so that they get dropped into the buildroot so that you can then use them; I think that's the most visible part. But there is also the review of licensing and, you know, just the general review of the stuff. And the second part is actually useful, and I think that we want to keep it; the first part is just a technical detail that we could get rid of. So I think the question needs to be: how can we keep the quality control over the dependencies that we have right now, without this extra process that is complicating life for people? And I think that we shouldn't concentrate on the, you know, packaging part, because with automation this could be simplified quite a bit. Like, I can imagine a script where, if you're in the Golang SIG, you do some invocation and get 50 different packages in a way that you can review and push at once, and we could make this happen. I think it would be important to figure out how we deal with the licensing issues and the introspection of the dependencies if we change the process.

So, at least in Go, determining and understanding the license is deterministic, in the sense that, for instance, the Go documentation site gives you the license of every package you look up the documentation for. So effectively there are ways to extract this kind of information. And I totally agree with your point: there is value in the process, but I think that we can also get that value outside the process. So for instance, let's say that we add one step to the CI/CD pipeline of Golang packages, a step that checks all the dependencies, all the libraries that get vendored in, and if those are within a certain list of acceptable licenses, then it gets shipped; otherwise it gets blocked. I'm thinking of something like this. And I guess that in Rust you also have ways to discover a license, because I guess rust2rpm, or whatever it is called, does the same thing. So effectively you could have a different step from the Golang one, because obviously you would have a different way to discover the license, but still be able to use the same idea behind it, with different steps for different languages.

Yeah, but I can't say I agree with all of it. Purely automating the license checking is a bit tricky, I don't know. I agree that for many purposes maybe it would work, but there are often cases where there are mistakes in packages, like the wrong license tag has been put in a package, or things like that. So I think it's a little bit thin ice if it's completely automated, but I don't know. I mean, it might be something that could be explored.
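A minimal sketch of the kind of CI gate being proposed, written in Go. It assumes vendored sources under vendor/ and uses a deliberately naive substring heuristic; a real gate would use proper license identification, for example SPDX expression matching:

    // licensegate.go: fail the pipeline if a vendored dependency ships a
    // license file that does not obviously match an allowlist.
    // Illustrative only; the allowlist and heuristic are placeholders.
    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "strings"
    )

    var allowed = []string{"MIT", "Apache License", "BSD"}

    func main() {
        bad := 0
        _ = filepath.Walk("vendor", func(path string, info os.FileInfo, err error) error {
            if err != nil || info.IsDir() {
                return err
            }
            name := strings.ToUpper(info.Name())
            if !strings.HasPrefix(name, "LICENSE") && !strings.HasPrefix(name, "COPYING") {
                return nil
            }
            data, err := os.ReadFile(path)
            if err != nil {
                return err
            }
            ok := false
            for _, a := range allowed {
                if strings.Contains(string(data), a) {
                    ok = true
                    break
                }
            }
            if !ok {
                fmt.Println("unrecognized license:", path)
                bad++
            }
            return nil
        })
        if bad > 0 {
            os.Exit(1) // block the build, as suggested above
        }
    }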
Could you disable the screen lock?

Yeah. I just wanted to add a couple of things. I mean, I agree that in some sense all these library packages are kind of useless, in the sense that users don't care about them, but it still makes me sad in a way, because I feel like as a distro we should be providing binaries. There's actually a lot of waste, I mean in terms of global warming and so on: there's so much wastage in rebuilding and rebuilding and rebuilding binaries. There are things like Nix and so on, and Guix, where there are caches of binaries. So I don't know; I feel ideally we should actually be making those binaries useful, so users would use them. That would be the ideal. Maybe it's ambitious or unrealistic, I don't know, but that would be my desire: to actually have meaningful binaries that users could use.

Okay, so if I build something in Haskell using the libraries which are packaged, I can build it really fast, whereas if I build it with the upstream tools or whatever, then it takes much longer. You look puzzled.

Why doesn't that apply? It does not apply to Go, at least, and I believe Rust is the same way. Due to how the Go compiler works, it will always try to compile from sources. You cannot pre-ship pre-built artifacts that will be reused; it will always start from all the sources of your application and all dependencies, and dependencies of dependencies, and so forth. And because it's doing this big-bang compilation, it will do optimization across code paths and exclude all the parts of the libraries that will not be hit by your application, and so on. So effectively, due to how the compiler works, what you are describing does not apply to Go. Now, we can argue about the dynamics of Go and the compiler itself, but that is how the language works. So either we fork the Go compiler, which I don't think we want to do, or we accept that Go does not work that way.

I don't know, I'm not so experienced with Go or Rust. So even Rust doesn't cache builds locally?

No, it's exactly like this: you have sources and you build from scratch, doing optimization of the whole thing at once. Essentially link-time optimization.

So again, in Haskell there are two tools, cabal and stack, and both of them cache builds, basically a build cache. So if you build some library and then you build it again, it will use the same binaries, for instance to link two separate packages. Anyway.
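One concrete consequence of this model on the Go side is that the exact dependency versions are baked into every binary at build time, which is why a fixed library package changes nothing until each dependent binary is rebuilt. A small sketch; the standard runtime/debug API shown here is real, but the program itself is just for illustration:

    // showdeps.go: print the module versions embedded in this binary when it
    // was built. Updating a library RPM on disk has no effect on this output;
    // only rebuilding the binary does.
    package main

    import (
        "fmt"
        "runtime/debug"
    )

    func main() {
        info, ok := debug.ReadBuildInfo()
        if !ok {
            fmt.Println("no build info (not built with module support)")
            return
        }
        for _, dep := range info.Deps {
            fmt.Println(dep.Path, dep.Version)
        }
    }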
Yeah, well, this is kind of what we've already been discussing in some sense, but there are various issues around this. Okay, this was also on a different slide, but most users would tend to use the upstream binaries, even for Rust, with cargo and so on. So, I don't know; from a distro point of view that seems a bit problematic. I mean, it's sort of the next logical step: if you're resigned to not providing binaries, then why even bother using the distro compiler? So I still feel it's a slippery slope in some sense. Because we want to have some things packaged in Fedora, maybe we have to do the minimal work. Well, I agree with you completely that streamlining the processes, maybe even getting it down to just a license check or something like that, would be great, because at the moment it's still quite a big hurdle to get new packages in. Any thoughts on this?

So, I mean, if users really prefer the upstream binaries, then this is probably because they are better for the users. And I think that if we are providing binaries which we think are good, but actually, I don't know, for example they don't have certain features enabled because we haven't packaged some dependency, then it doesn't benefit anybody; the users are getting the worse experience if they use the package. I mean, for me, when I can use a package, it's great, because first of all I have a reliable delivery method, second I have a cleanup method, and I get updates. It makes sense to do packages when the packages are at least as good as the upstream stuff. So in particular for us, if we do the whole process correctly, the code delivered by the distribution is going to be exactly the same as the upstream one, right, because it's the same compiler and the same sources. It's a bit different in traditional systems, where you have compilation flags and link flags and maybe some patches and a different version of the compiler, and all this means that by the end, when you get to an end-user program that links to 200 packages, the way that you built each of those packages matters, and then the result can be quite different. Here, in particular for us, you just end up with something that's maybe not binary-identical but functionally should be exactly the same.

That's not quite what I meant: people are using the upstream toolchain, not the end packages.

Yeah, but so, you're using the upstream toolchain. So let's say you are a user and you get a new Fedora, and then: okay, now I want to use this program, I have to install cargo and do a cargo build, and then a week later I have to remember to update it. This is a terrible user experience, right? The good user experience is that every few days you click update and then everything updates, and you want to have the same thing here. For me, this is the value provided by the distribution, and I think that's what we should try to deliver: stuff compiled the way upstream compiles it, or maybe slightly better, just nicely delivered as packages, so that you get the automation.

We should not miss the distinction between Go and Rust and, for example, other systems like Python, where people do a root pip install of something and break DNF. That is the prime example of where it goes wrong, because you actually share the dependencies on the system and can break one program because you want to install another program, which is not the case when you do static linking. So it gives a bit of a different aspect to this particular problem. That's my view.

But I think it's actually a good example, because we had this issue that users were doing pip install and breaking their systems, and we actually fixed it at the root, right?
Because now pip install does not break the system, and we changed the way we do things so that it's nicer for the users to use the upstream packaging if they want to. And I think that we need to do the same in other cases.

Yeah, I totally agree. And if we pick Go, for instance: the Fedora 39 change where we dropped a bunch of leaf packages, those were the packages tracked as Golang leaves, basically all the source packages that were not strictly required to compile our binaries. Basically, we are saying we don't care, because the users will not care about all those packages. The reason why we have two thousand Golang packages in Fedora is just to have 200 binaries. That's the only thing we care about. We care about Kubernetes, we care about etcd, gopass, and the others. We do not care about golang-x-sys, because the reality is, and Go and Rust are the same here, that those new languages are designed in a very different way than C and the others were. Back then it was basically: we now have a compiler, we now have a standard library, we now have stuff; let's give the user the ownership of putting everything together. And in that world, distributions were great, because they solved that issue for the user; we were able, as a distribution, to help the user. In these new tools, the compiler basically also downloads all the dependencies automatically and compiles them for you; the distribution has no space there, right? So either we work with them and change the way they work, or we adapt and accept the fact that users will not care about those packages.

What you said is of course true, but I would not agree that we shouldn't care about all the library packages that are dependencies for the few hundred packages that we actually care about, because that's what people use. I still think there is some added value that the distribution can give. For example, you know, with automation this becomes tricky; there should still be some gating.

Yeah, that's one of the things: we've got gating, so any updates that break other stuff should get caught.

Okay, but, you know, when there is an upstream update... Actually, I'm not that familiar with the Golang or Rust ecosystems, but I know you can pin dependencies to a particular version. But does that always happen, or can you just say: I depend on version 1.5 up to whatever, but not newer than 2.0?

Right, yeah, you can specify them exactly, but also with a range.

Yeah, sorry, so in Go you have the specific version pinned, like 1.7.4, and then we do a little bit of trickery to make it work with slightly different versions, otherwise everything would break. But the way it works upstream, outside Fedora, it would be with very specific versions pinned.
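The contrast under discussion, side by side: a go.mod pins every dependency to one exact version, while a Cargo.toml requirement is a semver range by default. Both snippets use made-up names purely for illustration:

    // go.mod: each dependency is pinned to a single exact version
    module example.com/mytool

    go 1.21

    require (
        github.com/example/libfoo v1.7.4
        github.com/example/libbar v0.3.2
    )

    # Cargo.toml: "1.5" is a range, equivalent to >=1.5.0, <2.0.0
    [dependencies]
    libfoo = "1.5"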
Yeah, so what we're doing in Fedora is actually a bit different from what upstream is doing, but actually upstream benefits from what we are doing. I think they do, because they know that if we encounter problems, things will break when they do an update, right? So they need to react. I know you want to reply, but another benefit is... okay, I'll let you go ahead.

Thank you. So, yes and no. First, because of how the compilation works upstream: let's say Hugo, for instance, Hugo is written in Go, they deliver a binary, and upstream will only support issues with their binary. If you go there and say, oh, I have this specific issue, they are like: well, it does not apply to the binary with the right version of that library. So if you are on a wrong library version, that's your problem. And they have all their CI/CD for exactly those very tight versions. We are the ones who lose in this system: upstream it's very tight and everything works; we loosen it, we break stuff, and it's our problem now. And the second issue is that often we don't even have libraries at a version higher than upstream's. At least in Go, a lot of upstreams simply, once every week, update every single library they have, so they are way more bleeding edge than we are. We are just lagging behind. We are doing a huge amount of work for, I would argue, very limited benefit. If it was zero cost, okay, fine, whatever, who cares; but since it has an impact, it has a cost, it has a toll in tons of hours of contributors, who then get demotivated by this. Do we really want to have this?

Okay, so you're obviously working with different upstreams than I am, because, well, I maintain a very limited set of Golang packages, but when I get notifications from release-monitoring.org and I check what the changes were, I don't always see the dependencies updated; they're usually pinned to whatever version they were at when they were added, for a very long time. So I think that's probably the disconnect between what you're seeing and what I am seeing, and what I think the Fedora value is here.

So maybe I'll continue a bit, and we can see; there are only 10 minutes left. There are a few other topics I wanted to cover; I don't know how well they fit into the context of our current discussion, but one is about the packaging workflow. We thought a bit about the high barrier to entry for packagers. So I'm wondering, what would it take to really streamline this process of introducing new dependencies? It would probably require some significant changes to the package review process, or maybe it should be done in a SIG-specific way, because I'm not sure we can open up the floodgates to any package just coming into Fedora purely on the basis of license. So maybe SIGs would have to be involved in some kind of process around it.

I think that surely we can delegate stuff to SIGs; that could be an idea, though there are many SIGs that will have the same issue. Personally, the way I think it would work best is if the Fedora project says: look, we have analyzed multiple options and we have seen that there are, let's say, two, three, four, five, whatever number of possible models that can apply, and your SIG can choose which one of those flows best fits your model. So that we don't have 20 different SIGs where everyone does things differently, or slightly differently, but we still have a bit more freedom at the SIG level. Another thing that I think we should really fix is that a SIG should be able to be the owner of a package, not only individual contributors.

So I think that, essentially, we are using the dist-git metadata as the list of allowed dependencies for packages that are compiled from source, including all dependencies, right? So, for Rust and Golang. And I think that we could switch to a model where the same information is kept in a different way. It would require a discussion of how to do it, but essentially I can imagine some model where we don't actually package the dependencies; we just say, okay, you have the dependency foo, and we allow, I don't know, either all versions of foo, or versions of foo between this and that. And at compilation time, the package that specifies that it wants foo at a specific version, or in a specific range, gets some version of the dependency delivered, and the compilation happens in exactly the same way it happens right now, because right now you also get some version of the dependency delivered and you compile with that. If we do this through some different mechanism than dist-git, then I think many things will become simpler. In particular, pruning obsolete packages could mean that we don't prune stuff: things just stop being used, and they don't bother anybody. That's one thing. And this also solves the problem of different packages requiring slightly different, or even not slightly but majorly different, versions of the dependencies. I mean, you would really simplify the life of those ecosystems if you could just use what the upstream says by default, and maybe allow overriding this.

Very good ideas. You know, I like the way you are going with this.

Yeah, I mean, if we really can have something like this in the future, that would be pretty exciting, and it would open a lot of possibilities, I think.
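A purely hypothetical sketch of what such a per-package allowed-dependency manifest could look like if it lived in dist-git instead of being expressed as thousands of library source packages. Every file name, field, and dependency below is invented for illustration:

    # dependencies.yaml (hypothetical)
    package: gopass
    dependencies:
      - name: github.com/example/libfoo
        versions: ">=1.5, <2.0"    # a range, reviewed once for licensing
      - name: github.com/example/libbar
        versions: "*"              # any upstream version is acceptable
    licenses:
      allowed: [MIT, Apache-2.0, BSD-3-Clause]

At build time the tooling would fetch whatever version satisfies the constraints, so compilation would proceed exactly as it does today, just without a dist-git repository and a review for every library.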
I mean two three four five whatever number of possible models that can apply You seek can choose which one of those flows better fits your model So that we don't have 20 different six that everyone does different things or slightly different things But still we do have a little bit more freedom on a sick perspective Another thing that I think we should really fix is that a sick should be able to be the owner of the package Not individual contributor not only individual contributors So I think that Essentially, we are using the Metadata as a the list of allowed dependencies for for for packages that are compiled From source including all dependencies right so so for us and go long and I think that We could switch to a model where the same information is kept in a different way I think it would require like a discussion of How to do it, but essentially I can imagine some model where we have a list of We don't actually package the dependencies we just say okay well, you have the dependency for and we just allow I don't know either all versions of for or versions of for between this and that and At compilation time The package that specifies that it wants for was a specific version on a specific range gets some some Version of the dependency delivered and the compilation happens in exactly the same way it happens right now because Right you get some version of the dependency delivered and you compare that so If we do this there are some different mechanism then through the through this get then I think that Many things will become simpler in in particular like pruning obsolete packages could mean that we don't prune stuff They just stop being used and they don't bother anybody because there is That's one thing and This also solves the problem of Different packages requiring slightly different versions or even not slightly but majorly different versions of the dependencies it's I mean you really simplify the life of those ecosystems if you could just Use what the upstream says by default maybe allow overriding this Very good ideas, you know, I like I like the way you are going with this Yeah, I mean if we really can have something of this in the future that would be pretty exciting and that would open a lot of possibilities I think I Think we're running a bit short on time, but another topic I sort of want to touch. Well, I don't know I'm not sure if this is a good topic but about rpm macros and It's pretty hard to change rpm now because it's so Opicals and Operating system, but I also feel that like the rpm micro language is pretty awful in many ways, but I guess it's sort of Worst is better I kind of wish I was a more modern declarative language, which yeah, but I guess I'm dreaming but Maybe it's not really a well, I don't know. I don't really need this to to move forward. I think the What we're just talking about now is pretty the biggest Problem that needs to be solved The things about like tooling and automation Automation Yeah, I was sort of hoping we could have some knowledge sharing of different tooling and automation around packaging but We're also running short of time. 
But I really feel like we are trying to patch something just to make it work. Dynamic BuildRequires, I'm not saying they invalidate everything, because they don't, but they're such a workaround around a process that is very static. The RPM process is very static by nature, and to make it kind of workable we put dynamic stuff into it so that it becomes kind of acceptable. Yes, it works, but we have changed the nature of the process itself. So at this point I think we should really think about what was proposed, like dist-git builds or that kind of thing, so that basically the flow only covers the parts that we really care about.

Yeah, it does feel like a bit of a hack, I don't know. I'm not against it, as it works, but yeah.

The other thing I noticed is the misalignment between upstream and distro. Well, I guess we've sort of talked about it a bit, but for example, I think Haskell is surprisingly well matched, maybe because some distro packaging people were involved in the design of the packaging system originally, so it maps pretty well. Whereas, I don't know, some other languages seem more tricky; maybe Python is almost the worst in some ways, I'm not sure.

Anyway, we should probably start wrapping up. I had a few other notes here. I think someone brought up this idea about cascading rebuilds, automatic rebuilding; I think that's more or less what Nix does. Another thing is that I'm seeing a lot of new languages which almost can't be packaged, because they use such weird packaging; it's a real problem. It seems like for a lot of new projects, being packaged into a distro is an afterthought or something, which makes it a bit sad.

One of the things that's interesting, I think, is cross-distro collaboration. In Haskell we actually had some collaboration with openSUSE, which has been useful. We used to share a bit more tooling; now we've slightly diverged; they're actually more bleeding edge.

So I think that's more or less what I was going to cover, but does anyone have any last ideas or thoughts, or other things that we should think about in the future?

So, I mean, we didn't touch on this at all, but I think that we need to reinvigorate the packaging guidelines and the packager documentation, on the wiki and in the docs. Some parts are being updated regularly, but many parts are just full of obsolete stuff. You had a bunch of tools on one of the previous slides, and, like, if you're a new packager, how would you find out about those tools, right? I'm not sure why this has happened, but we should really put work into updating the docs, to just have the current stuff and get rid of the old stuff, or put it somewhere on the side where it doesn't confuse new packagers.

Hello there. Can I just add something on that? A good way to get engagement from the community on this: if you spot problems that need to be updated, can you create a ticket and mark it as a good first bug or something?
It might encourage people that don't know anything about this to jump in and try to fix it.

Well, I mean, you don't need to do that. You open any page and start reading it, and then you see: okay, this is formatted incorrectly, this is wrong. I mean, if you do packaging, on essentially any page you will find stuff like that. I could open, I don't know, a hundred tickets if I wanted to; I don't think it would make sense, I would just overwhelm the pipeline. Also, I do believe that changes to the RPM packaging guidelines actually have to be approved by FESCo, so I'm not entirely sure that would be a good first change for someone. I mean, if it's documentation where it's easy to get your change merged, okay, but if it's stuff that has to go through FESCo, then you have to wait on FPC, okay, but still.

All right. Yeah, well, thank you very much. It was a good discussion; I enjoyed the session. Thanks for coming.