Okay, I was a bit scared when I got it and we would be in the great room, but it looks like more people are suited for smaller rooms, so it's fine. So we're talking about RPM and the changes of the last couple of years. Is anyone still packaging packages from 1999? Okay. Let's be real, it's a catchy title, but we're not really going back there. But for those who still remember: in 1999, RPM was still version three. It was still named Red Hat Package Manager; it got renamed basically a year after. The hot new distribution was Red Hat Linux 6, and I was learning my first lines of Python by debugging Anaconda and trying to make kickstart work on pre-existing partitions. So it's been a while. It even pre-dates RHEL, which came later; you could only get enterprise support for Red Hat Linux at that point. Anyone packaging back then already? One? Two? Yeah, a few. To be honest, I only got into RPM development like a decade later. But we did package back then.

So the reason I'm doing this talk is that it's very difficult to actually get a grip on how things change in RPM. And the reason is there are different kinds of changes that happen, and many of them happen over a relatively long time. Yes, from time to time there's this new shiny feature that pops up and is done within a few weeks or a month and then released. But often there are huge, long arcs of development where things come together piece by piece by piece. And so even if you read the release notes, it's hard to get the full story of what's really happening. So I've been going through all those release notes, which are pretty long. Who's reading those? Hey, there are people that actually read them. So how do those feel? Is this entertaining? There are different opinions on this. But they're pretty long, actually. And even reading the latest release notes often doesn't give the full picture of what's happening. One of the reasons is that many features can only be done step by step by step: you need something changed here, then something changed there, and so the change propagates through the code. And there are other things that are more like themes, where you fix something and then need to fix it everywhere; we will come to those.

Another reason why many things take a long time is that we don't want to break anything, especially not all those spec files that are out there, which are a few. And so often a feature gets introduced, then it gets turned into a warning, then it's left sitting there for a while, and at some point later it's turned into an error, or into a more stern warning first. So many things, even if they're easy to implement, have a long life cycle until they're actually rolled out. And even once they're enabled, it often takes a long time for packages to pick them up. So implementing a new shiny feature is not worth anything unless it actually makes its way out into distributions and gets used. There are a couple of examples I will just not talk about today that take years to get anywhere, even if it's a small feature of only a handful of lines of code.

One of the things that has been going on throughout the last years is tightening the screws. There are a lot of things in RPM which were not checked that well, so people could do things in their spec files that they really shouldn't, like using weird, non-printable characters in Provides names, or having syntax errors in macros and stuff like this. And RPM was really not that great at checking those.
And so there's a continuous stream of little fixes that tightens this down and turns things into a warning or later into an error. There's like half a dozen macros now that allow people to basically go back to the old behavior for some things, like the check for empty file lists and stuff like this. But a lot of things we have done mostly unnoticed, where we just said: well, that's an error now. So it's worth taking a look into your build logs from time to time to see what has actually been switched on over the years.

One of the bigger things here was probably the encoding change. Traditionally RPM allowed basically random bytes for many things, like names and whatever. And this has always been an issue, especially with YUM, which tries to show this within Python and turn it into a Unicode string. So there have been multiple steps to actually force people to use UTF-8 only. A couple of years ago we made it a warning whenever you had such data within the header, which means any string data in RPM, basically. And there's also a macro that we added back then, and that just got switched to one in Fedora, I think; Panu probably knows more precisely. So it's the default now: whenever you put something in there that's not UTF-8, you actually get an error.

One reason why this took so long is the question of what to actually do when you hit such data. Of course you don't want to break packages that are already built and already have non-UTF-8 stuff in them. And this was an issue until just recently. There's now a decoding scheme where you can have arbitrary data decoded into Unicode, so we basically have bytes in Unicode now: if something's messed up, it gets translated into a special byte character range, and you can encode it back so you get the same bytes again. That only got added to Python 3 fairly recently, so that's what we're doing now. It would have been helpful much earlier, but often you're bound to the environment and you can't just do it.

Another long story that I will only go over briefly is large file support, which is probably not that interesting for most packages, I hope. Is anyone packaging files of more than four gigabytes? Okay, hi, hi Facebook crowd. That's also something that took, I think, three or four releases to actually get done. Because first you realize, well, all those integers should actually be unsigned, and then you realize they are not. So you go through all the code and change all the types. Then you say, well, they should actually not be 32-bit, they should be 64-bit, and you change all the types again. Then you add the API to actually hand that around. Then you add new tags to actually be able to show this to a user, to make a long story short. So as a packager you can just package large files now; you basically have to do nothing. But you have to do something if you're using RPM or DNF or YUM as a tool: these are the new tags. They are automatically generated even for packages that don't have large files, so please use those. Otherwise you will run into problems as soon as you encounter packages with large files, and there are people who actually create them, for some reason.

Another thing that was necessary was creating a new payload format for that, because RPM uses CPIO.
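The tag names themselves come from a slide that is not in the transcript; presumably they are the 64-bit LONG* variants that current RPM exposes, so a query would look roughly like this (the package name is made up):

    # prefer the 64-bit tags over SIZE / FILESIZES when querying
    rpm -qp --qf '%{LONGSIZE}\n' huge-dataset-1.0-1.x86_64.rpm
    rpm -qp --qf '[%{FILENAMES} %{LONGFILESIZES}\n]' huge-dataset-1.0-1.x86_64.rpm

These are filled in even for packages without any large files, so tooling can use them unconditionally.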
So I was talking to CPIO upstream and said: you know, it's the current year, it would be really cool if CPIO would support large files. What about extending the size field, having a new format? There are multiple formats in CPIO already, so it's not a big deal. And he said: who are you? Aren't you one of those RPM guys? Aren't you the only ones still using CPIO? Go away. So we said: okay, okay, okay, we will just throw all the sizes out of the format and use the metadata we have in the header anyway. But as a result, rpm2cpio no longer works for large files, because there is no CPIO format that can actually hold large files. There's a new tool now that creates tar archives instead, but if you run into problems, that's the reason why.

Now we come to things that actually affect the spec files in a meaningful way. In 4.11 we added %autosetup and %autopatch, which basically remove the need to have an extra line for applying each patch. It's based on making the sources and the patches available in Lua, and then there are macros that use that to actually apply the patches. Anyone not using this right now? Who still has patch lines with numbers in their spec files? Look into this. We did it in 4.11, so it's been quite a while; you people have no excuses.

A few months ago we looked at this again and thought: well, those patch numbers we have in the spec files are still pretty annoying. They create all those merge conflicts whenever you work with git; it would be much easier if you didn't need to change the numbers all the time. And then someone reminded us that we are actually using computers, and computers can count: if you have a number and add one, you get another number that's one bigger. So it took us like 20 years to figure that out. And so in the last release you can just use Patch without a number and RPM does the counting. After we finished that, we thought: that's pretty nice and pretty neat, you don't need a number, but what is this Patch tag doing in front? Why do we need that? And we realized: yeah, we don't need it. You can just have a section in the spec file where you put the patches in, one per line. Sometimes it's hard to come up with the easy solutions; I don't know, if anyone has an idea how to get better at this, I have no clue. But it's in the current release, so you can actually use it and just put them in directly. We hope that will simplify working with git a lot.

Then another huge arc: scriptlets. When I started with RPM, I was doing a lot of work on performance tuning. There were a lot of bottlenecks in RPM that weren't working that great. I won't go into the gory details, but after we had done quite some work, we ran benchmarks and it turned out: yeah, RPM is not that slow now, but all those scriptlets take forever. Back then basically every package spawned at least one scriptlet. So if you have an update with a couple of hundred packages, many of them small, then spawning a new shell for each, doing some weird stuff, updating things, was a problem. So there's a huge effort to get rid of scriptlets. Who still has scriptlets in their packages? Yeah, it's hard to get rid of them, but we got rid of a lot of them. So please, if you can, avoid them. Try to do things at build time, try to do them in startup scripts, so you at least don't slow down the updates.
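To make the patch handling changes above concrete, here is a minimal sketch of what this can look like in a spec file (the file names are made up; unnumbered Patch tags and the %patchlist section need a recent RPM, around 4.15):

    # unnumbered Patch tags: RPM numbers them by itself
    Patch: 0001-fix-build-on-arm.patch
    Patch: 0002-disable-network-tests.patch

    %prep
    # unpack %{SOURCE0} and apply all patches with -p1 in one go
    %autosetup -p1

Alternatively, a plain %patchlist section takes one patch file name per line, with no tag in front at all.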
And for actually cleaning most of the complicated stuff away, we introduced file triggers; I think that was done in 4.13-ish. That was also something that took quite a while to get right, it's at least the second attempt at this. And it's pretty simple, basically: you specify a pattern of files and say, run this script for those files. The matching file names get passed in via standard input. So you can do stuff like updating all your caches, all your catalogs, whatever. This has the huge benefit that such scripts are no longer copied into basically all packages. We had ldconfig calls in basically all packages and stuff like this, and they are mostly gone in Fedora as far as I know, unless someone tells me I'm wrong. So if you have use cases where a lot of files need treatment but they are dispersed over a large number of packages, you can use this to centralize that code: you have only one script and don't have to copy it all over. Also, you can run it post-transaction, which we also have as a regular scriptlet now, so you can run the script once after the whole installation and not after every package, and that helps a lot. Any questions on this? Because it is kind of important: if you have scriptlets, try to get rid of them. It helps a lot and makes the packages a lot simpler.

Another feature we added was weak dependencies, which kind of stands out a lot, but I think it's actually not as useful as it looks at first glance. There are four of them, and they come in two varieties: the strong ones, which are basically used like Requires that are allowed to be broken, and the weak ones, which are currently ignored by our tooling. The original idea when they were introduced was to have UIs that would say: well, those other packages look interesting too, maybe you want to install them. But I've not yet seen anyone actually make use of that. They are currently used for choosing the right package if there are alternatives, so it might make sense to use the weak ones if there are multiple alternatives and your package has an opinion on which one it likes most. One place where they are useful is the reverse variants, where you can attach your package to another package. That's not that interesting at the distribution level, because within the distribution we are all friends, we all get along, so it's easier to just do it the other way around and go to the main package and say: you need to require my package. But if you're a third-party repository, or if you build your own packages, that is interesting: you can have extensions or plug-ins that attach themselves to a package of the main distribution. So they're pretty useful for this.

Where it gets more interesting is Boolean dependencies. Who has actually used those in one of their packages? There are a few, but not that many. It looks a bit complicated, but it basically works like the normal Requires and Provides: you can have a Boolean expression which follows the normal rules. If you want both packages required, you just make an "and", or you can do an "or". In theory you can have expressions as complicated as you want; in practice most cases will not involve more than three or four packages, and one of them is the package the expression is actually in, so most of them are just two-argument expressions. Where they're interesting is whenever you have a package that you want to sit in between two other packages.
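A minimal sketch of a file trigger as described above (the directory and the command are made up; the real scriptlet names are %filetriggerin, %transfiletriggerin and friends):

    # run once per transaction whenever a package installs files
    # under /usr/share/myapp/plugins; the matching file names
    # arrive on standard input
    %transfiletriggerin -- /usr/share/myapp/plugins
    /usr/bin/myapp-rebuild-plugin-cache || :

The owning package declares this once, and every other package that drops files into that directory gets the cache rebuilt without carrying its own scriptlet.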
So, to give an example of that: imagine you have multiple back ends for a database, but you don't want to ship them unconditionally; you only want a back-end package installed if the corresponding database is actually there. And you of course don't want to require all the databases you might support. But whenever one of the databases is installed, you want the matching back-end package. It's also used for fonts and for language packages: you can have packages that represent support for a given language, and then you have maybe some application with split-out packages for different languages. With those Boolean dependencies you can basically say: if the application is there and if the language support is installed, then also install the language pack. So there's a lot more automation in language handling and in package selection that can be done with this and that cannot be done with normal Provides and Requires alone. Much of this is actually generated automatically, so people don't even need to do it by hand; it's basically done by scripts that put those in.

When we had this finished, we were very proud. We were very proud, but Michael Schroeder said: well, there's this one case where you have a range, and you want to make sure that the package matching both halves is actually the same package. So if you say you need something bigger than one and smaller than three, you don't want a 0.1 and a 5.8 at the same time. And so after a while, and after people complained that stuff like this kept happening, we added another operator, "with" (and "without"), which basically requires all packages that match those sub-terms to be the same package. That is still not 100% correct, but it's good enough unless you have all kinds of weird Provides within that one package; most cases are just fine. So if you have dependencies on version ranges, the "with" operator is the go-to thing. It's currently used, I think, in the Rust packaging tooling, which automatically creates those from the Rust dependencies, which do have such ranged requirements; RPM could not express that previously. Has anyone run into something like this, or is this pretty fringe unless you're a Rust developer? Okay, in Ruby too; all those modern languages have weird stuff and need that. It's fine, it's fine. We get it, it's important. We're happy to help.

There's another thing that's probably hidden from most packagers, and that's the changes to the dependency generators. There's a new interface since 4.11 where you can have a small file that declares what files you're actually interested in. RPM runs libmagic, so basically file(1), on all files that are shipped with the packages, so we know their type. And you can match on those types, or on the location, or on the file attributes, to select the files you're interested in for your dependency generator. Then you provide a script that gets handed the file names and writes out whatever dependencies you want on standard output. So it's pretty easy to do. There has been some work here: there's a new Python dependency generator which in the past only said, well, it's a Python script, we need Python, and which now actually goes through and figures out which modules you need and stuff like this. For Rust, people have done quite some work. There are a couple more, actually. So I encourage everyone who does a lot of packages in a specific domain to think about this; you can simplify your packages a lot by adding one of those dependency generators. And if you need help, talk to us.
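As a sketch of the rich dependency syntax described above (all package names made up): the normal comparison operators work inside the parentheses, "with" forces both halves of a range onto the same package, and the reverse weak dependencies can carry a Boolean expression too, which is how the language pack case works.

    # both halves must be satisfied by the same libfoo
    Requires: (libfoo >= 1.0 with libfoo < 3.0)

    # pull in this language pack when both the application
    # and the German language support are installed
    Supplements: (myapp and langpacks-de)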
For now, this cannot be done within the package itself, but has to be shipped with rpm-build or in a separate package that you BuildRequire. But it's really not that complicated to do. It's just an executable, and you can use whatever language you're working in. So I encourage everyone to look into this, because the thing is: we do RPM, and maybe we do RPM well, but we have actually no clue about all those languages that are out there. And we can't know them all, and we won't, and even if we could, we wouldn't. So people need to step up and say: this is what we need. And we are happy to help. It's really not that complicated: it's a small text file plus an executable that does whatever you want or need, and you just write the dependencies out. You can also have dependency generators for the weak dependencies now, so if you need those, that works too. Any questions on this? Yes? The question was whether we are in discussion with the Go SIG about Go packaging. Not really, currently, but please, please talk to us. As I said, it's not complicated, and we are happy to help. I know the Rust people have done something; there's probably something brewing in the background. But as it's not that complicated, people don't even have to ask us; it's basically just one executable that you put somewhere. But if there are questions and people need help, please, please talk to us.

Another bigger thing we added, especially for those new languages like Rust and Go, is dynamic build dependencies, which just got added and will come with 4.15, like, this summer. The issue here is that those languages basically come pre-packaged: they already know what build requirements they have, what runtime requirements they have, and everything else. And of course you can convert such a package description into a spec file, but then what? If you do that all the time, you can't really add patches, or you have to patch your patches into the spec file you just generated, or stuff like this. It gets ugly pretty quickly. So people wanted to be able to extract those build requirements during the build, and that's what we implemented. My gut feeling is that if this takes off, we'll probably improve it by one or two more steps; this is basically a first iteration. You get another section in the spec file, which is a script where you can do stuff and print the build requirements to standard output. So it's very similar to the normal dependency generators, but it lives within the spec file.

The reason this is tricky is that it changes a lot about how packages are built. Until now the assumption was: if you have a package and the package is not broken, you build it and the build will succeed. Now we are executing those scripts first, and only then do we decide what we actually need to build the package. So the build basically stops, returns, and produces a source package that has the build dependencies in it, so you can install those. It's a bit complicated to get this into the build systems, but we've patched Mock and it works now, I hope. At least the Mock maintainer agrees.
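The new section is called %generate_buildrequires; everything it writes to standard output is taken as an extra build requirement. A minimal sketch (the helper script and the dependency are made up, the idea is just to let the upstream tooling speak for itself):

    %generate_buildrequires
    # hypothetical: ask the project's own tooling for its build dependencies
    ./list-build-deps.sh
    # or simply print them
    echo "pkgconfig(libfancy) >= 2.0"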
What you still need in the spec file is something to bootstrap with: you still need the tools to compute the build requires, so you have to put those in by hand, because otherwise the interpreter you want to run to do the calculation won't even be there. I think right now it only does one step... ah, it's fixed already. So I repeat for those who couldn't hear: we actually fixed Mock, so it now does multiple passes as long as new dependencies keep getting added. So you can just print your interpreter as the first thing, Mock installs it if it's not there, and then the build basically pulls itself out of the swamp by its own hair and collects more and more tools and dependencies. So that's something you can do now, and so far it has not broken the build system, for some reason. I guess in the long run we might even do something like this fully automatically, so you don't even need to put anything in the spec file. In theory, since %prep is executed first and you already have the expanded source code, you could run scripts in the background that detect this stuff automatically. But people are kind of nervous about this kind of thing, because if things go wrong, build systems break, or the build takes too long. So that's nothing we have planned yet; that's more of a long-term perspective. I mean, this is one of the features that only came into being last year, so it's still very early for those five-year development arcs we have in RPM for many things. So that's where I stand with that right now.

There are a couple more changes I only want to mention briefly. There was a rewrite of the debuginfo packaging, done by Mark Wielaard, which is something that you as packagers don't really have to care about, but we now have split-up debuginfo packages. So if you do debugging, you might run into this: the layout has changed, there are some new links, and things moved around. We now have support for reproducible builds; reproducible builds are a moving target for many packages, but there are now features in RPM so you can actually pin the build time and stuff like this, so at least the most basic elements of the package don't change from build to build. We switched to GPG2. And there's a huge feature that I think came from IBM, if I remember correctly, which I don't know if anyone uses: it's basically a feature where you put signatures for all the files on disk, so you can verify that the files you're executing are actually unchanged. So it's one of those NSA things.

A feature which is kind of there, but not yet really enabled properly, and which still makes us kind of nervous, is minimized writes. It basically tries to not write files to disk if they are unchanged, which is interesting for SSDs, where you don't want to wear them out more than you actually need to. It's currently not much of a speed gain, because you still need to unpack the package payload: it's one compressed stream of data, so you still need to seek through it. But at least your SSD lives longer. We are currently working on making this automatic, so that RPM is smart enough to detect SSDs; we are not quite confident enough to switch it on for normal disks right now.
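For the reproducible-builds part, the knobs I believe this refers to are build-configuration macros along these lines (the names are from memory, so treat them as an assumption and check the macros shipped with your rpm version):

    # derive SOURCE_DATE_EPOCH from the latest %changelog entry
    %source_date_epoch_from_changelog 1
    # clamp file mtimes in the payload to that date
    %clamp_mtime_to_source_date_epoch 1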
Another small feature that you might not have heard about is the RemovePathPostfixes tag, which I added somewhere in between. It's just a few lines of patch, but what it addresses is this: right now you cannot have conflicting files within a build. So if you want two sub-packages with different config files, they can't have the same name, because there's only one build root to pick and choose the files from, which is kind of annoying. With this you can basically say: these files have this postfix, and it just gets cut off when the file is packaged. So that might be handy. Not that you should do such things on a regular basis, but I know people who might need it.

So, what's next? We will continue with a lot of small improvements; the development arcs will continue into the future. When it comes to bigger things: okay, yeah, I mean, we've been standing here for a while promising new databases, but really, this year, I promise, things will happen. There are two candidates right now. There is NDB, implemented by Michael Schroeder, which is basically similar to the solv files that libsolv uses. It's very refined C code, and there are people who have tested it a lot and are very happy with it. So it looks like it's actually working; it's much more stable than BerkeleyDB, which has a lot of issues, and it was just declared stable, whatever that means. It's in upstream already and was in the previous release as an experimental database backend. Panu is currently working on a backend based on SQLite. Some people who already used RPM back in 1999 may remember that such a thing already existed a couple of years ago. The problem back then was that it mirrored the BerkeleyDB structure very, very closely, so there were multiple SQLite databases that we did lookups in. And the problem with databases is that writing data or reading data is cheap; syncing them to disk is fucking expensive. And if you have multiple databases, you wait on each of them separately, so things take a while. And so it got ripped out of the code base, I don't even remember, five-ish, seven-ish years ago. But it turns out that if you use SQLite properly, it actually is performance-wise very similar to BerkeleyDB, so it's good enough for us. And using a database that actually comes with tools, so you may be able to recover if something goes wrong, makes it worth pursuing this even if we have another option. We will see how this ends up, performance-wise and otherwise, but we'll probably have both backends available. There also has been an LMDB backend for a while, but the problem is that LMDB has some limitations we really don't like; upstream first promised they would go away, then they changed their mind, and so it just sits there. And I also hear it's not as good when it comes to reliability as the other options, so that's probably going to go away at some point.

Another thing that already started with the last release is multi-threaded building. Our kernel people are very, very unhappy with the way RPM behaves. They have all those nice PowerPC machines with a lot of processors, but the single processor is not that fast. They run make for the kernel and it spreads out over all the processors, and after the build is done, RPM starts to build the package and compresses the data on one single core, and then they come back to us.
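A minimal sketch of how the RemovePathPostfixes tag mentioned above can be used (package and file names are made up): two sub-packages install the same config path by giving the files different suffixes in the build root and stripping the suffix at packaging time.

    %package client
    Summary: Client configuration for foo
    RemovePathPostfixes: .client

    # the build installs /etc/foo.conf.client; it ends up in the binary
    # package as /etc/foo.conf, so both sub-packages can ship that path
    %files client
    %config(noreplace) /etc/foo.conf.client

    %package server
    Summary: Server configuration for foo
    RemovePathPostfixes: .server

    %files server
    %config(noreplace) /etc/foo.conf.server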
And so it takes a long time. There are also a couple of scripts within the build that process files and stuff like that, and they were all single-threaded. So we are now starting to make those parallel, either by actually spawning multiple processes or by using threading within the RPM binary, and that's going to continue. Yeah, it's its own can of worms, with all kinds of things that can go wrong and that do go wrong. It turns out it doesn't help to have a lot of processors if you only have a 32-bit address space and cannot use all the memory you have in your box, and stuff like this.

Another thing I think we will push forward is making more parts of the build process automated and centralizing more of the information. That's something that is, in large part, not going on in RPM upstream but in the distributions: a lot of people have been very, very busy writing macros and scripts and whatever for their domain, and I think that is in large part the future of RPM packaging, to have an intermediate layer that takes care of the special needs of all those subdomains, like different languages or different kinds of packages. So the call to everyone is to actually think about what needs to be better there, to step out of the role of being a single packager looking at a single package, to look at the larger picture of what can be done, and to ask whether you really need to copy and paste lines of spec file around all the time. This is something where we already made a lot of progress over the last couple of years, but it's ongoing. And that's what I have. Questions? Yes.

Not really. The question was whether dnf builddep will still work with this, and I guess not; it will only work for the static ones. The thing is, it's a more complicated question. You are able to build source packages which include those generated build dependencies, and if you have one of those, you will get them. But if you are just looking at a spec file without building it, you won't. That's in the nature of it being dynamic: you actually have to execute the script at some point to get the dependencies. It breaks a lot of assumptions, I know. I actually didn't want to do it at first, but there are people that are persistent. And it's kind of annoying to do work like this manually if we don't have to; I think the future is to do less stuff with an editor. So what happens is the following: if you have just a spec file, there are multiple commands now. You can build a source package that is just packaged up as it is, which means you're not extracting the sources, and it will be marked in the source package that it has dynamic build dependencies, but they are not in there yet. Or you can build a source package that actually does the calculation, but then you have to have the build requires installed so you can unpack the sources and run the script, and then you get a source package that actually has the build requires in it. And if you just build a package and the build requires are not there, the build will terminate and create a source package with the build requires that are needed, so you can use that with DNF to install them. Or you use Mock, which does it all automatically for you. It's a bit of a complicated process; I would have loved it to be a simpler process, but it just can't be.
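A sketch of that workflow outside of Mock (the option name and the file suffix are from memory, so check rpmbuild(8) for your version; the package name is made up):

    # stop after %generate_buildrequires and emit a source package
    # that carries the generated BuildRequires
    rpmbuild -br mypackage.spec
    # install those build requirements, then do the real build
    dnf builddep mypackage-1.0-1.buildreqs.nosrc.rpm
    rpmbuild -ba mypackage.spec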
Actually, I think it's not that bad with respect to reproducible builds, because we already have this problem anyway: if you swap out the packages underneath, you don't get a reproducible build anyway. To have a reproducible build, you have to have a reproducible environment anyway, and then the script should return the same results. So yes, it makes things more complicated, but it does not necessarily make them more complicated for reproducible builds, because you need to nail everything down anyway. Any other questions? Yes.

So the question was: what about replacing directories with symlinks, which is a pain. And I understand it's a pain, and the reason it's a pain is that it's not a problem that can really be solved. The thing is, a symlink can't have files in it, and a directory can. So if you replace the directory with a symlink, the files that are in the directory right now need to go somewhere, and there are all kinds of issues with that. I still have the feeling there are one or two cases that might be possible to handle better, but a couple of people have already looked at that, and it's really hard. So right now what we say is: well, there's this script, you do that as a pre-transaction scriptlet, it moves the files around, and it's your responsibility to know which file you're moving where and why. It's a cop-out, and I'm not that proud of it, but just moving files around without knowing where they end up is also a problem. It's a problem, basically, that we inherit from the way the file system works, and I'm sorry; that's the best answer I have for this. Yes?

So, I've looked into the changelog question, and the thing is, it turns out the problem there is not so much RPM but the tooling around it. The problem is not taking the changelog out of the spec file; the problem is where to put it instead. You need to put it somewhere else, and that's basically a build-system, dist-git, whatever question. So if there's something RPM can do to make it easier, or to help, we are happy to do that. But as far as I understand, there's nothing RPM can do on its own to make that happen. It's something for the distribution to step up and say: we want the changelog to work this way. And if there's a way we can help to get it into the spec file more easily than whatever other options you have, we are happy to help.

So there was a comment on this; I'll repeat it. The question here was: why don't we just pull the sources of the software we package directly, like from GitHub or from a source code management system? It would make a lot of problems go away, because you don't have to add patches as separate files, you can just use the hash as a safe way to verify where the source code comes from and what's in it, and you would also get a changelog out of it. And the answer is: yes, you get some of those things, but you don't get the things we need packaging for. The reason we need patches in RPM is that we want to diverge from upstream. So if you're upstream, you can basically have a branch that says, well, this is Fedora 15, or this is RHEL 6, and that works; yeah, if we would clone those repositories and build from them, that would probably work, it can probably be done. But the thing is, RPM really doesn't care where you get your sources from. What makes this whole discussion so complicated is that there are many moving parts, and RPM is only one of them.
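For the directory-to-symlink case, the pre-transaction script he refers to is typically a Lua %pretrans along these lines (the path is made up, and this is the common packaging pattern rather than a dedicated RPM feature):

    %pretrans -p <lua>
    -- if /usr/share/foo is still a real directory, move it aside so the
    -- package can own it as a symlink afterwards; dealing with the
    -- .rpmmoved copy is the packager's responsibility
    path = "/usr/share/foo"
    st = posix.stat(path)
    if st and st.type == "directory" then
      os.rename(path, path .. ".rpmmoved")
    end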
Most of the interesting stuff actually happens at the distribution level, and the distribution has to make those decisions. And often there's very little we can do in terms of features within RPM that even helps there. But if there is stuff we can do, we will happily do it. When it comes to the changelog: the changelogs in the packages are actually very, very different from the changelog of upstream, because we are interested in the changes that we actually make, and the upstream changelog is typically much, much too big and much too fine-grained for the things we want in packages. But that's a complicated argument to make, because who wants what, why, where. And RPM as a tool is kind of agnostic to this discussion: you can put whatever you want in your changelog as far as we are concerned. It's a discussion that will probably still take a while within Fedora to actually come to a solution, and I really hope we get there at some point.

So, there was first a comment that indeed the spec file and the changelog need to be somewhat configurable, because different people have different opinions. Then there was a question about templating the spec files and whether there's any news. There's not really much news about it. My thinking goes more and more to the point that we probably need to push those scripts and macros that we are currently seeing for things like Ruby and Go further out, because that's basically the gateway to get this done. And at some point RPM can probably help with a couple of things to make this even easier than the macros and scripts we have right now. But in the end, I guess most of the work needs to be done in those scripts that concern themselves with those specific types of packages. I'm sure RPM can help there, but it can't really do it by itself; it's something that needs to be done in those separate pockets. And I guess we are getting there: when you look at the last year or two, quite some progress was made, not within RPM but in the distribution, where you now have new dependency generators and new macros to actually do the builds for those languages. We have the dynamic build dependencies, which do a lot of the work for the packages from the Rust, I forgot what they are called, they have a name. So there's progress there, but it's actually not so much a feature in RPM; it's basically a feature in the distribution. Thank you very much.