All right, good morning, or good afternoon for me at least. I was asked to give an overview of the Gentoo Prefix project that I've been working on for quite some time. It overlaps a bit with the previous talk, but more from Gentoo's perspective, Gentoo being a Linux distribution. So, something about me. Like I said, I've been involved with Gentoo for quite some time; these numbers actually tell me I'm getting old. In a very brief nutshell, I have some origins in research in the Netherlands at CWI, the Centrum Wiskunde & Informatica, and I had a need there, of course, to get software compiling on the systems they had. I also had a personal need: I really like to use Vim and Mutt, which are very much console-style tools, but Mutt, an email client, is very often not even installed, and Vim is usually out of date. That is partly why this is so interesting to me: it has been professional in the past, and it has always been personal. Just a small note: my current employer has nothing to do with Gentoo Prefix or anything that I do for Gentoo Prefix. They think it's fine, but they are in no way involved, which I just wanted to make sure everybody understands. This is purely Gentoo and me as a volunteer Gentoo developer. So, why did I start on this? In the past, and these days it's getting much better, but in the past, if you were working on Solaris 9 or Solaris 10, or on macOS in the early 10.x days, you would find that the environment you were working on was pretty much out of date. And if you were working on a cluster set up by whatever IT department, running, in those times, a Red Hat Linux version that was at least a couple of years old, there was also no way of changing this, right? Upgrading your Solaris install is just not a possibility.
So you're basically stuck with whatever is there, either for political reasons or just because the hardware is old, but you do want to use more recent, more decent development tools. I've always been very interested in getting yourself development tools: compilers, but also utilities like sed and so on. Most of these things you can just compile, even on outdated operating systems or hardware; even today, on very old configurations, you can still build a recent GCC, for instance. That was interesting because you sort of prolonged the life of whatever hardware you had available. One of the prime examples was the first Mac I ever bought, a PowerBook, something extremely old, with something like a 500 MHz CPU. We were running Jaguar, and someone at that time actually made a Gentoo logo using the jaguar skin, which is here on the slide. What we did with Gentoo for Mac OS X back then was install software right into the host system: we just put it in /usr/bin next to the other tools. That was cool for everything that was missing, but of course extremely annoying when there was already something there, like Vim, that was outdated: you could not replace it. Well, you could, but then you would damage your system somehow. For Gentoo as a whole, Gentoo for Mac OS was also not so nice, because all kinds of exceptions had to be made for the Mac OS port, and people don't like that. In Gentoo there are shell scripts that define how you build software, and you end up adding a lot of if-cases, so that wasn't really working very well.
So very quickly we came to this project, Gentoo Prefix, which basically was the decision to say: you know what, instead of trying to install into the root file system, we use a dedicated directory somewhere, in an offset as we call it. That directory can be in your home directory, in /var/tmp, on some scratch space, or on NFS, anywhere you want it to be. Because it isn't rooted anywhere in particular, we don't need anything like root access either. So you can even do this on the cluster of your university, or on the desktop provided by your employer where you don't have root access. And because you install everything into that clean slate, you can replace anything you want, as far as it is possible. You can provide build tools, you can provide compilers, you can actually provide a complete toolchain including a linker, which means you can work around a lot of problems. Early editions of Mac OS X, for instance, had a compiler that was full of bugs, and just providing a fixed compiler made life considerably better when installing software that did not want to take a broken or weird compiler into account. But this was not limited at all to Mac OS; we simply put everything into a different location. That is basically the approach you see more people taking, with Homebrew and MacPorts: they all use some sort of offset. But what I think makes Gentoo Prefix different, considerably different, from most of the other approaches is that Gentoo Prefix is very much focused on providing its own toolchain. A toolchain means a compiler and a linker and everything around them. It is not so hard, relatively speaking, to build a compiler on certain platforms; it is actually very tricky to build something like a linker. But that's not even the point.
The thing is that if you build just a compiler, it doesn't mean you have something that really makes sure you're working in some offset location, shadowing the tools that are available on the system. So what do we do? And we have done this from day one, because it is really important to us: we configure the compiler and the linker in a very special way. We support a couple of compilers and try to do exactly the same for all of them. We definitely support GNU binutils, which is used on the ELF platforms, and we also work with Apple's cctools, which provide the linker and assembler, and we patch them in the same way, or we use wrapper packages. To give you an indication of why we want this: I do a lot of C programming, so suppose we are compiling a C file into an executable, myapp, and it uses a library, say libyaml. You compile it, and you need to link against libyaml, after installing libyaml with a native package manager or whatever. This is the normal way of doing these things, right? You install the dependency, you tell the compiler it can use this library, and you get your application. This can work, because everything is right. It can fail because the compiler cannot find the required include file: either you did not install it, or it was installed in a place where the compiler doesn't know to look. It can also fail because the linker is looking for the YAML library and cannot find it, for the same reasons: it was installed in a different place, or not installed at all. And the worst case, which is usually also the nastiest one to resolve...
...is that it fails to compile because it sees something different, which usually happens when you have different versions of the software around, and it picks up the wrong one and doesn't compile anymore. What you can do, of course, is manually install your library, let's call it myyaml, and then explicitly tell your compiler: here you can find the headers for myyaml, here you can find the library, and then you link against it, and that works fine, because the compiler can find everything, and you explicitly told it where everything is. Yet if you run the executable, suddenly you get a trap, because it cannot find the shared library it needs to execute: the library is not in a standard location, so the runtime linker doesn't know where to find it. If you Google for that, you'll find you can easily solve it by setting LD_LIBRARY_PATH, and that works until things get a bit complex, because then suddenly it starts to break things. The real solution is to use what are called runpath, or rpath, instructions that you can put into ELF binaries and libraries. You could also add the path where you installed the libyaml library to your /etc/ld.so.conf on Linux, but then you need root access. So you can solve this, but it's all manual. And that is exactly what the Prefix toolchain tries to avoid. It modifies the compiler to, by default, look inside this prefix, the specialized offset directory we created, for the includes, such that everything we install in the prefix is automatically seen, just as if it were installed in the main system. It tells the linker, in whatever way is necessary, to also look inside this offset, so it actually finds the libraries we installed there.
And in addition to that, it also adds these rpath instructions, such that the binary you build can run without any help, without any changes or environment settings or anything like that. This is all transparent: the software you're building doesn't see this happening. Transparent means you don't have to fix programs or other software you want to install, because it just works this way, and in most cases this is indeed enough. I should indicate that this is not as simple as it sounds if you want to support multiple targets. On macOS, for instance, recent versions ship an SDK; there is no longer a /usr on the system with lib or include. So we tell the compiler and the linker to look inside this SDK path, which, depending on how Xcode was installed, can be a long path. We do something special there to make sure it is found, and that you, as a Gentoo Prefix user, don't have to bother about it. There are also real differences in how linkers respond. The Solaris native linker, for instance, responds quite differently from the GNU linker, yet most C projects, open source projects in particular, tend to assume GNU ld. So we add flags, like search_paths_first, to basically emulate the same behavior as GNU ld, which also just makes it much easier to port programs. In the macOS Mach-O file format you don't have this problem: recent editions actually do have rpath, but they record full paths, so they don't have the problem we have with ELF. That's nice, but it also means that in that case the toolchain isn't doing anything special there.
So basically the toolchain contains all kinds of alternatives that it knows for the current platform, such that from a user's point of view, as a Prefix user, it works the same on a Linux host as it does on, for instance, a macOS host. So how would we do the example I had before? In Gentoo there is a tool called emerge. Emerge is basically the tool that, like apt-get, you just tell: I want this package, or remove this package. So in our case we just emerge libyaml or whatever it is. It will do its thing: it will compile libyaml from source, it will install it, all set up for the particular offset directory of the Prefix. And then I can just run this simple GCC compile command, compile my application that uses YAML, and the resulting application will run, because it is, so to speak, fixed to the right libyaml, the one from the Prefix location. Even if there were one in the host system, it would ignore it and not look at it. That is pretty transparent and pretty nice, but we need to do some tricks for that, and that's why we always have a toolchain: we always provide a compiler and something that does the linking. Without it, the whole thing falls apart, of course, because then the compiler cannot find headers anymore, or the linker cannot find libraries anymore, or for whatever magical reason it does build but then fails to run. So, like I said, most packages that don't do anything special, like detecting compilers or figuring out that on Solaris there is a /usr/sfw path, without all that kind of trickery, most things just work. That is pretty good, because it means that for free you get everything respecting your offset prefix and grabbing all the libraries from there.
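The day-to-day workflow described above then looks roughly like this inside a prefix. This is an illustrative CLI transcript, not something runnable outside a bootstrapped prefix; `dev-libs/libyaml` is the actual Gentoo package name for libyaml, while `myapp.c` is the hypothetical application from the earlier example:

```shell
# Inside a Gentoo Prefix, emerge builds the package from source and
# installs it into the offset directory.
emerge dev-libs/libyaml

# A plain compile then just works: the prefix-aware gcc and linker
# already search the prefix's include and lib directories and record
# the run path, so no -I/-L/-Wl,-rpath juggling is needed.
gcc -o myapp myapp.c -lyaml
./myapp
```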
Of course there are these things that try to be smart, such as CMake. CMake is an all-time favorite, or horrific thing: what was wrong with Autoconf? It was too slow, okay, so you create something new, and then of course you create something that tries to be smart. So we need to patch most of that smartness out, such that it just respects: look, there is an offset, and in this offset you find all the tools, sufficiently correct and up to date, carrying the patches that you need, so please don't try to invent something yourself. And then there is porting. Many people develop while looking only at Linux, and other systems are not totally the same; Linux is not always POSIX compatible. If you try to port something to macOS or Solaris, you will find differences, and sometimes you need patching. That is ordinary porting. With the popularity of macOS, macOS porting is rarely necessary anymore these days, right? People already have the patches in there. Solaris is also much easier these days: as long as you stick to POSIX it's fine, though it's not so interesting for people as a platform anymore. The amount of porting you really need to do for platforms, realistically speaking, these days is almost none. The times when we had to fix something because a CPU had trouble with unaligned accesses or something like that, that's really over; you rarely see issues like that these days. So basically, Gentoo Prefix lives on top of another operating system, on top of another distribution. We call that the host. The first perspective we have on the host is that we accept that it has a kernel and a C library, libc, right? And the rest we try to provide ourselves where possible.
The C library for us also includes things like the math library, libm, and librt, the real-time library; those kinds of things are part of the system. We cannot replace those very easily, because we don't always have the sources available, or we just think it is incredibly complex to get working, and we don't really need to. Especially Solaris and macOS have, across their releases, extremely stable APIs for the kernel and the C library, unlike Linux. On Linux there is no single version that corresponds to something like CentOS 7 or CentOS 8; those are all different distributions, so it is harder to manage. That's why we have a special view on Linux hosts: in the end it turns out that we cannot replace the kernel, because we don't virtualize and we don't use root, but we can provide our own C library, because glibc is all open, and Gentoo knows very well how to build it, of course. That means that on Linux hosts we can go one step further and even provide our own C library, which makes porting applications, especially on older hosts, much, much simpler. It comes with a downside, of course: you are no longer binary compatible with the host system. But if that's not your goal, it allows you to install and run the software on an old system, just for you. This setup has a name: when we use our own glibc on a Linux host, it's called RAP, which stands for Rpath Android Prefix. It actually is that, in origin, but well, it isn't only that anymore, and it works well on very ancient systems. The good thing is that you don't need rpaths in a RAP prefix, because the glibc comes with its own ld.so, the runtime linker: basically, for every binary we create, we set its ELF interpreter to our own runtime linker, and that runtime linker knows where to find our stuff.
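The interpreter trick can be made visible with standard ELF tools. This sketch is illustrative only: `$EPREFIX` and the exact ld.so file name are placeholders that vary per prefix location and architecture, and RAP wires this into the toolchain itself rather than requiring the flag by hand:

```shell
# Link a binary against the prefix's own runtime linker instead of the
# host's /lib64/ld-linux-*.so.* (path is hypothetical):
gcc -Wl,--dynamic-linker="$EPREFIX/lib64/ld-linux-x86-64.so.2" -o hello hello.c

# The recorded program interpreter can be inspected afterwards:
readelf -l hello | grep -i interpreter
```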
That runtime linker reads its own ld.so.conf, which is a very nice trick, because it hides the details of what you're doing even more. And we can even use more native, streamlined versions of things, for instance, because we don't have to patch as much. So that's really nice: it's a bit more native, and it allows you to do more, just because you can upgrade the libc underneath the Linux host. Now, a little bit about what we support today. Let's start with the sad news: in the past we supported things like AIX, HP-UX, IRIX; there even was an Atari port. Those were all really cool, compact projects, but the reality is that no one has access to these things anymore. And even if you do, there is absolutely no use for them anymore: they are all so slow, it is impossible to work with them, and it costs a lot of resources to maintain them. Especially a target like AIX is always different, always problematic. So we don't do those anymore. What we actually do these days is Linux, macOS, and Solaris; they are fairly well supported. We run on a couple of versions of macOS, and just because I find it very fun, I even keep an old G5 machine in my house that runs PowerPC Mac OS X, which is 10.5, and that still works fine: we compile GCC 10.2 on that thing, we compile our own linker, and then we have an environment that is almost identical to what we have on our latest macOS system. And that does really fine. Linux support has the big advantage that it gets all of the support from Gentoo mainline. Gentoo itself runs on a plethora of architectures: ARM, PPC, PPC64LE, RISC-V, whatever you can imagine, they do it. And in principle, in Prefix, this is all inherited.
If you have a running Linux system on, let's say, PowerPC, as I know some of you do, then if you need any patches at all for that, I would say you get them straight from Gentoo mainline, because we are just about providing the base, the toolchain, and for the rest we really rely on Gentoo mainline. Most of the packages come from the vast base of Gentoo developers who keep them up to date and make sure they actually behave, work, and install exactly as you want. Now, here is something I dug out of a very old box. It basically shows six years in which we tracked the number of packages we supported, and on the right, very small, is the legend, which lists the platforms I had available at that point, which even includes a couple of Windows platforms. The MiNT one, by the way, is the Atari one. The fun thing about this chart is that if you look carefully, you see that on the left, when it all started, PPC Mac OS was the top leader, and towards the end it still is. That is of course because that was my Mac. The Linux ones that jump in somewhere around 2008, that is because we joined with the main Gentoo tree, and of course we inherited all the Linux support, which originally was all x86, and then at some point amd64 took over. Anyway, that's just a bit about the way we grew. Nowadays we try to stay as close as possible to Gentoo mainline. The few things that we need to change and could not bring back yet, only those we keep separate. We only try to support what we think we can, so we dropped all the platforms that were fun to have but that we obviously cannot support anymore.
And we have some semi-automated bootstrap results, which means we just test whether or not the whole process of bootstrapping works. Bootstrapping means you start downloading source packages and compiling them, and gradually you build up more and more tools, until you have the Gentoo package manager, this emerge thing, which is called Portage. Once you have Portage, we continue using Portage to install packages one after another in a very careful order, because there are dependencies, of course, and we try to untangle these dependencies, all the way up to the point where we can tell the package manager: why don't you reinstall everything and bring yourself up to what you consider the system state. At the end of that, everything is what we call self-provided: everything uses the tools provided by the prefix itself, down to the minimum requirements from the host, of course. This is a process that takes a couple of hours on the fastest machines; it really is within a couple of hours, but it compiles a lot. It compiles compilers, which takes a fair amount of resources. To give you an idea, on the old PowerPC Mac it takes something like eight days to complete the whole process. You don't have to do this very often; in principle you only have to do it once. It is a long process on old hardware; on modern hardware it is not that long, but it takes a fair amount of computation, compiling all the sources, and then everything is ready and set for your environment, so to speak. So, Gentoo Prefix is part of Gentoo proper; it is an official sub-project. We still have some separate stuff, because we need to override certain packages and we don't know yet how we want to do that for real. That sounds more severe than it is.
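For reference, the bootstrap process described above is driven by a single script from the Gentoo Prefix project, `bootstrap-prefix.sh`. The invocation below is a sketch based on the project's documentation; exact stage names and behavior may vary by version:

```shell
# Interactive mode asks where the prefix should live and then walks
# through all stages (bootstrap tools, toolchain, Portage, world rebuild):
./bootstrap-prefix.sh

# Stages can also be run one by one against a chosen offset directory:
./bootstrap-prefix.sh "$HOME/gentoo" stage1
```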
Take the compiler changes that I mentioned, and the linker changes: those just need careful testing, careful documentation, and careful arguing, because the main Gentoo maintainers are not too happy about just accepting something like that if it isn't tested. We have tested this for quite some time, but of course we also need to test the effects on normal Gentoo systems, to make sure we really don't break anything. Those are the things we currently keep aside. This is not an issue for the RAP targets, the more native targets, because they don't need most of these integrations. Long story short, there have been specification tracks, the EAPIs, with enhancement requests and documentation, and basically they define that these build scripts, which we call ebuilds and which are shell scripts, have some variables available to them that say: this is the offset location. So should you need it, should you need to say "here is where you need to look", then those variables are there, all defined, and that has been in place for a couple of years now. So I would say that is well accepted, and we are slowly integrating the remaining work into the main Gentoo repository. We have a website with information about how we do it and what we do, and about the platforms we support, which is nice for some reading. We use the Gentoo bug tracker; if you find issues, the best way is to report them there. I will not promise you get instant responses, but I always try to at least figure out whether there is something simple to do, or whether we need more help, because obviously we see a lot of weird scenarios, for example when a system administrator has configured a system in a very weird way.
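As a concrete example of those offset variables: since EAPI 3, ebuilds can refer to the offset via `${EPREFIX}` (with offset-aware counterparts `${ED}` and `${EROOT}` of the classic `${D}` and `${ROOT}`). The fragment below is hypothetical; the `--with-yaml` option is made up for illustration, while `econf` and the variables are real ebuild constructs:

```shell
# Hypothetical ebuild fragment: point a configure script at the offset
# so the package looks for its dependencies inside the prefix.
src_configure() {
    econf --with-yaml="${EPREFIX}/usr"
}
```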
Things like Perl or Python try to figure out a lot of things themselves, and they might just get confused; that is a situation we may simply not know about and cannot easily reproduce. But in general, things like bootstrapping on CentOS or Debian or Ubuntu or Arch Linux all go fine without too many issues. Anyway, if you have an issue and you want support, you can find us on IRC or file it in the bug tracker. And that's basically all I wanted to say about the history of the project, where we are today, and what you can do with it. So thank you.

Thank you for that talk, Fabian. If there are any questions and you're in Zoom, raise your hand; Kenneth first. Let me enable my camera again. I have a couple of questions actually, so if somebody else has questions, feel free to interrupt me so other people get the chance as well. One thing I was wondering: you mentioned CMake as being, how did you call it again, horrific? I agree with you. It does a lot of things under the hood; it tries to be very smart, and if you're not in whatever environment they expect you to run in, then you're in trouble, which is of course the case in Gentoo Prefix. I've seen the patching you do to CMake; I've looked into it myself as well, to explicitly tell it not to look in /usr/lib; it has all this hard-coded stuff in there. Have you tried, or have you considered, contributing some of those patches back to CMake, so it has an easy option that says: whatever you think is sensible by default, don't do it, only look here and nowhere else?

The problem is that CMake is not alone in this. Python is another extreme example, and the Python developers basically say: screw you, go away, we don't care. We only support the vanilla machine; that's what we build for. And if you do something else, then we don't support that, we don't want to support that, we don't want to think about it.
And from their point of view, honestly, that sort of makes sense to me. I honestly think Python tries to do too much; they should rely more on distributors, but okay, they don't, right? And CMake does the same thing, and I think that's also because there is a commercial company behind it. It's the same with Boost. They have an approach that is just: we want to compile it on those systems and then use it to do whatever we do. So they are reluctant to take our patches, because they feel it complicates stuff: "we don't care about it, you shouldn't be doing this". So the only solution, and this is why Gentoo is a meta-distribution, is to patch the sources to our liking, to make sure they are usable in our environment. And the great thing, mentioning Python and CMake: I think there isn't a single distribution out there that does not apply patches to these kinds of packages, and they all do the same. So these vendors know very well that they do not work in any realistic scenario, but they don't care, right? Because it's not their intended audience, and everyone patches it anyway. So the problem is solved, right?

Well, the problem is moved to the maintainers of the distributions. We try to get everything back to the maintainers of the projects, which we call upstream, for most of it. There are many, many upstreams, and there are two types: some say "oh, thank you, that's great, wow, no problem", and others say "yeah, but I don't like this platform and I don't care", or "can you sign this thing?", or "oh well, I think this is difficult, maybe next week". So yeah, what can you do?

Yeah, okay. I see Bart has a question as well, but I have one that I think is important.
You mentioned the bootstrapping procedure for Prefix, where you build the toolchain carefully, basically from scratch, up until you get a working Portage, and then you can emerge everything you want or need. What we're doing, and I don't want to spoil Bob's talk, who's coming up next, in the EESSI project is bootstrapping in, for example, a CentOS 7 virtual machine, or a container, it doesn't really matter, and then we're using that bootstrapped prefix on other operating systems that are similar: on Debian, for example, or other Linux distributions. But we do stay on the same architecture: if we bootstrap on x86, of course we stay on x86. Is that something sensible to do, or do you say: you'll have to be careful, because things may be picked up from CentOS that do not work on Debian?

No, I think what you're doing is interesting, and especially when you provide the libc this will actually work very well, because the only interface you have is with the kernel. So kudos for that. Somewhat related: Gentoo can also do binary packages. What we did in the past is we actually allowed you to build a binary in one prefix and then install it in a different prefix on a different system, and it would adjust the prefix path. We stopped doing that because it has a lot of issues. As long as you keep the prefix path the same, you're good; of course you should also use compiler flags that target a generic CPU. If you try to change the prefix path, you run into all kinds of very complicated issues, because the paths are very often hard-coded everywhere into the programs. So we don't do that anymore. We had sophisticated software solutions to try to relocate binaries, but we went away from it.
It is not reliable enough; you'd better just bootstrap for the different location.

And you'd say: make sure you build generic, but I think that's what Prefix does by default, right?

That depends on the profile. I think the default flags are just -O2 or something, which should still be generic. But, for instance, in the Mac profiles we use -march=native. If that were the default on the Linux profiles too, then you would be building for the processor the compiler sees, and you don't want that. You definitely want -O2 at most, and that's it.

Yeah, it's not using -march=native on Linux; I'm pretty sure we would have seen that already. But it is doing that on macOS?

Yes. Coming back to macOS: Apple controls the ABI so thoroughly there that we could just do that, because if you have, say, a Core 2 Duo, the next CPU would still be able to support those instructions; it would be a superset, basically.

Okay, but then you have to bootstrap on the oldest hardware.

Well, the thing is that Mac OS X, 10.point-whatever, comes with an SDK that targets that macOS version as a minimum, and the compiler knows which CPU that version requires as a minimum, so it will not go beyond that. Apple did a great job on many things, and a not-so-great job on many other things, but some of these things are actually very nice. That's beside the point; you just run into it when you do something like this.

Okay. Yeah, I'll pass the mic to Bart, who has a question as well.

Hello, Bart speaking. My camera is not here, so you'll just have to hear me. Can you hear me? Yeah? Okay. First thing, just to say it's a great project, Prefix; we're using it at Compute Canada.
Right now it's opt-in on the clusters, but we're making it the default environment as of April. What happens is that people load it via environment modules, and it then overrides most things of the underlying CentOS distribution. It's all distributed via CVMFS, so it's read-only. While we've been doing these Gentoo Prefix installations, we've had to iron out various kinks, small issues that pop up, and we've also shared those with the EESSI project. One thing is that on CVMFS everything is read-only, and Gentoo Prefix puts everything under the prefix, including paths that start with /var: they end up under the prefix's var. And sometimes that doesn't work. We have tools like w, who, finger, whoami, all things that have to do with login accounting. They need to read files that are written by the host OS in /var, but they don't read the host's /var, they read the prefix's var, and those files are empty; there's nothing in there. So we had to basically put in some symbolic links to work around that, which seems a bit ugly. I'm just wondering if there has been some thought about that, or should I open a ticket to have some discussion about how to fix it properly?

Yeah, this is very interesting. The initial objective was to do as little as possible with the host system, because it is possibly crap, basically. What you're mentioning, tools like finger, which I think is a long-deprecated tool, but OK, those kinds of things, that is indeed where the question becomes whether Prefix is the right approach for what you want to do.

I hear that finger is old, but on a cluster, who is a command that people execute many times, because you want to know who is logged in on the cluster, and you just don't get the same information.

And I'm actually wondering, because I run who a lot myself.
My prefix provides who, and it just works for me, so I don't understand why who doesn't work for you from Prefix without modifications. But long story short, I think you have a very specific scenario, and making symlinks in that scenario is not a bad hack, so to speak. The thing is, if we can identify the underlying things that you need, we could see whether we could make that configurable somehow. On the other hand, you say you have a read-only root, so perhaps it needs a bit more thought. Anyway, it certainly might be interesting to actually have a look at this, if we can pinpoint what exactly you need. My gut feeling is that you do have a very specific use case, though, and we can always try and see if we can provide an alternative for exactly what you want.

I think it's very specific in the sense that it's probably a setup specific to HPC sites, but it's a problem that any HPC site will have. So it's specific to HPC, which is definitely niche, we're fully aware of that, but it's still a lot of people; it's not like a single use case. What happens in this case is that who will sort of work, but it will not give the same information: it will just give the list of usernames. But who normally shows some extra information, like where people logged in from, their IP or host. That information is stored in a file, /var/run/utmp or wtmp, one of those files. The host OS stores the information in there, and who then reads it from those files. So the only thing is, we need to either have a symbolic link, or teach Gentoo Prefix about this. And I think that path actually comes from glibc, so you would need to put in some flags when compiling glibc. So I'm just wondering, yeah, we need to have some discussion.
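The symlink workaround being discussed can be sketched in a few lines of shell. The prefix location and the exact accounting file paths are assumptions here (utmp under /var/run, wtmp under /var/log is the common layout); the idea is simply to make the prefix's copies point at the host's, so login-accounting tools built inside the prefix read real records.

```shell
# EPREFIX is the Gentoo Prefix root; the default here is illustrative.
EPREFIX="${EPREFIX:-$HOME/gentoo}"

# Ensure the prefix-side directories exist, then point the accounting
# files at the host's copies instead of the (empty) prefix ones.
mkdir -p "$EPREFIX/var/run" "$EPREFIX/var/log"
ln -sfn /var/run/utmp "$EPREFIX/var/run/utmp"
ln -sfn /var/log/wtmp "$EPREFIX/var/log/wtmp"
```

On a read-only CVMFS deployment these links would have to be created once at image-build time, which is why the discussion turns to whether baselayout or a glibc build flag could handle it instead.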
We're getting too technical for this talk, but what would be the right place to approach you guys? Is it a bug?

An interesting one, right? Because it works for me, without anything; it gives me exactly the same information back. It will be interesting to see why. And maybe baselayout can just provide this, because utmp is one that can probably be symlinked once during setup or something.

Yeah, that's what we do now.

By all means, start a discussion on this. I know that Benda is also in HPC land, so they might have some good ideas about this.

OK, thank you.

Any other questions from anyone else, Simon?

That seems to be all the questions I can see at the moment. I have a couple more, but maybe it's better to move to the breakout room, so we can... oh no, actually we have time, right?

Yes, we have a couple more minutes here if there are any. Well, it's nearly time, so let's make sure Bob can get set up; we'll wrap it up here and jump to the breakout room. I have a couple more questions for Fabian if he's up for it.