All right. Well, I'll be talking about the MultiXscale project today, and also about EESSI. So I will explain a little bit: what is EESSI? What is MultiXscale? How do they tie together? And thanks to Bart, because you already gave a very nice introduction about CernVM-FS and things like that, so I can probably go quickly through the slides that cover CernVM-FS.

My name is Caspar van Leeuwen. I work at SURF; I've been working at SURF for the past six years now. It's HPC work, some high-performance machine learning. And if you've seen my name floating around on GitHub somewhere — I already had a couple of people say, hey, I think I've seen your GitHub tag somewhere — that's correct. I'm also an EasyBuild maintainer and I've made some contributions to EasyBuild itself as well. So yeah, if you want to complain or talk to me about RPATH support or easystack files, those are the things that I mainly worked on, and of course some EESSI contributions.

Before I start: there was a talk yesterday covering how difficult it can be to maintain a local software stack, because scientific software doesn't really adhere to any versioning standards, or even explain how you should install it, right? People just take a script from somewhere. They find some GitHub repository in a publication and they think, oh, I can use the same thing, but it contains zero information on how to install it and how to get it running. There was actually an initiative in the Netherlands — not by us, but by a sister organization of SURF — to help scientists think about this in advance and to write what they call a software management plan. It's an interesting read if you want to stimulate your scientists to do a little bit better on the software development side and think about these things in advance; it could be nice to point them to this. Small disclaimer: I was on the sounding board for this guide, so I'm not completely independent. But one of the tables in it that was quite useful to me — sorry, it's a little bit small, I know — basically lists all of the things that you should think about. It also says that how deep you should go into your software management depends on what you think the expected impact of that software is. Is this a script that you're just developing for yourself? Is it going to be used by one or two colleagues, or is it going to be used by tens or hundreds of people, maybe also outside your department? Those require different levels of software management. And deployment documentation is one of the things pointed out to the scientists as an important aspect to think about.

All right, so on to the actual topic. I have a dream, and my dream is that scientists can run their computations on any compute infrastructure they want, with whatever software they need, on any data that they want. And of course, I'm an HPC person, so also making the most efficient use of that compute infrastructure. And I don't think I'm alone in that dream, because EESSI also has a dream, which is to provide a cross-platform, ready-to-use, optimized software stack. By cross-platform we mean it should run on anything from a laptop to an HPC cluster, just as Bart mentioned. It should be ready to use, so it's just mount-and-go; there should be no effort in getting this thing installed or anything.
It should be optimized, so it should make the most out of your hardware: it should recognize which CPU architecture you are using and do something sensible for that architecture. And of course, it should be a stack of scientific software, because that's what we want to run, right? So that already ticks two of my boxes. For me, this is an important project: if we get EESSI going, then we're sort of halfway there for my dream.

We're not doing this alone; there's a big group of people involved. You'll recognize some of the Dutch universities, also the University of Cambridge. But we also see cloud vendors that are very interested in this, and big HPC centers throughout Europe are interested in this. So it's definitely an idea that falls on fertile ground. I think what a lot of people here will recognize is that HPC centers spend a lot of time managing their software environments for their users, because it's a complex environment. And of course, the focus on performance is important for us, because we know that users run big calculations, and if they can do this even just 5% more efficiently, that can mean a lot of hardware — and therefore a lot of money — associated with that. So it's important to think about this optimization. At the same time, we live in an increasingly complex world. There's more and more research software out there. At SURF, at least, we know that we get more non-traditional HPC users who often lack HPC experience, especially from the machine learning community — a lot of people, for example, from the social sciences who've never used HPC systems before, but now they want to do some machine learning, they actually need a fair amount of compute, and they land on our cluster. That's a hard user group to serve. And then we also have more and more flavors of hardware out there. All in all, it's a daunting challenge for HPC staff.

Of course, this is not new; it's one of the reasons we're here. We're all using EasyBuild; there are other build tools, we have Spack. And this is basically our effort at sharing that burden a little bit with other people from the HPC support community. We can standardize our build recipes according to some format — it doesn't matter whether that's Spack or EasyBuild — and that allows you to share what you know about how a certain piece of software should be installed with the rest of the community. But each site still installs its own stack. And if you want to do proper software testing, you kind of need to do that yourself, right? So yeah, we share some of the effort, but there's also some effort that we don't share yet. And if you look on the EasyBuild Slack, you'll notice that even if we have an easyconfig, build procedures don't always work out of the box. You might be running on a different operating system than the person who developed the easyconfig, and suddenly you get a problem. That is one of the ideas of EESSI: to take that next step in sharing this burden. Rather than working on build recipes and then doing the builds ourselves on our own systems, why don't we work together and actually build one copy of the software stack, so that everybody can just use that same copy? Also, I think there's a big benefit to end users, not just us HPC support people.
For the end user, if you want to move from one system to another — and this is also something that Bart already mentioned is important in Canada — we get the same request in the Netherlands from the tier-2 sites, that is, the local universities, who often have their own clusters. What we see is that users start there, and when they outgrow their own cluster, they come to us. But it's a lot of hassle, because it means they need to recreate their software environment, and they need to move their data. The data-moving part we cannot solve here, but it's usually the software recreation part that takes a lot of effort: they try to set it up in the same way, but they still get slightly different results, and you have to figure out why. They get a mismatching version, or a configure flag that was slightly different. That's a lot of hassle. It would be much nicer if they could just use the same software environment on any of these systems.

So what is the scope of EESSI? Well, we want to create one shared repository of optimized software installations. We want to avoid, as much as possible, duplication of work between HPC support teams. We want to provide a uniform way of providing software to our end users, regardless of which system they use. It should work on any Linux OS — we've also tried this on the Windows Subsystem for Linux — and possibly we'll make it work on macOS; I don't think that works right now. And it should work on a wide range of system architectures: we cannot know which cluster a user will want to run this on, so it needs to cover a wide range. And we have a very strong focus on performance, and also on automation, testing, and collaboration.

One of the questions that we often get is: portability of software, isn't that solved by containers? It's a question I also get a lot from my colleagues within SURF who work more on, for example, cloud computing; to them, containers are the solution to everything. Well, to me they're not, and there are a couple of reasons for that — there are probably more, I just list a few here. Containers are designed for portability, so yes, they tend to be very portable. But that's also why they are typically built without any hardware-specific optimization: you don't know what hardware your users are going to run them on, so those containers are usually generically optimized. Of course, you could make a different choice — you could build multiple copies of the container, etc. It's solvable, but nobody does it. They're quite large and bulky: containers are easily several gigabytes. So if you just want to use a single small tool, you have to pull in a full container just to use that one tool. That's not very efficient, and not very friendly to your network either, especially if you have a lot of users doing this on your HPC system at the same time. It's a pretty static environment: if you have a container as a user and you think, oh, I need this one additional tool — what are you going to do? Rebuild the container, or pull in another one just to provide that one tool? Not very flexible. And it's a lot of duplication: essentially, each container is a full software stack. Does the software in that container do what you expect it to do? Are you going to test that?
Are you going to test that for all the containers that you work with? Essentially, your testing problem is no longer testing one software stack, but testing many software stacks, which makes the problem even harder. So we had a look around and figured: let's not throw away the good parts. Containers are isolated from the host because they have their own OS. As Bart already mentioned, that is exactly what the compatibility layer also does for the Alliance, and that is also why we use a compatibility layer in EESSI. And of course, we also had a good look at what the Alliance is doing: they have a shared software stack that works across their systems. So we had a careful look at what they do and what we want to do differently here in Europe. They do it for their systems; we want to make it a bit more community-driven, do things like community contributions, etc., and also support a wider range of architectures. I mean, you know what kind of systems you have in Canada; here we want it to work on, well, 'any architecture' is a big word, but a lot of architectures.

So we take a big pot and put some ingredients in. The first one is Gentoo Prefix: that is the compatibility layer. It provides abstraction from the host OS, as a container OS also does. The second ingredient is EasyBuild. It could in theory be any build tool, but we use EasyBuild simply because a lot of the people in the project are familiar with it. This is used to do optimized builds for a large range of hardware architectures, and of course there's a large amount of scientific software supported in EasyBuild. Finally, we need a way to get it to the end-user system, and that's where we use CernVM-FS. And this was also already mentioned yesterday: archspec is a tool to detect the architecture of the machine that you're working on. In EESSI, we use that to determine which copy of the stack we're actually going to serve. We basically have multiple copies of the same software stack, optimized for different hardware architectures, and at runtime, when you mount EESSI, it decides: okay, you are running on a Skylake-based system, so we'll point you to the Skylake prefix with all of the software that's in there.

And this is another, more abstract view of what it looks like. You have a host operating system underneath, right? Then we have the file system layer, which is the CernVM-FS repository that takes care of the distribution of our software stack; the compatibility layer, to provide the isolation from the host; and finally the software layer, which contains all of the end-user applications. Of course, there are some connections there — as Bart also said, there's this gray area when it comes to drivers and MPI. You can discuss where those should live, and maybe you should be able to use the host MPI because you know that performs better. So we're also looking into how to support those kinds of things.

A little bit about CernVM-FS. It was developed by CERN to distribute software on the Worldwide LHC Computing Grid. So actually, the way in which the Alliance and EESSI use CernVM-FS is exactly what it's meant for. It's just a POSIX read-only file system, mounted in user space, and you can basically serve it over the web — it's a web-based file system.
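To make that client side a bit more concrete, here is a minimal configuration sketch for a single machine. The parameter names are standard CernVM-FS client settings, but the values are illustrative and would need to be adjusted per site:

```
# /etc/cvmfs/default.local -- illustrative client settings
CVMFS_CLIENT_PROFILE=single   # stand-alone client: talk to the mirrors directly
CVMFS_QUOTA_LIMIT=10000       # size of the local disk cache, in MB
# On a cluster you would instead point clients at a local Squid proxy, e.g.:
# CVMFS_HTTP_PROXY="http://squid.example.org:3128"
```

After that, something like `cvmfs_config probe` can be used to check that the configured repositories mount correctly.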
There's a strong focus on redundancy and I/O performance: you can set up multiple layers of caching and multiple mirrors. That's also needed in a worldwide computing grid that involves a lot of nodes; they need sufficient redundancy there as well. And a nice thing compared to containers is that this pulls in files as they are needed. So you can pull in a single script from the EESSI repository if that one script is all you need — plus anything from the compatibility layer that the script uses — but you don't need to pull in the full software stack if you don't need it.

Here's a little graphic of what it looks like. The Stratum 0 basically contains the master copy; that is the determining factor for what is in the software stack. The Stratum 1 servers mirror that Stratum 0 and provide redundancy and load distribution. Then there can be several layers of caching: proxies in between, and a local cache on the system itself. That is displayed here, and then you can connect anything to it — an HPC system, a laptop, it doesn't matter.

Then the compatibility layer. We use Gentoo Prefix because it's convenient, and also because we had a good discussion with the Alliance — why not mix? It's basically limited to the low-level stuff: things like glibc are provided by the Gentoo layer. There are a couple of low-level dependencies that we no longer take from EasyBuild but from Gentoo Prefix. We currently support three processor families: ARM, PowerPC, and x86. It basically creates a level playing field for the software layer that is built on top. That's where we optimize for different microarchitectures: we have different prefixes where we optimize for Zen2, Zen3, Graviton2, Graviton3, the different Intel architectures, etc. All of this — everything in the software layer — is installed with EasyBuild. And the best-suited directory for the host is automatically selected by archspec, as I mentioned before. So your end user doesn't have to know what hardware they're running on; they can just mount this and go. It's also very useful if you don't know which hardware you're going to land on: if you have a heterogeneous partition — we used to have that at SURF — it's kind of annoying to get the right optimization for your software. This makes it very easy.

We have a proof of concept going; there's a pilot stack. It doesn't contain a huge list of software, because that was not the point of the pilot stack; it was just for us to learn and to show how this can work. There's a CernVM-FS Stratum 0 running in Groningen, and we have four Stratum 1 servers. Then we have a couple of software packages, just to illustrate how it works. The hardware targets that we support are listed underneath here: basically the most recent generations of Intel, AMD, and ARM.
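As a quick illustration of the archspec-based selection mentioned above: the snippet below uses archspec's real host-detection API, but the prefix layout at the end is a simplified, hypothetical stand-in for the actual EESSI directory structure.

```python
import archspec.cpu

# Detect the microarchitecture of the CPU we are running on.
host = archspec.cpu.host()
print(host)         # e.g. 'skylake_avx512' on a Skylake-SP node
print(host.family)  # e.g. 'x86_64'

# EESSI-style idea (hypothetical path layout): pick the software prefix
# that matches the detected microarchitecture.
software_prefix = f"/cvmfs/<repo>/software/{host.family}/{host}"
print(software_prefix)
```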
Then this MultiXscale thing that you might have heard about in the hallways — it was already mentioned yesterday. MultiXscale is a European project, a EuroHPC Centre of Excellence, focused on multiscale modelling. We have a 6 million euro budget, spread across 13 sites, and it's a four-year project; we just started this year. It's essentially a collaboration between the CECAM network — the centres for atomic and molecular simulations, who do a lot of scientific software development as well — and several partners in EESSI.

We have five main work packages, three of which are scientific — they are really doing code development for this multiscale modelling — and two of which are technical, basically about developing and supporting EESSI. Through that, we want to facilitate the scientific work packages. So what do those technical work packages look like? Well, the first one is really about developing the EESSI software stack: developing additional features, making it more mature, adding new architectures, et cetera. The second work package is more about maintaining and monitoring the quality of the software stack — making sure that the performance today is the same as the performance last week — and about processing community contributions that come in.

Some key benefits of the MultiXscale project to EESSI: MultiXscale has dedicated funding; EESSI does not. We used to just work on EESSI in our spare time, on our Friday afternoons, whenever we could find some time. Now we actually have dedicated resources to work on EESSI. The nice thing is also that the project plan we wrote for MultiXscale gives a bit of a roadmap, a bit of direction, to the EESSI project, because we know what MultiXscale will be working on and focusing on. Also, in MultiXscale we have this connection with the scientific work packages, and these are basically end users of EESSI, right? That gives us a nice feedback loop: we can talk to these people and ask them what works for you, what barriers are you hitting when you're using this? We also want to stimulate making EESSI available on more clusters, definitely within the consortium — we have a couple of HPC centers there that will make an effort to make it available. We'll also start conversations with the admins of the EuroHPC systems, to see if we can make it available there and what we would need. And finally, we'll provide training: for admins who want to roll this out on their HPC system — what do you need to do to make sure that this actually performs, also when a couple of hundred nodes are using the software stack — and also for end users: how should you use this as an end user?

So then a little bit about what we're working on today and in the near future. I'll start with the file system layer. On that part, we're trying to get a new Stratum 0 server in place — a physical one this time, so we can actually plug a YubiKey in. This is one of the ways in which we want to improve security, because if you can deploy something on the Stratum 0, you can deploy it to a lot of systems in the world. So that's a security thing you want to think about: we need to properly secure that. It's actually also a prerequisite from the CernVM-FS developers if you want to be part of their default set of configurations. We want to do a good job before we ask them whether we can be included in that default set of configurations. And that will also help us create more impact, because if we are in the set of default configurations for CernVM-FS, anyone who installs the CernVM-FS RPM or Deb package will also get access to the EESSI repository.

In terms of the compatibility layer, Bart already hinted at some collaboration here. We've been trying to create a new version of the compatibility layer and ran into issues along the way.
There's a script in Gentoo Prefix that does this installation of the compatibility layer, and it was giving some issues for x86 and for ARM. That now works, as far as I understand — I've not been working on this myself, so some of the details are not known to me. But I do know that this also comes from collaboration with the Alliance: Bart actually looked at the x86 problems, and Bob from EESSI looked at the ARM problems. So even if we don't use the same compatibility layer, at least we can benefit by working with the same tool. Yeah, just to repeat for the audience back home: Bart remarked that it's also a collaboration with the Gentoo developers themselves. We have a good connection with them, both the Alliance and EESSI.

The big, big push of the past months has been the way in which we process community contributions. This slide was our aim: we wanted to build this type of pipeline, where we have a contributor who says, okay, I want to deploy a certain software package in EESSI. Then what kind of review process does that need to go through? How do we build that software for the different architectures, etc.? Let me go through it step by step, because that's where most of the development has been. The idea is that the contributor first makes a pull request to the software layer repository and says: I want to add this package, this easyconfig. Then there's a reviewer, on the bottom left, who checks it — does it make sense, is there nothing weird here? Then they can add the build label to the PR, and it gets picked up by a bot running on AWS. This runs in a virtual cluster on AWS and spins up a lot of different types of nodes with different architectures, and those actually do the builds of the software on those different architectures. The results get packed into tarballs. That is basically where the first step ends, and the bot reports back to the PR and says: okay, I successfully built this tarball for this particular architecture.

In the next step, the reviewer can have a look again: does it still make sense, is it still okay? Then they can add the deploy label, which means the tarballs get uploaded to an S3 bucket. Again, the bot reports back about this step: I did the uploading, it's now there in the S3 bucket. The next step is a cron job running on the Stratum 0 server that is polling the S3 bucket for new tarballs. As soon as there is a new tarball, it downloads it to the Stratum 0. It will not immediately be deployed in the CernVM-FS repository — so it will not yet be publicly available; it will just sit on that server. The bot will also create a PR to our staging repository. That again gives a reviewer a chance to look at it: it lists which modules are going to be added by this PR. And if the reviewer says, okay, this is fine, they just merge the PR, and the cron job on the Stratum 0 sees that this PR is now merged and can actually ingest it into the CernVM-FS repository on the Stratum 0. Then it becomes available to all of the users who have access to the EESSI software stack.
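As a rough sketch of that last step — the Stratum 0 cron job polling S3 and ingesting tarballs — consider the following. The bucket name, key layout, and the omitted merge check are hypothetical simplifications; the actual EESSI bot and ingestion scripts differ in detail:

```python
import subprocess

import boto3  # AWS SDK for Python

BUCKET = "eessi-staging-tarballs"   # hypothetical bucket name
REPO = "pilot.eessi-hpc.org"        # CernVM-FS repository to ingest into

s3 = boto3.client("s3")
for obj in s3.list_objects_v2(Bucket=BUCKET).get("Contents", []):
    key = obj["Key"]
    if not key.endswith(".tar.gz"):
        continue
    local_path = "/tmp/" + key.replace("/", "_")
    s3.download_file(BUCKET, key, local_path)
    # In the real workflow, ingestion only happens after the corresponding
    # PR in the staging repository has been merged; that check is omitted here.
    # 'cvmfs_server ingest' publishes a tarball into a repository; exact
    # flags may differ between CernVM-FS versions.
    subprocess.run(
        ["cvmfs_server", "ingest", "--tar_file", local_path,
         "--base_dir", "/", REPO],
        check=True,
    )
```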
Another thing we've been working on is GPU support, specifically NVIDIA GPU support. There are a couple of challenges here. First, we have to deal with the end-user license agreement of NVIDIA, which does not allow us to redistribute everything that would be in a normal EasyBuild CUDA module. What we do there is we have a script that checks, against the EULA, which files we are allowed to redistribute. Anything we're not allowed to redistribute is replaced by a symlink, which points to a special directory in EESSI called host_injections. That is basically where you, as a system admin, put anything that should be picked up from the host system. So you can put a full CUDA SDK installation there, and then it gets picked up from the system. The second challenge is that if you use a very new CUDA library, it doesn't always work with all drivers. Of course, on the EESSI side, we don't know which GPU drivers are being used on a certain host, and we cannot really help that either. But we can make it easier for the host system to support newer versions of CUDA: you can install compatibility libraries that increase this compatibility range a little bit. It's still not unlimited, but it's a somewhat larger range, and we provide a script that makes that easy for you. Again, this is installed in the host_injections directory. The third challenge — one we haven't solved yet, that's still a work in progress — is that EESSI actually uses a build container, for technical reasons that I won't go into. In that container we have to mount the CUDA drivers, and with Singularity those end up in a non-standard location. That's currently giving some installation issues, so we need to figure out how to deal with that.

Software testing, also mentioned before: we use ReFrame — we had a nice presentation on ReFrame yesterday. One feature that was mentioned is that you can now specify which features a certain partition supports. That feature request actually came largely from us, because we basically want to separate anything that is system-specific into one configuration file, the ReFrame configuration file. The tests themselves should know nothing about the system, because we, the test developers, know nothing about the host system that this is going to run on. They should look at the configuration file, make some decisions, and do something reasonable based on that configuration file. Very simply put: if the configuration file says this is a partition that only has CPUs and no GPUs, you should not generate the ReFrame GPU tests. Similarly, you might have an application that is pure MPI, where it typically makes sense to just fill the node with MPI ranks, one rank per CPU core; the test can take the core count from the ReFrame configuration file and fill the node up. These kinds of standard things are being done here. We created one blueprint test based on GROMACS, because GROMACS supports both CPUs and GPUs, so it was a good use case, and it's something we know very well. We try to make this a blueprint that we can use later to create more portable tests. So all of the system-specific parts have been taken out of that GROMACS test, and we tried to generalize it as much as possible, with reusable components that can also be used in other tests — for example, this 'launch one rank per core' logic can just be a standard function that is probably useful for more tests.
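To give an idea of what such a portable test looks like, here is a deliberately minimal ReFrame sketch — not the actual EESSI GROMACS test — that only relies on information declared in the site's ReFrame configuration file (partition features and processor core counts):

```python
import reframe as rfm
import reframe.utility.sanity as sn


@rfm.simple_test
class PortableMPIHello(rfm.RunOnlyRegressionTest):
    # Select partitions by declared feature, not by system name:
    # this test runs on any partition that advertises the 'cpu' feature.
    valid_systems = ['+cpu']
    valid_prog_environs = ['*']
    executable = 'echo'
    executable_opts = ['hello from a rank']

    @run_after('setup')
    def set_tasks(self):
        # 'One MPI rank per core': take the core count from the ReFrame
        # configuration instead of hard-coding anything system-specific.
        self.num_tasks_per_node = self.current_partition.processor.num_cores
        self.num_tasks = self.num_tasks_per_node

    @sanity_function
    def check_output(self):
        return sn.assert_found(r'hello from a rank', self.stdout)
```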
And we actually succeeded in making this portable: right now, I can run the same test suite on my own local module stack. That's also built with EasyBuild, so the naming is similar; that helps. But it shows some of the portability here. And Bart, I think if you guys want to use this, it will just work — at least, that's what it should do.

Question in the back. Yes. So, this was a good question in the back: the remark was that ReFrame tends to also be used to test performance. For now, we're only looking at functionality. We also want to do performance testing, but that is even more difficult to do in a portable way, right? Because what is the expected performance of a GROMACS run on a system you don't know? That's going to be a hard problem to solve. So either we do something to make it easy to point to a local database which contains the reference numbers for what the performance of a test should be, or we make an educated guess based on the hardware that is there. That would be another way to go, but then you still need pretty wide acceptance thresholds. That is something we haven't solved yet. For EESSI itself, we'll probably have some fixed test infrastructure where we actually know the expected performance, so we can still test the EESSI software stack in general. But if you, as an end user, want to check whether the EESSI software stack actually performs as expected on your HPC system — yeah, you'll have to put in the reference numbers yourself, one way or another.

All right, so some future activities. The bot is not finished; we can definitely do some refinements there. One of the things is that if one of the architectures fails the build — say you have seven tarballs and one is missing — you want to be able to re-trigger just that one build, and that is currently not possible. Also debugging: right now it's kind of a black box. If a contributor says, I want to build this easyconfig, the build system gets launched and starts building the tarballs; if something fails along the way, you don't have any way of getting to the log file or inspecting the files that are there — because, of course, we don't want those build nodes to be completely publicly accessible. So what we'll probably do there is provide a downloadable container that gives you the environment that was used for building. You can see all of the logs that were there at the time the build failed, and see if you can get it going from there or figure out what the issue was. Also, the software testing is not yet integrated as a step in this workflow, but that is something we'll add. Basically, we want to launch ReFrame tests right after the build, for single-node tests; and then in the staging step, we probably also want to run multi-node tests to check that it works before we actually deploy it in the production environment. For the test suite, we're looking to add more low-level tests as well — something like the OSU benchmarks would definitely be useful, because if your GROMACS test fails because of some communication issue, it's nicer if you can just check the communication separately. And now that we have a sensible blueprint, we also want to expand the number of high-level application tests that we have.
As I mentioned, the other thing we want to look at is how to do this portably for performance; that is not a solved problem, but we'll just try to see what works. We'll also be looking to expand hardware support. Right now we're working on NVIDIA GPU support — I think that's the sensible start — but we also want to look into supporting other GPUs. And in the more distant future, we'll be looking at RISC-V support. We will also give trainings; again, this comes from the MultiXscale project, which has a training work package in it. The first end-user training will actually be this May already, at the event that was mentioned earlier, organized by HPCNow!. And we'll also develop training material for system administrators who want to host the software stack on their own HPC system.

Further, we want to support extending the EESSI software stack with a local stack — either on a local file system, or it could be your own CernVM-FS repository. This can be useful for proprietary software, and it enables faster deployment. The nice thing about EESSI is that it's going to have a pretty good QA procedure. What will be less nice is that this takes a bit of time: a reviewer needs to look at it, and if it poses issues, if some test fails, you need to look into it and solve it. Maybe you just want to help your user tomorrow; then it's nice if you have a local environment where you can deploy it regardless of the quality assurance, and go through the QA of EESSI later to make it available to everybody. Or maybe you just want to support a local software developer who wants to deploy development versions of their software on your local system — that might be fine, but you don't want to push that into the general EESSI repository. In MultiXscale we'll also explore some use cases where we want to see how we can use EESSI in a CI environment. If you want to do CI on your own software, you first have to install all of your dependencies; with EESSI that's actually quite easy, because you just mount it, so you can have all the dependencies available in less than a minute. So we think it can be helpful in a CI environment.

And I think, yep, that wraps up the talk. I still have a demo; we have time. So basically, if you want to install it: if you have a VM or whatever, you just install CernVM-FS, and then there's a second step to install the EESSI configuration. That will eventually be one step: as soon as we become a default configuration for CernVM-FS, installing CernVM-FS should be enough. Then there's a step to activate the environment — you source a bash script, and that makes this module environment available to you — and then you just load whatever module in the environment you want to use. In this case, I want to show you a two-node run on our supercomputer, where we have the EESSI environment available. We actually did not put proper caching in place there yet. It should not be a problem right now, because it's actually cached locally on the node, but the first time I'm on a new node and I do a 'module avail', it takes quite a long time, because there's no proper caching in between. That's one of the things we want to explain in the sysadmin training: how do you set that up, how do you make sure that this feels like a local file system or a network file system?
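Summarized, the steps from the demo look roughly like this, following the EESSI pilot documentation at the time; package and path names may have changed since, so treat them as illustrative:

```bash
# 1. Install the CernVM-FS client (RPM-based example)
sudo yum install -y cvmfs
# 2. Install the EESSI CernVM-FS configuration package
sudo yum install -y cvmfs-config-eessi
# 3. Activate the environment by sourcing the init script
source /cvmfs/pilot.eessi-hpc.org/latest/init/bash
# 4. Browse and load software from the stack
module avail
module load GROMACS
```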
So what you see here on top: I did an htop command on both of the nodes that are involved in this job. This will take a minute, so if there are questions, we can cover the questions now, and then hopefully by the end we'll see the results. Yes. Wait, do we use the microphone, or do I repeat? I'll repeat. Okay. So right now it's just two handfuls of software, that's it. It's not at all about completeness yet. We first need to have this proper community contribution procedure in place, and it needs to be pretty well automated, because we expect more and more contributions to come in, and if you start too early, you'll just be working on processing those community contributions rather than working on the automation. So we want to make sure that we're ready to process a larger volume of these. Also, one of the things we're still deciding on is how much we put in the compatibility layer. That takes some experience, to figure out what works and what doesn't. You don't want to make these bigger changes later on, when you already have an enormous software stack deployed and a lot of people relying on it. Now it's much easier for us to just decide: okay, the next iteration of the compatibility layer is going to look completely different. Nobody cares; they know it's a pilot.

'How are you going to organize support? As a central organization?' True. So the question is: how will we organize support for this, right? Because the local site knows their system, but they don't necessarily know how the software was deployed, and the other way around: the person who deployed the software in EESSI doesn't know the local system. Yeah, that is tricky, that's true. But I think we are, in the end, one community, and the way we have to do this is to keep talking to each other, if we all work on the EESSI software stack. So yeah, the question was on support, right? In the coming years we have the MultiXscale project, and our primary focus for support will be the scientific work packages within the project. But any time we have left, we're more than willing to spend on the community, because that's also where we learn — we learn what works and what doesn't work. But yeah, we'll have to talk to the sysadmins to understand the local system.

So the next question is: are the packages going to be exclusively installed by EasyBuild, or is this going to be our last EasyBuild User Meeting? I don't know if it's going to be exclusively EasyBuild. For the foreseeable future, I think so — again, since that is what most of us in the EESSI project have experience with. But if there's somebody who says, look, I have Spack experience, I think EESSI is great, but it should support Spack — yeah, we're very open to having that discussion and seeing how we can support it. I don't think EasyBuild will ever go away; I don't see that happening. And no, EESSI is not replacing EasyBuild. Indeed, as I said, you might have reasons to still install stuff locally: if you want to tune installations, if you want slightly different versions, if you want a development version. There are still plenty of reasons to want some form of local stack, and there you probably still want to use EasyBuild as well. If you already have everything in your local stack, there's no added benefit, right? But that means you have to maintain a pretty big stack.
I think the benefit of EESSI is that all of the standard stuff you can probably use from the EESSI software stack, and if you want something very specific, you'll probably install it locally. Yeah. This is too long a story to repeat. So, one of the issues we currently see a lot with EasyBuild is that somebody comes up with an easyconfig file, and it works for them. Maybe it even works on another system they tested it on. On the third system, it doesn't work, right? Because /tmp is mounted differently or isn't big enough, OS dependencies are missing, or there's extra stuff installed in the OS that gets accidentally picked up. All of these differences in the build environment cause trouble. In EESSI, that's much more controlled, because we have a very controlled build environment: we're basically all working in the same operating system, to some extent. So I think that's going to make sharing installations and getting stuff to work a lot easier. And once someone has done it once, the other people just use it, right? They don't need to reinstall and make sure it works for them. That's a key difference.

Yeah, so a remark from the audience is that it will work the same way everywhere — it's the same physical installation, right? This is 99% true. There are still different optimizations, and we have seen in the past — not within EESSI, but in other cases — that optimizations can sometimes change the runtime behavior of your code. Some bugs are specific to an AVX-512 optimization, for example. So this can still happen, but in principle, yes, they're built in exactly the same way. Numerically — yeah, it's going to happen the same for everyone, that's true. If you run it on the same hardware, and then you give it to your colleague and they run it on the same hardware, it should run the same.

This is a difficult one: when is it done enough? That's always a challenge. For playing with it, it's definitely good enough. So the question was: is it already worth trying to see whether this works on my system — in this case a particular system with some security measures in place — does it work there? Yes, it's definitely worth giving it a try already. It's not something that I would just hand to users and say: this is production-ready, go ahead, the module is there, have fun. But if you have more technically interested users, who are into software development themselves and like something new and want to give it a try — sure. Just not for regular production runs yet. There's a getting-started part in the documentation that explains how to get access — either a native installation of CernVM-FS or using our EESSI client container — and then how to run the demos, very close to what I just showed. It's definitely ready for that. And yeah, please try it out and let us know if you run into any problems. Any problems that are raised now, we can be aware of and try to fix in the next iteration. That's sort of the reason we have this pilot repository — well, there are multiple reasons. One is to show to ourselves that we can do what the Canadians have been doing. Another is to let other people play with it. But it was also a way to actually get funding and tell whoever was listening to us: we can do this.
We have a handful of software in there, and getting more software in is not going to be the biggest problem for the community once the automation is there. — This is an excellent question. So, policy is something that we still have to decide on. Oh, sorry, repeat the question, yes: the question was, for community contributions, do we require a ReFrame test for all of them? In an ideal world, I would say yes, this is a hard requirement. But that also makes it a very big barrier for people to contribute. They certainly need to know EasyBuild, because they need to create an easyconfig if it's not there yet; then they would also need to know ReFrame, and they would need to know the particular way in which we do portable testing, which is not standard. So I can imagine that's quite tricky. I think it's more realistic to say: we'll try to stimulate it as much as we can, and make it very transparent which software is actually covered by ReFrame tests and which isn't — make kind of a support matrix and visualize that. I think it will go in that direction, but the jury is still out on it.

There's a question in Slack as well — it's a funny one, because it's from Thomas, who's involved himself. He's asking: there are lots of development activities, but how about activities to put EESSI to use — any plans on rolling it out to end users? Good question. Any plan? Well, 'plan' — in Ghent, EESSI is already mounted, so it's available if people know it's there to play with. And I think that's the case on some Norwegian systems as well, and at SURF too. If you know it's there, you can play with it. You can probably ask support questions, and they're going to be annoyed if you do. We actually also have a cloud environment at SURF, and we're going to roll it out there as well: there you will just get a template VM that has the EESSI stack mounted, and you can give it a try. Personally, if I see a user come along on our system who I think is technical and could find this interesting, I would like to point them to it, have them give it a try, and see how it goes. And I think we should also organize — because we claim that this makes it much easier to move from one system to another — we should really try to test that: ask a user, take your use case on your local HPC cluster, and see how much time it takes you to move to our system, with and without EESSI. So that is something I have in the back of my head, but I still have to find the right user.