[Inaudible.] In CSC, as was just mentioned, they have billing units that basically tell you how much virtual currency you have for running things on the machine. At Aalto we don't have that kind of cost attached to computation, but there is a kind of priority cost in Slurm itself. Basically, if you run stuff in the queue, it lowers your priority so that other people can run their stuff as well. The way you usually want to do it is to make your resource request match what you actually need. While your job is queuing, Slurm assumes you will use all of the resources you are requesting; but after the job has finished, your cost is calculated based on what you actually used. So let's say you submit a 10-hour job on 100 CPUs: while that huge job is running, your priority drops, because you are already using a lot of resources. But if the code finishes in ten minutes, or in an hour when you requested five days, your priority will jump back up. So you get this wave motion in your priority calculation. If you are not utilizing the resources efficiently, because your allocations are not specified correctly, you will be penalized: you get less stuff through the queue. In short, the more you run, the less you can run in the near future. That makes a dynamic system that tries to balance everything out.
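As a sketch of that advice (the `#SBATCH` flags are standard Slurm, but the job name, limits, and program are placeholders you would adapt to your own workload), a job script whose requests match what the job actually needs might look like this:

```shell
#!/bin/bash
# Example Slurm job script: request only what the job actually needs,
# so the priority cost charged while queuing matches real usage.
# (Job name, limits, and program below are placeholders.)
#SBATCH --job-name=myjob
#SBATCH --time=01:00:00     # realistic runtime estimate, not "five days just in case"
#SBATCH --cpus-per-task=4   # only as many CPUs as the code can actually use
#SBATCH --mem=8G            # measured memory need plus a small margin

srun ./my_analysis          # placeholder for your actual program
```

After the job finishes, `seff <jobid>` (or `sacct -j <jobid> -o Elapsed,MaxRSS,AllocCPUS`) shows what was actually used, so you can tighten the next request.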
And in the end, the more efficient your stuff is, the more you can run in the long term, something like that. Yeah. Okay. But do leave us feedback. Hopefully you enjoyed the course, and hopefully you get your stuff running. It was great sharing this time with you; hopefully you're inspired about the kinds of things you can do. And maybe the most important thing to take away is that we are here for you. You're not alone. Everyone is somewhere on the spectrum of scientific computation. There is a Finnish saying, "kukaan ei ole seppä syntyessään": nobody is a smith when they are born. Nobody is a professional when they start something; nobody becomes a master of things immediately. And that's why we are definitely no masters of the universe either. So it's important to realize: okay, I'm here, I can do this, I want to move in this direction. Make use of the resources available to you, so that you get the best out of the systems and learn the most while you're working on your project. That's what a university is for; you're supposed to learn things here. So try to use the resources and the materials for the best experience. Well, let's see, are we getting any more questions coming in? Ah, here's a question about an srun command; I think this is the University of Helsinki GPU partition. I think you are asking for the partition, but you're not asking for the GPU resource itself, so I would guess the error comes from that. But the last line makes me think that the nvidia-smi command cannot be found; the call to nvidia-smi itself doesn't work somehow. This is something you should discuss with the support staff at Helsinki. We have tried to make the course a bit general.
But of course we see it from our own point of view, because we deal with our own system constantly, so we don't always know how everything looks in other places. The general rule is that you usually need to request these generic resources, either with the `--gres` flag or, maybe in the future, with the newer `--gpus` flag in Slurm. Basically, you need to request the resource. Then there are site-specific details that are, of course, different for each site. But anyway, if you felt like you understood what we were talking about, it's a good idea to look at the documentation for your own site, even if your site is outside Finland in a completely different place, and try to find the general information about how these things are usually handled there. The flags might change, the names of the systems might change, but usually the ideas are very general. And as we said earlier, once you get into the habit of using these HPC systems, the sky's the limit: you can start moving in different directions, using different systems and translating your code to them. It might take some time to get the correct flags set up for each system, and you might keep different scripts for different systems, but once you know how to do it on one system, you can translate that knowledge to other places as well. I just got some pictures of my setup here. Okay, anything else in HackMD? Okay, well, maybe we should say goodbye, and thank everyone again for attending. We should also thank all of the instructors from days one and three: Jussi Enkovaara, Enrico Glerean, Samantha Wittke. Who else was there? Okay.
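To make the GPU point concrete, here is a minimal sketch of a batch script that requests the GPU itself rather than only the GPU partition (the partition name `gpu` is a site-specific placeholder, and newer Slurm versions also accept `--gpus=1` in place of `--gres=gpu:1`):

```shell
#!/bin/bash
# Minimal GPU job sketch: the key point is requesting the GPU resource,
# not just the partition. The partition name is a placeholder for your site.
#SBATCH --partition=gpu
#SBATCH --gres=gpu:1       # request one GPU (newer Slurm: #SBATCH --gpus=1)
#SBATCH --time=00:10:00

nvidia-smi                 # should now list the allocated GPU
```

If `nvidia-smi` still cannot be found even with the GPU requested, that usually means the job landed on a node without the NVIDIA driver in its path, which is a question for your local support staff.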
Yes, and also remember to check out our affiliates like CodeRefinery and CSC and all of their training material. If somebody explains a thing one way, somebody else can explain it another way that suits your style of learning better. So keep that in mind and look for information from our affiliates and the people we work with, because they do good work. Okay, I guess we can hang up. So see you all at our next course, or in the garage, or maybe never. Bye.