Okay, so this was embarrassingly parallel: each of these tasks was completely independent of the others. But when people mention parallel computing, they often mean programs that use more CPUs at the same time, and this is what we said at the start of the day: shared-memory parallelization is parallelization that happens within one computer. So one computer, multiple processors, multiple processes running on those processors, that sort of thing. You'll hear words like threads and processes; these are all shared-memory parallelism. And it's very easy to run this in the queue: you just need to specify CPUs per task. Don't worry about what a task is, we'll talk about that later. Nothing else is needed, you just specify the CPUs per task for your job and you get multiple CPUs. So maybe we should just jump to the first example. Is it this one? Yes. So it's pi.py, which can run with multiple processes, and there's an example of how to do it, first with srun and then with sbatch. It's quite simple: you ask for multiple CPUs, and then Slurm knows that this program needs two CPU slots. But the important thing here, and you'll notice this when Richard runs it, is that he needs to tell pi.py itself to use multiple processes, because the program needs to know how many processes to start. Otherwise you can get a mismatch: the program might start too many processes or too few. You want the numbers to match. If you have reserved two CPUs, you want there to be two processes running on those CPUs, because otherwise they will have to keep swapping in and out, and that creates overhead.
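A minimal sketch of that idea with srun (pi.py is the course example script, but its --nprocs flag is an assumption here; your program may take the process count some other way):

```shell
# Ask Slurm for 2 CPUs for this task, and tell the program the same number.
# The key point: the reservation and the program's process count must match.
srun --cpus-per-task=2 python3 pi.py --nprocs=2
```

If the two numbers disagree, you either leave reserved CPUs idle or oversubscribe them.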
A common problem: someone takes a program designed for a desktop computer, it sees that it's on a node with 40 CPUs and tries to use all of them, but it has only been allocated two of them, so it's super slow. Yeah. So let's try running this. It's basically the same thing as before, but now we're using two processors. Okay. Note that the time option takes an equals sign. So that was pretty fast, and it even said it's using two processes. An important thing, especially when you're running these multi-process jobs, is to monitor the efficiency. We talked about efficiency yesterday: you can run the seff command on that job ID. We used seff previously to check this, and here we see the CPU efficiency. Can you increase the number of iterations a bit, because it was so fast? Let's see. It says 50%, so let's increase the iterations; I'm making it ten times more. Yeah, so it takes a bit longer. Okay. If you now run seff for that job: now it's 80%. So in this toy problem you need a larger sample size, because the startup of the whole parallel machinery takes some time, so it's not getting full 100% utilization. It's not using all of the processors all the time, but that's to be expected. If you want to quickly show the Slurm script, there are a few things to mention. There's this extra line here, srun with --cpus-per-task in the middle of it. So if you're using srun in your script, for example to get the extra monitoring information for each job step, put this there so that the srun step gets all of the CPUs. Slurm has this feature that you can give different job steps different numbers of CPUs, so you could launch one program that uses three and one that uses four, or something like that. But usually you don't want to do that; you want to give all of your resources to your program.
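A sketch of what such a batch script might look like (the time and memory numbers and the pi.py --nprocs interface are assumptions for illustration; SLURM_CPUS_PER_TASK is the environment variable Slurm actually sets from your reservation):

```shell
#!/bin/bash
#SBATCH --time=00:10:00
#SBATCH --cpus-per-task=2
#SBATCH --mem=500M

# Pass the allocated CPU count both to srun (so the job step gets all the
# reserved CPUs) and to the program (so it starts a matching number of
# processes). This keeps Slurm and the program on the same page.
srun --cpus-per-task=$SLURM_CPUS_PER_TASK python3 pi.py --nprocs=$SLURM_CPUS_PER_TASK
```

After the job finishes, running seff with the job ID shows the CPU efficiency, i.e. how busy the allocated CPUs actually were.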
So either you don't use srun, or you use srun and add that option. Don't worry about why it is documented this way; there are technical reasons. But it's usually a good idea to just add it, to make certain that Slurm is on the same page as what you're trying to do. Okay, is there anything else we need to talk about here, or should we go? I think we can go to the exercises. And if you have questions like "is my code parallelizable" or that kind of thing, put them in the notes now and we can discuss them after the exercises. So our upcoming plan: 15 minutes for exercises, then a 10-minute break. Then we have a guest presentation for half an hour or so, and then we'll come back and continue with more stuff. Does that sound correct? Sounds about right. Okay, so we'll head off to the exercises. I'll scroll down to show them, and we'll write it in the notes. Okay, great. See you soon. Bye.