Hello, we're back. So Seema, what's the plan for the rest of the time? Yeah, we'll go through MPI very quickly, because we were talking about it during the breaks, and MPI is important. For example, on the LUMI supercomputer, if you want to leverage the thousands of CPUs and thousands of GPUs, you really need to use MPI codes. But there are a lot of people who don't do that, and which things you need to learn depends on what kind of problems you're trying to solve. So you should know that MPI matters if you're planning on doing this kind of massively parallel computation, and then you should learn about it. But I think it's good for everybody to learn a few things, so that you don't mix it up with shared-memory parallelism. That's what we're going to do now: quickly remind ourselves how MPI programs are constructed, and then run a simple MPI example. If you're interested, I recommend checking the full tutorial, and there's a lot of other material as well. And if you have any problems with your MPI codes, you can come and ask us. But let's go through this quickly, because then we can get to the GPU part, and I would guess a lot of people have been waiting for the GPU part. I know that everybody's tired, and sorry about that, but let's just quickly check this.

So, MPI. MPI is a collective thing, like we said previously: MPI programs are designed around MPI from the start. If your program isn't written to use MPI, it can't take advantage of it. MPI is a set of libraries that allow different tasks, possibly on different nodes, to communicate with each other. And the word "task" is the important part here. When we allocate, say, 20 CPUs for MPI work, Slurm thinks of these individual workers as tasks. You saw "CPUs per task" previously, and you might have wondered why it's per task, and what a task even is. Well, a task is an MPI task, because Slurm has really been built around MPI; the whole queue system was designed so that it can run these massively parallel computations. That's why everything is a task. But most jobs have only one task, that is, one MPI task. So if you don't use MPI, don't worry about tasks: you will always want exactly one. If you do use MPI, then instead of reserving CPUs you reserve tasks, and each task gets its own resources and they work collectively.

So we can look at the example here. Yeah. And I realized that this picture isn't quite right: there would be seven separate boxes in each of these nodes, with arrows between all of them. But okay, more like this here. Are we looking at the example? Yeah, let's quickly look at the example; scroll a bit down. So first: MPI programs usually need to be compiled. There are wrappers like mpi4py for Python, MPI.jl for Julia, and snow for R that let you run MPI codes from higher-level languages without thinking about the compiling. But this small example compiles a C MPI program. So go to the slurm directory; I think you need to be there.
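To make that structure concrete, here's a minimal sketch of what an MPI program looks like in C. This is not the course's pi example, just a generic illustration (the file name hello_mpi.c is made up). Every MPI program follows this pattern: initialize, ask for your rank and the total number of tasks, do your share of the work, finalize.

    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char **argv) {
        int rank, size;

        MPI_Init(&argc, &argv);                /* start the MPI runtime */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* which task am I? */
        MPI_Comm_size(MPI_COMM_WORLD, &size);  /* how many tasks in total? */

        /* each copy of the program runs this independently */
        printf("Hello from task %d of %d\n", rank, size);

        MPI_Finalize();                        /* shut the runtime down cleanly */
        return 0;
    }

When Slurm starts two tasks, both run this same binary; they differ only in the rank they are given, and that rank is how the work gets divided between them.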
Or you can, yeah, set the paths differently. So now load this OpenMPI module. On other clusters you might have different versions of MPI installed, but this one is linked against all the Slurm things, so that Slurm knows how to allocate all kinds of stuff. So yeah, now try mpicc. I don't know which module you had loaded previously? Oh, that's a good question. Yeah, it doesn't really matter. And is mpicc some wrapper around GCC or some other compiler that adds in the MPI libraries? Yes.

And now you can run it with multiple tasks, if you look at the example. This does the same kind of pi calculation, but it uses MPI. So: one node, two tasks, which means two processes that will be communicating, I guess? Yes. And we'll get the default value for memory; somebody asked in the notes what the defaults are, and they depend on the site, but on Triton you get 500 megabytes of memory and that sort of thing. Should I try timing it? Don't put time there, because I think Slurm would run two copies of it. So when you run these MPI programs, you basically start the same program multiple times, but Slurm handles things so that the programs know about each other. It's a whole mess of technologies; there are descriptions here. But here we get two individual workers working on the same machine, and they could just as well be on different machines: they would still know about each other and could communicate. That's what MPI is used for, these collective codes. But if you're not using MPI, then don't worry about it: always set tasks to one, and don't fret about it too much.

And that took even less time for more iterations. Yeah, it probably depends on which nodes you get; it's C code, so basically all the time here is spent in setting things up and almost none in actually running it.

In the batch scripts you have to use srun here, is that correct? Yes. srun has some magic inside where it says: okay, I know this is an MPI program, I'm going to start two copies of the same thing and tell them how to talk to each other. And this happens magically in the background with MPI, so we don't need to worry about it. Basically, yes. And if the magic breaks, it's on us, basically. (There's a sketch of such a batch script below.) So this is how you run MPI programs. And there are other tools, like the higher-level language wrappers I mentioned, that can still use MPI; you can learn more about them later if you're interested. But for time's sake, let's go to the GPU chapter, and if you have any questions, just put them into the notes. The notes show no questions, so yes, let's go on to GPUs.
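As referenced above, a batch script for the one-node, two-task run might look like the following. This is a sketch: the module name and the binary name ./pi are assumptions that vary by cluster, but the #SBATCH lines are standard Slurm.

    #!/bin/bash
    #SBATCH --nodes=1            # keep both tasks on one node (they could be spread out)
    #SBATCH --ntasks=2           # two MPI tasks = two communicating processes
    #SBATCH --time=00:10:00
    #SBATCH --mem-per-cpu=500M   # roughly the Triton default mentioned above

    module load openmpi          # module name is an assumption; check your cluster

    # srun notices this is an MPI job and launches --ntasks copies
    # of the program, telling them how to talk to each other
    srun ./pi

The program itself would be compiled beforehand with the wrapper, for example mpicc pi.c -o pi. Note that the script runs srun ./pi, not plain ./pi: launching the binary directly would give you a single task that doesn't know about the other one.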