Okay, we are back, so we'll show a quick demonstration of nvidia-smi and its output. Here I ran `srun --gres=gpu nvidia-smi`, and we see this output. It might look a bit cryptic: it's the output of nvidia-smi, the monitoring tool that NVIDIA provides, and it shows you what's on the node. You can see here that there's a GPU 0, which is one of the P100 GPUs. Its memory usage is at 0, and its volatile GPU utilization, meaning how much it is computing, is also at 0, because this is the GPU that has been reserved for this job. Underneath you can see that there are no processes running. Well, there are no processes running because the job is only running nvidia-smi, which doesn't actually utilize the GPU. But you can see that this one GPU has been reserved for the job.

These nodes, however, have four GPUs. Whenever you submit a job, you get your own GPU, which you see as device 0. If you ask for two GPUs, you will see them as devices 0 and 1. Either way, you get a GPU that is only for you: you only see your own processes on it, and you can watch what's happening. While the job is running, you can also take an SSH connection to the node and use nvidia-smi to see how the GPU utilization fluctuates. It usually jumps from 100 to 20 and back, especially if you're not utilizing the GPU properly.

Basically, this is just to demo that you get the GPU when you ask for GPUs. If you try to run nvidia-smi without requesting a GPU at all, you end up on a non-GPU node and it won't even find the command. And if you run it in the GPU partition without specifying `--gres`, as Richard does here, it will say that no devices were found, because, well, you're not asking for a GPU device.
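The commands discussed above can be sketched roughly as follows. This is a sketch, not a definitive recipe: the partition name `gpu` and the node name are assumptions, and your cluster's partition and GRES names may differ.

```shell
# Request one GPU; nvidia-smi then shows one reserved device (device 0).
srun --partition=gpu --gres=gpu:1 nvidia-smi

# Request two GPUs; they appear as devices 0 and 1.
srun --partition=gpu --gres=gpu:2 nvidia-smi

# While a job is running, SSH to its node and watch utilization fluctuate.
# (Replace <node-name> with the node your job landed on, e.g. from squeue.)
ssh <node-name>
watch -n 1 nvidia-smi

# Without --gres, no GPU is allocated, so nvidia-smi finds no devices
# (and on a non-GPU node the command may not exist at all).
srun --partition=gpu nvidia-smi
```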
But basically, just by adding `--gres`, you get a GPU device. Okay, let's move forward. Yeah, so next up is parallelism.