Welcome back. So we have just gone over these examples of interactive jobs, and hopefully in the breakout sessions you were able to experiment with some of them. But now, Seema, would you like to talk about what will come tomorrow?

Yes. Tomorrow we have plenty of stuff ahead of us. All of these next steps are basically variations of the same motif, and the motif is non-interactive use. This is basically why clusters and supercomputers are used the way they are used. If you think about it, you are human: you have human reaction times and human processing capabilities. If you had to manage everything yourself in a terminal window somewhere, taking remote connections and managing different computers, your capacity for that would be limited. It's like being a telephone operator in the olden times, connecting calls by hand: somebody had to sit there doing it. In the modern world that is of course automated, and humans just oversee the whole process. So the work is done in the background, non-interactively, and you oversee the jobs and decide what tasks you want the computers to do.

That is the idea tomorrow. We'll start with serial jobs: you run something non-interactively on some node and read the results afterwards. We will use array jobs to launch a whole bunch of serial jobs at once, for when we want to run the same kind of job with different parameters, for example different seed numbers. Then we will do GPU computing, running these non-interactive jobs with GPUs attached, so that you can access the powerful accelerators that the cluster has.
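As a small preview of the array-job idea mentioned above, here is a minimal sketch of what such a script could look like. All the option values, the seed-based payload, and the name `simulation.py` are illustrative assumptions, not from the course material:

```shell
#!/bin/bash
#SBATCH --time=00:10:00        # each array task gets up to 10 minutes
#SBATCH --mem=500M             # and 500 MB of memory
#SBATCH --array=0-4            # launch five copies, task IDs 0 through 4

# Slurm gives every copy a different SLURM_ARRAY_TASK_ID, which we can
# use e.g. as the seed number, so each task runs a different case.
seed="${SLURM_ARRAY_TASK_ID:-0}"
echo "running with seed $seed"
# python3 simulation.py --seed "$seed"   # your real program would go here
```

Submitted once with `sbatch`, a script like this becomes five independent serial jobs, one per task ID.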
Then we'll do parallel computing, where you can launch either big MPI jobs that use multiple computers at the same time, or jobs that use multiple processes on one computer at the same time. There's a bit of a distinction there, but we'll get to it tomorrow.

But as a quick heads-up, let's consider our Python job again, the memory-hog program. Instead of running it with srun like before, we could write a script. So I'll go back to the work directory and copy the name of the program. We write this memory-hog batch script that does non-interactively what we previously did interactively. We start with a line that tells it to run through bash. Then we enter the same basic Slurm options as before: how long the job will take, here one hour, and how much memory it will need. There are a lot of other options we could add if we needed them. Then we write whatever commands are needed: python, the program name, and the amount of memory. Now we can save this and run it with sbatch. sbatch says: run this asynchronously and give me the results when it's done.

So if we do `slurm q`, we can see that the job is still in the pending state there. And now we see it's gone from the list, so it's done. And here we see an output file: instead of printing the results to the screen, the job prints them to a file, and you can open that file with less later and see what happened. That's basically the general idea here.

Yeah, it's basically codifying the requirements and whatever we do to get the code running. Of course, in this case we are just running one command, so why would we need all of this machinery?
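Written out, the script built in the walkthrough above would look roughly like the sketch below. The file name, the 2G memory values, and the program name `memoryhog.py` are assumptions standing in for what was typed on screen:

```shell
# Write the batch script described in the walkthrough (names and values
# below are illustrative, not the exact ones from the demo).
cat > memoryhog.sh <<'EOF'
#!/bin/bash
#SBATCH --time=01:00:00     # how long the job may take: one hour
#SBATCH --mem=2G            # how much memory it will need

# The same command we previously ran interactively:
python3 memoryhog.py 2G
EOF

# Submit it asynchronously; results go to an output file, not the screen:
# sbatch memoryhog.sh
# slurm q                   # check the queue: PD = pending, R = running
# less slurm-<jobid>.out    # read the results once the job is done
```

The commented commands at the end show the submit-then-check cycle from the demo; they only work on the cluster itself.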
But combine this with the previous steps we described: we might need to load certain modules before launching the job, go to a certain storage directory, maybe do some preprocessing there, so we need to specify the working directory we are going to. We also need to specify the resource requirements, and those are something we don't want to have to remember and retype on the command line every time. We can codify all of that into one script, so that the whole process of setting up the job, running it, and getting the results happens non-interactively.

And in a way, it's already documented: the script records what software you are using, where you are running it, and what requirements it has. Everything is encapsulated in this one script, and you can share it with a co-worker. You don't have to tell them, "I think I used this module, I think I ran it here, I think it requires 16 gigabytes, but you might want to try it out." They can just read the script, see the parameters you used, and use the same ones. It's a much easier way of working, and you don't have to think about keeping a shell open or anything like that. You can put the job into the queue and go drink coffee and read a paper.

Yeah, so that is what we will do tomorrow. We hope that today has been interesting. Please remember to give us feedback about the good and the bad things: open the HackMD link, which you can see under the stream, scroll to the bottom, and write something there, especially if there is something you don't like about this course.
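The "everything encapsulated in one script" idea above could look like the following sketch. The module name, the directory, and the program `analysis.py` are illustrative assumptions; only the 16-gigabyte figure comes from the example in the text:

```shell
# A self-documenting job script: software, location, and requirements
# are all recorded in one shareable place (names below are illustrative).
cat > analysis.sh <<'EOF'
#!/bin/bash
#SBATCH --time=02:00:00        # requirements your co-worker no longer guesses at
#SBATCH --mem=16G              # "I think it requires 16 gigabytes", now written down

module load python-env         # which software environment the job uses
cd "$WRKDIR/project"           # which storage directory it runs in
python3 analysis.py input.dat  # what it actually runs
EOF

# A co-worker can now reproduce the run exactly with: sbatch analysis.sh
```

The point is that the script itself is the documentation: module, directory, command, and resource limits travel together.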
This has been a rather experimental way of doing things, with Twitch and Zoom and all of that, and we don't want to hear only the good things; we want to hear the bad things too, so we can improve them. With that said, I guess we will disconnect now and see you tomorrow. Yeah, thank you, everyone.