So welcome to the kickstart course. We're starting with our icebreaker right now. So what does this mean? Well, we have about ten minutes to get used to things, make sure everyone's connected all right, and so on. You can see the course page shared right here. The most important thing to note is the HackMD document that we shared with you if you registered. If you open it, it looks like this: you can switch between an HTML view and an edit view. Instead of using chat, this is what we use, so you can ask a question at any time and we can answer asynchronously. You'll see that this works pretty well.

If you scroll to the bottom, you see questions and answers. For example, here someone has asked whether we need Zoom right now, and the answer is no. Our strategy is that Zoom is for exercises and the live stream is for the teaching. We'll be talking, and at some point we'll say: now it's time for exercises, here's what they are, go to your Zoom room and work on them. Of course, you don't have to use Zoom. You could follow along on your own, or with your own team in a physical meeting room, whatever works. It's also fine to just watch the live stream and join the exercises whenever you like. Here you see a demonstration of an answer: it has appeared live. The most important thing to note: don't put any names or personal information in the HackMD. It is public, it is saved, and it may be broadcast and recorded, so let's not have anything personal there except the names of the staff.

So we start off with an icebreaker: what's your background and why are you here? I'll go to edit mode, scroll down, and add some space for people to write. Please fill it in; as you can see, when I push this button it appears. You can also use this to give initial feedback: do you have any special requests for the course, any comments on the schedule, anything missing from the schedule that you would really like to have added? When you're done editing, please click back to view mode, because with more than a hundred people writing at the same time it can get a bit slow, although it has been holding up quite well so far.

Test, one two. Anyone on the stream: is my audio better or worse now? I'm not speaking any louder, I just switched microphones. OK, what other preparation do we have? I hope everyone is as excited as we are. How many people did we have registered in the end? In the end, we had 260. 260, OK. About one third from Aalto, and then many from other universities, including Tampere. So it's nice that we have a nationwide community of people who want to learn about scientific computing. I was hoping we could break 300 people, but we aren't quite there yet. And it's important to note that you don't have to register to attend this course. Feel free to send the live stream link to whoever might want to attend, post it publicly, all that kind of stuff. For the Zoom, please register, or at least don't send the link around or post it publicly. But anyone who would like to follow along is by all means welcome to find our material and come.
It's really nice to see that so many people are working on such different things. This is also what I like about scientific computing: we all come from very different backgrounds, with diverse pasts and interests, but we still share a common need that brings us together. And it looks like people have got a good idea of how the HackMD works.

We got some questions. Someone wrote that I'm not loud enough on the stream. It also looks like the HackMD is having some issues with all the simultaneous edits, so please switch to view mode if you aren't actively editing, and I think it will get better. And maybe one of our HackMD specialists can go and remove some of the blank space in between here; I've seen this happen before.

OK, it's about 12 o'clock, so maybe I should begin with the introduction. Any objections? Go ahead. So, as you know, here's our course webpage. In the schedule you can find where we are now: the general introduction, which I have open here in presentation mode. So welcome, everyone, to this kickstart course. What is this course? Day one is the big-picture day: we talk about high-performance computing and some other practical skills. It's mostly lectures and demos, so you're mainly here watching, and it's quite generic, so most of what we cover applies at any university you might be working at. Days two and three are more specific: they're about using a computer cluster. Traditionally this would be called an HPC, or high-performance computing, cluster, but we're not really getting to the high-performance part; we're getting to the computing part and to being able to, say, run one thing very many times, and so on. We use the Aalto University cluster Triton as an example, but what we say will be applicable to others. If you follow the examples here, you can probably work through them on your own cluster, and then you'll be well equipped to adapt them to your own site's cluster.

So who's attending? We have multiple universities here, and we're all using different clusters. But like I said, we will be very careful about explaining things, so you'll know what's specific to us and what applies to everyone else.

And who are we? We are Aalto Scientific Computing, also officially known as Science-IT. We are running this course in collaboration with other universities as part of the Finnish Computing Competence Infrastructure, which used to be known as the Finnish Grid and Cloud Infrastructure, and before that the Finnish Grid Infrastructure. Basically, it's the network of universities that do scientific computing, and the teams that support it. And there's one talk in collaboration with CSC later today.

Some practicalities. We already talked a little about how things work, but why are we doing this Twitch thing? The point is that we have a livestream broadcast, like a TV production, which means that anyone can watch, which is something people don't really do with Zoom meetings. And we have a process for exercises; it's like a commercial break. At that time you go to your own groups and do things. We have a Zoom for the people who registered from Finland, and in there will be different breakout rooms, depending on your university, where you do the exercises and can get help from different people.
So with 260 people registered and more than 200 watching the stream right now, it's going to be difficult to get as much personal interaction as we otherwise would, but we'll do the best we can. It's better to say things than not, and you'll learn later on how you can get more one-on-one support.

Within the workshop there are different things you'll experience. There are talks and demonstrations, which are on the stream here. There are type-alongs, where we'll be doing some task and you can type and follow. There are exercises, which you can do in different ways. And there are breaks. We'll basically be following the schedule, telling you what's going on and trying to make it clear. And if you ever get lost, open the HackMD and scroll to the bottom; you should find information about what we are currently doing.

Now, chat and communication. Please don't use the Zoom chat for questions. There will be too many for us to keep track of and they'll get lost, and even if we answer in writing there, the answers just disappear; no one can ever see them again. Instead, we have the HackMD. Here there are headings for the sections we're on, you write questions as bullet points, answers go indented below them, there can be multiple answers per question, and you ask new questions at the bottom. I think most of you have already found how to switch between edit and view mode; please try to stay in view mode if you're not actually editing. We think that helps keep things a little clearer.

It's also important not to get overloaded here. If this is anything like our previous courses, there will be a lot of information in this HackMD; people will be asking very many questions. So try to leave it open, but realize the information isn't going away; you can go back and read it later. Ask yourself: do I need to read the answer to my question right now, or can I come back later? Often the answer is: come back later. And like I said in the icebreaker, the HackMD is completely public. Anyone in the world can find it if they know where to look, and there will be times it comes up on the screen here. So don't ever include any names or personal details there. In other workshops we would say: if you need personal help, write a breakout room number and someone can go there.

The icebreaker we've already done. I've also already talked about where to focus, because there's just so much information. The screen share and the lecture are the most important thing. Then there is your own type-along. Then there is the HackMD, which is more of a passive thing to watch. And then of course there's the lesson web page and so on, which you'll follow as needed.

About the screen arrangement: you may notice we have this really interesting portrait-style stream. That's because we realized that landscape doesn't actually help people that much. You're not watching a movie here; this is an interactive workshop, and you need a lot of screen space to do your own work. So we set up a vertical share: you can put the stream on half your screen, and the other half, actually slightly more than half, is available for you to work in. And after a lot of effort, we optimized the size so that it doesn't get downscaled. This example here is using Zoom and not Twitch, but the same idea applies to Twitch.
If you're watching on Twitch itself, you can put the view into theater mode and it will scale a little better. Or if you're watching via our own page, then it should already be set up for you. Then you have plenty of space for the terminals where you're working, the web page, maybe the HackMD open in another window so you can follow it, and so on. This will get a little bit weird for a few talks today that are based on landscape-style slides, but we will do the best we can.

There's more than one way to take this course. First off, we have far more material than we can possibly cover, so we adjust to the audience. If we don't cover everything, or we say we're not going to do something, that's normal: don't panic, don't think you're missing something. All the material is available for self-study later on. Also, not everyone needs to take the same path here. Some people may want to just watch, to figure out how a cluster even works and what they'll be doing in a few months or years, and get hands-on later. That's fine. Some people may be active and try to do all the exercises. And some may be advanced and want to do even more exercises and go further than we go. All of these are fine; you can adjust to how you would like things. I should also mention that everything is recorded and will be put on YouTube, so you can follow along later, and it's fine if you miss anything now.

We're a community here, so please try to be respectful and helpful to everyone in the course, especially your fellow learners. There are four little pieces of advice here. One, everyone's at a different level; that's OK and to be expected. Two, everyone will be both teaching and learning. Three, take some time to ask each other how things are going, whether anyone needs help, and so on. And four, when something isn't going right, speak up right away: if you can't hear us, if the screen's too small, whatever, let us know immediately.

OK, some final notes. Recording: this course will be recorded and put on YouTube. Because of the way we're doing it, there's no risk of any of your information ending up on YouTube; it'll just be us, the instructors. We don't give credits for attending this course, but there is an online course, Hands-on Scientific Computing, that covers a lot of the same material and does allow you to get credits. And please join us: we are staff at Aalto, but anyone here can help us in supporting computing for the whole university.

With that said, we're about to get to our next talk, which is Ivan and Simppa giving a crash course on computing. Are you here on the instructor call, Ivan and Simppa? No, we can't hear Ivan yet. Ivan has worked for Aalto Scientific Computing for many years. Can you hear me? Yes, now we can. And Simppa started last year as an HPC specialist. So yes, I'm starting to share my screen. From my side, I was in the students' part; I was actually thinking that the students are supposed to listen to us, not the instructors. OK, good to know. Anyway, here we go; I'll sort out the screen in a moment. So, the crash course: we have about half an hour to tell you everything we have learned over the last 20 years or so, which is a challenge in itself. But let's start.
First of all, computing resources form this famous inverted pyramid, which is shown to practically everyone when you start out. Ivan, could you make the slide full screen, please? Like this.

When you start thinking about computing, probably the first thing you do is install something like Python, or any other application, on your own laptop or your department workstation. At some point you realize that you don't have enough memory, or your simulations are not fast enough, or the available resources are simply not enough to do anything better. You don't have access to fast storage and you don't have access to shared applications. So the next step is usually to go looking for a server or some other more powerful machine that may be around. That could be a server at the department, or something that belongs to your own group: usually one computer, maybe a few of them, with a sufficient amount of memory and a reasonable number of CPU cores, which satisfies you for a while. But at some point you realize that that's not enough either.

The next step is what we will mostly be talking about in this course: university-level clusters. When we talk about university-level clusters in Finland, we mention FGCI or FCCI. FGCI, the Finnish Grid and Cloud Infrastructure, was the name of the project before FCCI, the Finnish Computing Competence Infrastructure, so which one you see depends on how old the documents or web pages are that you're reading. At this level you're already getting something like thousands of CPU cores, lots of memory, and also GPUs, which are far more expensive than a single laptop.

And then the level up, which will also be covered to some degree during this course, is the real supercomputers, the machines that can provide extremely massive CPU and GPU resources. In Finland we're talking about CSC, which is, for instance, one of the PRACE partners.

So those are the steps up from the very bottom. But why are we talking about supercomputing in general? It's to give you the motivation, why you should bother. It's the last point on my slide, but really the first: supercomputing is often the only way to achieve your specific computational goals, both in time and in terms of problem scale. For instance, say you are modeling some molecule: a small molecule requires not that much memory and not that much CPU time, but when you go up to a huge protein and still want to model it properly, that's what is called problem scale. And it's very common nowadays: if you're doing some kind of AI or the like, you're thinking about how to do it quickly. The hardware is one reason: you get access to fast storage and to fast communication channels between the nodes if you want to run something in parallel. But on top of that, you also get access to centrally installed and maintained applications, and to a proper environment which is configured and uniform, in the sense that one node is exactly the same as another.
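To put a rough number on the "problem scale" and "not enough memory" point, here is a quick back-of-the-envelope calculation. It is purely illustrative, not from the slides, and assumes dense double-precision data:

```python
# Illustrative only: memory needed for a dense N x N matrix of
# double-precision (8-byte) numbers. This is roughly why a laptop
# runs out of memory long before a large cluster node does.
for n in (1_000, 10_000, 100_000):
    gibibytes = n * n * 8 / 1024**3
    print(f"N = {n:>7}: {gibibytes:8.2f} GiB")
```

At N = 100,000 a single matrix already needs around 75 GiB, well beyond a typical laptop but still within reach of a large-memory cluster node.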
Beyond the hardware and software, you also get access to the community. If you're having problems setting up some software on your laptop, it's probably going to stay your own problem. If you're having a problem on Triton with installing software or running something, then it becomes a problem for the support team, who will come and help you.

An HPC cluster in general, and I'm not naming Triton or any other here, just a generic representative, is a set of compute nodes connected to the same network and to shared storage. If there's one message to take away about what a cluster is, that's it. The compute nodes themselves are nothing special: if you ever look inside your desktop or laptop, you'll notice there's a motherboard with memory, CPUs, and some other interfaces such as a GPU, and the compute nodes in a cluster have exactly the same standard PC architecture. The only difference is that they most probably have not one but two, or even four, CPU sockets, so instead of six or eight cores they can provide 24, 40, or maybe 80 cores, depending on how many sockets there are and what kind of CPUs are installed. They may also have accelerators installed; I'll talk about those a bit later, but nowadays we mostly mean GPUs.

Then the storage, which we will also come back to later: the point here is that it is common and cross-mounted on all the compute nodes, which makes the environment more or less homogeneous. And the network, which we'll also touch on later; just a few words for now: it has to be fast and reliable enough, and I can tell you in advance it's not your standard one-gigabit Ethernet, and certainly not the Wi-Fi you use when you're sitting in front of your own laptop or desktop.

The login node also needs a mention: it's just a normal compute node with one external interface connected to the public network. So whenever you go to triton.aalto.fi, you land on one node that has an external interface and can communicate with the outside; it plays the role of a bridge between the rest of the world and the whole cluster.

So that's the one slide about an HPC cluster. I'll skip this next one, but if you hear something unfamiliar later in the lecture and want to come back to it, it lists some of the notation I'm using.
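As a small practical aside before the real-world examples: once you are logged in to a node, or just sitting at your own Linux machine, you can check for yourself what the hardware looks like. A minimal sketch in Python, using only the standard library; reading /proc/meminfo assumes Linux, as on typical cluster nodes:

```python
# Inspect the machine you are currently logged in to.
# Assumes Linux (as on typical cluster nodes); numbers differ per machine.
import os
import platform

print("Hostname:", platform.node())
print("Logical CPU cores visible:", os.cpu_count())

# Total memory from the kernel's /proc interface (Linux only).
with open("/proc/meminfo") as meminfo:
    for line in meminfo:
        if line.startswith("MemTotal"):
            print("Memory:", line.split(":", 1)[1].strip())
            break
```

On a login node this only tells you about the login node itself; the compute nodes, where jobs actually run, can look quite different.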
Now a real-world example, which is already something like ten years old. We've been giving these courses since 2013; I used these slides for the very first one and have kept them because I like them. They're here to impress you, and to make sure you understand what it means when we talk about really huge HPC installations. This is the K computer in Japan, which has now been retired, but which was in service, with several upgrades, for almost eight years altogether. You can see it's a machine with its own building, distributed over several floors: one floor dedicated to the computing resources, one floor dedicated to the storage, and several floors dedicated to cooling. Yes, these kinds of computers produce a lot of heat and consume a lot of electricity. Just to give you numbers: for Triton, which is a relatively moderate-size HPC resource, we pay more than 100,000 euros per year just for the electricity. Imagine. And that machine in Japan is extremely well equipped; it's even ready for earthquakes. Here are some pictures from there. The point of all this is to make sure you realize that really large HPC resources require a lot of maintenance, a lot of effort to build, and a lot of willingness to plan them and invest the money. It's a toy for the big girls and big boys, in the sense that it does take real effort.

And here are some local examples. The previous slide was Japan; in Finland we're definitely talking about CSC, which will be presented in a few minutes. And then there are the FGCI sites: at Aalto it's Triton; at Helsinki there are names like Kale, Ukko, and Vorna, because in Helsinki there isn't a single cluster like we have at Aalto, the installations are done separately; and in Tampere there is Narvi. In fact, each university that is part of the FCCI consortium (FGCI being the old name) has its own. So if you're in doubt about where you're supposed to compute, just ask around what kind of cluster your university has, and if you're not satisfied with that, go to CSC and ask what they can offer.

So, a short summary at this stage of what an HPC resource can provide you: large-memory calculations; GPU computing; massive serial runs, so even if you're not a fan of, or an expert in, truly parallel applications, it makes it possible to run thousands of runs, the so-called embarrassingly parallel case; data-intensive work; and I/O-intensive runs. I keep talking on my own, but this is supposed to be a conversation, so Simppa, if you have any questions, just ask.
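To make the "embarrassingly parallel" idea concrete before going on, here is a toy sketch of running the same independent computation many times with different inputs, using only Python's standard library. The "simulation" function is just a placeholder, not anything from the course material:

```python
# Toy embarrassingly parallel run: the same independent computation
# repeated for many inputs, spread over several CPU cores.
from multiprocessing import Pool

def simulate(parameter):
    # Placeholder "simulation": just a sum of squares up to `parameter`.
    return sum(i * i for i in range(parameter))

if __name__ == "__main__":
    parameters = [10_000, 20_000, 30_000, 40_000]
    with Pool(processes=4) as pool:   # four worker processes, one per core
        results = pool.map(simulate, parameters)
    print(results)
```

On a cluster, the same pattern usually scales out further by submitting many independent batch jobs (for example as a job array) rather than by adding more cores inside one Python process.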
Next, running parallel on CPUs, explained in one single slide. We actually keep reducing the amount of information we present, because people were getting confused, so don't be afraid: you can see at the bottom that this is slide 12 of 35, and the other half of these slides will be shown later, on day three. For now I'm just going through the very basics, and this one slide is about what happens when you submit a job.

When you do something on your local laptop, say a local Ubuntu laptop, you just run it, from the command line or from a graphical user interface, and you don't really care about resources or sharing, because you are the only one on that particular workstation or laptop. It can be similar on a server: if it's just members of your group using it, you simply agree among yourselves that I run my simulation today and your group mates run theirs tomorrow. But a cluster doesn't work that way. On a cluster you submit a job, and it gets distributed to some compute node somewhere else.

OK, so a parallel run means that you run something on more than one CPU; strictly speaking we're talking about CPU cores. Nowadays there is a principal hardware limitation of serial computing: CPUs are not getting much faster than they used to, so every new generation gives you only some extra flops. Flops means floating-point operations per second; it's the common unit we use in HPC, and in computing in general, to measure the performance of a system. So you can get some flops out of serial computing, but if you really want to go further, HPC gives you the ability to run in parallel. A parallel run means that the problem you want to solve is divided into blocks of instructions, and those blocks are sent to different CPU cores. At the end, one program gathers the results back from all of these blocks, combines them, and provides them to the end user: as a file, as output to your screen, or the like.

One thing to mention here is that you can do this parallelization within a single node, because as I already said, nodes nowadays have many CPU cores. Four cores is probably the minimum you'll see anywhere: a normal laptop probably has four, your desktop six or eight, a server maybe ten or twelve, and our nodes on Triton already have something like 40 CPU cores across two sockets. That means you can run not on one core but on several of them, in parallel.
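As a side note on flops, you can get a rough feel for them on any machine. This sketch assumes NumPy is installed (that is an assumption, not part of the slides); multiplying two n-by-n matrices needs about 2·n³ floating-point operations:

```python
# Rough flop-rate estimate from timing a dense matrix multiplication.
# Assumes NumPy is available; the result depends heavily on the machine.
import time
import numpy as np

n = 2000
a = np.random.rand(n, n)
b = np.random.rand(n, n)

start = time.perf_counter()
c = a @ b
elapsed = time.perf_counter() - start

flops = 2 * n**3                      # approximate operation count
print(f"About {flops / elapsed / 1e9:.1f} Gflop/s on this machine")
```

Typical results range from a few to a few hundred Gflop/s depending on the machine and the underlying BLAS library; the point of what follows is that clusters get further by using many cores and accelerators, not by making one core faster.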
One more thing about parallelization. On the previous slide I was talking about CPUs, but the modern way, which is becoming more and more common, and I keep updating this slide every year we give this crash course, is to use accelerators. People have realized that CPU performance is not enough; even with massively parallel systems with four sockets and a high number of CPU cores, people still come up with something that performs better. I'm deliberately using the generic word accelerators rather than jumping straight to GPU computing, because accelerators used to come in different kinds, and they still do, but let's not go too deeply into that and just say that nowadays we mostly mean GPGPU: general-purpose computing on graphics processing units.

Of the graphics processing units on the market, most of you have probably heard about NVIDIA's Teslas, and some of you have heard about the AMD Radeon Instinct cards. NVIDIA used to be, even a year or two ago, essentially the only option; it's still dominant, but the AMD cards are appearing slowly but surely. The benefit is that they're expensive but fast. A single modern NVIDIA card, and this is what I have tested myself, so I can speak to the numbers, can be ten times faster than a normal compute node. Imagine. But one card, such as an NVIDIA A100, even with all the discounts we get at the university, still costs something like seven, eight, maybe ten thousand euros. So it's expensive, but it's worth it, and these are the accelerators most in use.

The next thing is how to use those GPU cards. With a CPU you can be fairly sure that whenever you have a binary, you can execute it and it runs on the CPU. For something that's supposed to run on a GPU, the first step is to look for software or applications that are already GPU-ready: they may be pre-installed on the cluster, or available in a repository, so you have nothing to do other than get the application and run it, or in the worst case compile it and then run it. In that case you don't think about the programming part at all. But if you are a programmer developing your own code, you can think about the available libraries: you can pull such a library into your code and outsource just part of your code to the GPU. This is quite common, in fact the most common approach. And then there is native GPU programming, where you implement something in a dedicated language. CUDA is such a native language, in which you write kernels that are executed directly on the card; CUDA is essentially for NVIDIA, and AMD has come out with a software stack called HIP, the Heterogeneous-compute Interface for Portability, which basically makes porting CUDA code to AMD cards much easier, in a C++-like language. On top of that there are several frameworks you may hear about at some point, for instance OpenCL and OpenMP, which can give you code that runs on both NVIDIA and AMD cards at once. But I'm not going to go into that. I'm just showing you the very tips of the HPC iceberg, so that you know the names and know what we're talking about when we mention GPU computing, parallelization, and the like.
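To show what the "use a library" route can look like in practice, here is a sketch that assumes PyTorch is available (for example via a centrally installed module or environment; that availability is an assumption, not a statement about any particular cluster):

```python
# Sketch of offloading one operation to a GPU through a library.
# Assumes the PyTorch package is installed; falls back to CPU otherwise.
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
print("Computing on:", device)

# The same matrix multiplication runs on the CPU or the GPU,
# depending only on where the tensors live.
a = torch.rand(4000, 4000, device=device)
b = torch.rand(4000, 4000, device=device)
c = a @ b
print("Checksum:", c.sum().item())
```

The point is that your own code barely changes; the library decides how the operation actually runs on the card.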
The interconnect is something you meet when you run heavily, massively parallel jobs. As I said, your Wi-Fi or the Ethernet connection at home or in your office has nothing to do with high-performance computing. In HPC we use specially designed interconnects: either very fast Ethernet, on the order of 100 gigabits per second, or, another name you will probably come across, InfiniBand. On Triton, for instance, we use InfiniBand. Don't be scared of it; it's just another network architecture for making the connections. In principle there's nothing more to it than an InfiniBand switch and a number of InfiniBand cards connecting one node to another. The principle is that it provides you with high enough bandwidth and small enough latency. Bandwidth is the amount of bits you can push through the channel per second. Latency is the response time: you send one packet, and how quickly can the other end receive it and reply that yes, it got the packet. Here I have some numbers, just for the sake of comparison, showing the orders of magnitude of a normal office network against the HPC interconnects we use.

Storage. If I told you that the interconnect is the number two critical component, then storage is number one, simply because the filesystems are cross-mounted on all the compute nodes. There are no local disks; well, there sometimes are, but most of the compute nodes are diskless, or headless as we call them. That means they boot from a server, and as an end user you access the cross-mounted storage that is part of the cluster installation. This is one of the things that requires really careful planning. Think about it: with one user and one single disk, say the SSD in your laptop, it's fast enough for you, and only for you. But imagine there were ten people like you on that same laptop, all running the same application. Even if the application doesn't consume much CPU time, and memory is not the limit, I/O will definitely become the limit if you generate a lot of input and output. On a cluster we have hundreds, to some degree thousands, of users, who can run whatever they want, and the system has to stay responsive; and not simply responsive, it has to be far faster than a single SSD in your laptop. And we do actually reach those numbers, both in terms of the number of input/output operations and in throughput, that is, how much data you can move per second. On top of that we also provide space: nowadays we can easily give users terabytes without even asking what they need them for. And to open up the plans a little, we're getting a new system that will altogether provide end users on Triton with petabytes of space. Just imagine: one gigabyte, then one terabyte, then one petabyte. When we talk about storage nowadays, we're not talking about hundreds of terabytes anymore but about petabytes, thousands of terabytes, and those are the numbers that can be offered to end users.

Then, managing the users' jobs, which I have already described to some degree. When it comes to running on the compute nodes, you submit a job, and you don't really decide yourself where your job is going to run.
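To give a first feel for what "submitting a job" means before the Slurm tutorial later in the course, here is a minimal sketch. The batch script text is what the scheduler actually reads; the resource numbers, file name, and the program it runs are placeholders that differ per cluster, so treat this as an illustration rather than copy-paste instructions:

```python
# Sketch: write a minimal Slurm batch script and hand it to the scheduler.
# All names and numbers below are placeholders for illustration.
import subprocess

batch_script = """\
#!/bin/bash
#SBATCH --time=00:10:00        # at most ten minutes of run time
#SBATCH --mem=2G               # two gigabytes of memory
#SBATCH --cpus-per-task=1      # a serial job: one CPU core
srun python my_simulation.py   # placeholder for your own program
"""

with open("job.sh", "w") as script_file:
    script_file.write(batch_script)

# sbatch queues the job; the batch system decides where and when it runs.
subprocess.run(["sbatch", "job.sh"], check=True)
```

In practice you would normally just write job.sh in an editor and run `sbatch job.sh` in the shell; wrapping it in Python here only keeps the example in one language.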
It's the batch system that, on your behalf, looks at the system, at the available resources, and is constantly working things out: OK, this node is busy, I won't touch it (in my picture that's the red one); this node is half busy, I can send it something, it still has some CPUs and an amount of memory sufficient for another job sent by a user from the login node; and some nodes are completely empty, so we can use them. That's the batch system. In the case of Triton and CSC, and most HPC installations nowadays, that batch system is Slurm, and you will hear more about it: it's part of the tutorial that's coming up, there will be a Slurm section. We'll be talking about Slurm then, but Slurm is not the only one; there are other batch systems that can help organize the jobs. Otherwise, imagine if everybody had to decide on their own where to run; it would be a nightmare for everyone.

And then the next thing: software. This is something I've already mentioned: software is one of the key components of a proper installation. Whatever hardware you have, however expensive and fast it is, if you don't have properly compiled and maintained software on top of it, then your hardware is pretty much useless.

As a summary, as I told you already, there are lots of components, and every component is to some degree critical. This whole HPC puzzle only works if you have good infrastructure: a good machine room with enough cooling and electricity, with a fast interconnect and a fast connection to the public network. Then the hardware itself: enough CPUs, enough memory, good accelerators such as GPUs. Then you provide the end user with storage that is fast enough, reliable, and ready for massive I/O. Then you get to the software part: an up-to-date operating system, a reliable batch system that can quickly and reliably distribute the jobs and monitor the system, and all the developer tools, libraries, and top-level applications you need to find there. And last but not least, you need support: the staff around the cluster who help you, who help you set up projects and find the most efficient way of working, and education, which is what you're getting right now. And probably the most important thing, as usual, is money: we also need funding to keep the whole thing running. As I told you already, the electricity bill alone comes to something like 100,000 euros a year, not to mention the hardware purchases, the salaries, and the like.

The parallel part will be covered on day three, so I'm closing this one here. Thank you, Ivan. Yep, if there are any questions, should I highlight anything from the HackMD, or if not, then... We've been keeping an eye on the HackMD and there was nothing, so we can reply there. So I'll stop sharing now and give the floor to the next speaker, I guess.