Okay, so about ScienceIT and Triton. We were actually just talking a little bit before we started about how we need to adjust some of these pages. You'll see ScienceIT written everywhere here, and that is technically who we are, but it isn't what we call ourselves these days. These days we call ourselves Aalto Scientific Computing, because ScienceIT makes you think we're about the Aalto University School of Science, and we're not: we serve the whole university. And "IT" makes you think we do laptops and mice and monitors and things like that, while what we support is computing via the infrastructure. Yeah, and it's not only the IT part. We also do consulting for researchers: we help users design their code, we help them manage their data, we do all kinds of things related to scientific computing. So "IT" doesn't necessarily ring all of the bells.

Right, yeah. Maybe we can talk about the other things we do. We have the infrastructure, and we have training, like we're doing now. And like Simo said, we have a Research Software Engineer program, where someone can basically come directly to you and help you with your software. This is especially useful when you're trying to make something run on the cluster and it just doesn't work, and you need some help getting it to run there; we can provide that support too.

So the main focus of the day is Triton, which is the name of our computer cluster. Simo, would you like to tell a bit more about that?

Yeah. Triton is basically a mid-sized cluster, on the larger side when it comes to university-owned clusters. There are of course bigger fish, like CSC, which supports the whole of Finland with its supercomputers and so forth. But when it comes to university clusters, Triton is one of the largest in Finland. Yeah, along with the University of Helsinki we are probably the largest.
But size doesn't mean everything in computing, because we are basically the first stepping stone for people with computational needs. We can provide much more interactive support because we manage the cluster ourselves. Our cluster is a heterogeneous cluster: we have many different generations of CPUs and GPU accelerators, and all of these sit behind a queue system called Slurm, so you access all of them through that queue system. They also have a shared file system, a Lustre file system, which is available on your desktops and so forth for data storage. Basically our cluster is this constantly evolving bunch of computers: we're constantly adding new nodes, constantly removing old nodes, and moving the cluster forwards. It's a continuous project. The idea is that it will serve the research needs of most of our researchers, and for those people who want to go on to larger-scale computing, it will also work as a training ground, a teaching ground, so that they can then move on to the supercomputers at CSC, like LUMI, and so forth.

Yeah. You could wonder why each individual university has its own cluster when there are national ones at CSC. I think the real benefit here is the integration with the Aalto environment. Instead of having to copy things back and forth all the time, you can work on the same data on our cluster, on your desktop, on your laptop, on the virtual desktops of the university, and so on. You'll see more of this when we get to the data analysis part of things.

And talking about the term "supercomputer" in this sense: it means that it's more than one computer, it consists of multiple computers. When you think about a cluster, it's basically what it says on the tin.
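Going back to the queue system mentioned a moment ago: work is described to Slurm in a small batch script. This is only a sketch — the resource values below are placeholder assumptions, and real partition, account, and module names vary from cluster to cluster.

```shell
# Write a minimal Slurm batch script (hypothetical resource values).
cat > hello.sh <<'EOF'
#!/bin/bash
#SBATCH --time=00:05:00      # wall-clock time limit for the job
#SBATCH --mem=500M           # memory to reserve
#SBATCH --output=hello.out   # file where the job's stdout goes
echo "Hello from $(hostname)"
EOF

# On the cluster you would hand it to the queue system with:
#   sbatch hello.sh
# and check its place in the queue with:
#   squeue --me
# Here we just run the body directly, to show it is an ordinary shell
# script: Slurm reads the #SBATCH lines, the shell treats them as comments.
bash hello.sh > hello.out
cat hello.out
```

The point of the `#SBATCH` comments is that the same file is both a valid shell script and a resource request the scheduler can read.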
It's a cluster of computers: a bunch of computers in racks, joined together by a high-speed network. And because it's a bunch of computers, it would be really hard to manage how people use them, so we use these queues and so on, so that we actually use this bunch of computers efficiently. So in a sense it's one big computer, but it also consists of multiple smaller computers. That's why the shell course on Friday was so important: you can't just take your code, run it the same way, and expect it to be 100 times faster, because it won't be. But you can run it 100 times at the same time, combine all the results together, and then you get 100 times more work done. All of this will become clearer when we get to the part where you actually submit jobs. For now, just note that it's not a mainframe, like in the olden days when people had a keyboard and a display connected to one computer that was shared by all of the different terminals. It's a different kind of system.

Okay. So what skills would you say are needed to use a cluster like Triton?

That's a good question. Here we have a link to our training page. Sometimes it can be a bit of a culture shock when someone tries to go from their laptop to the cluster; there's just a lot of different things to keep track of, and the Linux shell, the environment, git, all of these kinds of things have to somehow work together to get the work done. We won't go into much more detail right now, but we have a modular training plan, which brings you up to the level you need to do different types of work on the cluster. This is one of the introductions to that.
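The "run it 100 times at the same time and combine the results" idea above can be sketched locally. This is a toy sketch with 4 tasks using background shell processes; on the actual cluster you would let Slurm schedule the copies (for example as an array job) rather than backgrounding them yourself.

```shell
# Each "task" computes its own piece and writes its own output file.
for i in 1 2 3 4; do
    ( echo "result of task $i" > "part_$i.txt" ) &   # run tasks concurrently
done
wait                                   # block until all background tasks finish

# Combine the per-task results into one file.
cat part_1.txt part_2.txt part_3.txt part_4.txt > combined.txt
cat combined.txt
```

The per-task files are what make this pattern safe: each copy writes to its own output, so nothing is overwritten, and the combine step only runs after `wait`.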
Yes, we also have many courses; some are more specialized, some are more generic: Python, the Linux shell like Richard mentioned, data analysis, MATLAB, GPU programming. So do check our Scientific Computing in Practice lecture series for upcoming courses. You will get email notifications through the Triton users mailing list as well.

Okay. And then what about getting help? Because you often need to ask us for advice.

Yeah, there are many ways you can ask us for help. If you have a sensitive issue, an account-related issue, or something like that, the easiest way is to send a support request to our service email address. If you have an issue related to what happens within the cluster, like you're missing some software or you have a problem with your code, a good idea is to open an issue in our issue tracker, which is in the Aalto version control system. That's the main portal we use for solving these issues. This is really important, because there's such an overwhelming amount of requests that if we can't track them, and if we can't reuse the same answers for multiple people, then we just can't give the support that's needed. So if you send us a question and we direct it to the issue tracker, please don't be offended; it's just what we need to do to keep things going.

We also have a chat system that isn't mentioned here right now; perhaps we should talk about that. Yeah, we're taking it into production right now, basically. We will add more information here, but we have set up this Zulip chat that is meant for all of our users to join.
This is a chat system that we have been using for the past few months for internal communication, but we want it to become a kind of hub for users as well, so that users can join us and have discussions: more informal discussions, but also more technical ones. That way we can have a better community all together, especially during these COVID times. We can't promise to answer and remember every question asked by chat, but we'll do our best, and we also hope that you can answer other people's questions via the chat too. Yeah, and it might also be the fastest way of reaching many people at the same time, because if you don't have an open issue you don't necessarily watch the issue tracker, but you might check the chat. So if there's something interesting coming up, or something interesting happening in the scientific computing world, the chat might be a good place to share that kind of information.

Another great thing we have is a daily garage session, every day at one o'clock. Well, not today, because we'll be here, but it's basically an ongoing Zoom meeting, and you can join and ask us any question about anything scientific-computing related. There will be multiple of us there, and we'll chat and find who can answer your question best and help you reach them. Yeah, the documentation says it's weekly, but like Richard mentioned, it's daily because of these remote times. So every day at one we are at this Zoom meeting. And sometimes, if you have an issue, we might discuss it in the issue tracker or esupport and then tell you to come to the garage, because it's much easier to share the screen and show what's happening
and solve the issue live, instead of via messages. Or if you want to design a workflow for yourself or something like that, and you need some specific information, it's much easier to have a discussion about it than to go through a ticket. Yeah, I think this is a really underutilized thing; we have lots of time there, so please let people know and come by.

So, next up is the connecting-to-Triton tutorial. Maybe we should go straight there. Yeah. Okay, we'll click on Next, which you can also find on the course page.