Let's take a look and see. So, Simo, what's interesting on this page? What is Triton? We talked a bit about clusters yesterday, but let's hear some more about it.

Yeah, so Triton, as we discussed yesterday, is an HPC cluster, that is, high-performance computing. What that entails is basically a large number of computers connected by a fast network, sharing a fast file system that everybody can access. Access to the compute nodes, as they are called (the computers designed to handle the computations), is through the login node. There are other ways in as well, such as Jupyter, but the main way is through the login node. Because the system is very big, there are lots of nodes, and different kinds of nodes: it's a heterogeneous system, with various features and capabilities.

Because it's such a big and complicated system, there has to be a way of organizing the work in the system itself: who runs what, and when. If you had a shared server where 400 people could just run whatever they wanted, it would end up in anarchy; everybody would compete for the same resources and nobody would get anything done.

Yeah. As an example, there are shell servers for students where they can run code, but there is no batch system, so anyone who needs to run something can use the whole machine. You can have ten people each trying to use the whole computer, and then no one gets anything done. That's even worse, because when people contend for the resources, the system is less efficient than if everyone had requested only a small fraction of it in the first place.
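The access pattern described above can be sketched with a couple of shell commands. This is only a sketch: the hostname and username are hypothetical placeholders, not Triton's actual address.

```shell
# From your own machine you would first connect to the login node.
# The hostname and username here are hypothetical placeholders:
#   ssh myuser@triton.example.org
#
# Once you are on a node, commands like these show which machine you
# landed on, which helps confirm whether you are on a login node or
# a compute node:
node_name=$(hostname)   # name of the current node
core_count=$(nproc)     # CPU cores visible on this node
echo "On node $node_name with $core_count cores"
```

The login node is shared by everyone, so it is meant for editing, compiling, and submitting work, not for running heavy computations directly.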
And when we are talking about resources, we mean what we talked about yesterday: time itself, CPU time, the number of CPUs, RAM (random access memory) that the applications will use, and possibly GPU resources, that is, access to these compute cards. Everything is accounted for in terms of time, so one hour of CPU time is a unit of resource.

All of this is handled by a queue system, or as Richard said, a dispatch system. We have a queue where people submit their requests for resources, and when the requested resources become available somewhere in the cluster, the jobs are run there. We have the Slurm queue; there are other queue systems on other clusters, such as PBS and Cray's own system and all kinds of others, but we have Slurm. CSC also has Slurm, and the other Finnish universities with clusters all have Slurm as well. So the basic queue structure, how you use the system, is the same everywhere. There might be differences in configuration, and some of the flags, the instructions you give to the queue, might be a bit different, but fundamentally there is a similar queue system everywhere.

Yeah, pretty much. Let's see, anything else? We covered a lot of this other stuff yesterday: the skills, getting help. So should we go to the live examples then?
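As a concrete sketch of such a resource request, here is what a minimal Slurm batch script could look like. The values are illustrative, not Triton's actual configuration, and the available flags differ slightly between clusters, as noted above. The `#SBATCH` lines are comments that Slurm reads, so the file also runs as a plain shell script.

```shell
#!/bin/bash
# Minimal sketch of a Slurm batch script; all values are illustrative.
#SBATCH --time=01:00:00       # one hour of wall-clock time
#SBATCH --cpus-per-task=2     # two CPU cores
#SBATCH --mem=4G              # four gigabytes of RAM
##SBATCH --gres=gpu:1         # doubled "#" disables a directive; remove
                              # one "#" to also request one GPU

# With 2 CPUs for 1 hour, the job consumes 2 CPU-hours of compute time.
# The commands below run on the compute node once the job is scheduled:
echo "Running on $(hostname)"
```

You would submit such a script with `sbatch myscript.sh` and check its place in the queue with `squeue`.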