Hello. So we're preparing for the panel discussion. We'll have different people around here. If we look at HackMD, there are already some seeded questions, both from earlier and some new ones. Please continue adding them. The basic idea is we'll go through these in some order and give comments. You can also add your comments directly under the questions, just like the rest of the HackMD material, and some of us may write our answers instead of speaking them. Hopefully everyone has the HackMD link. So, do we want to select a moderator for the panel discussion, someone to keep us on time and keep things moving? I guess I can do that if no one else wants to, if you think that's not a conflict of interest somehow. Okay, I'll try to keep us going. During the break we were already quite interested in this first question here: notebooks versus scripts. There was a bit of discussion in HackMD above. So, do you want to go through and each give your comments on the matter? During the tools talk I quickly mentioned that there's way too much material in the tools lesson to go through, but if you're interested you can look at it; there are links to other things. But about the notebooks: to me, a notebook is inherently interactive. It's meant to make interactive work easy, like plotting interactively or going through data. Scripts are a bit more ambiguous: you can run them the way you want, you can do whatever you want with them, but they're more run-and-done. Usually you just run the whole script, whereas a notebook is more of a playground where you have all kinds of fancy things happening around you. Thomas? In general, I would agree. For me, notebooks are nice for presenting some workflows, but they are not really good for any kind of automation. As Simo said, they are inherently interactive, so you can't just run them. What I personally would do, at least where possible, even in a workflow, is try to take as much computation as possible out of the notebook and put it into functions that are called from the notebook, because then I can reuse those functions and that functionality, also in a script. It's then also much easier to take the commands from the notebook and transfer them to a script. If there is a lot of functionality in the notebook itself, it becomes a mess: it's not nice for presentation anymore, it's not easy to interact with anymore, and it gets more and more difficult to use. Yeah, I don't know if I have that much more to add. Interactively playing around really is one major difference. Personally, I use notebooks mostly when doing some kind of analysis where I need to modify a bit of data and do some computations, where I really want to combine bits and pieces of scripting with some graphics, and maybe have text and equations in the document. That's a nice thing, that you can have text and equations there. But if it's a more extensive computation, then it's better to make a program out of it.
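A minimal sketch of the pattern Thomas describes (the file and function names here are hypothetical): keep the computation in an importable module, so that a batch script and a notebook can both call the same functions.

```python
# analysis.py -- a hypothetical module holding the actual computation,
# so it can be imported from a notebook and from a batch script alike.
import numpy as np

def remove_outliers(data, n_sigma=3.0):
    """Drop points more than n_sigma standard deviations from the mean."""
    data = np.asarray(data)
    mask = np.abs(data - data.mean()) < n_sigma * data.std()
    return data[mask]

def summarize(data):
    """Return the statistics the analysis needs downstream."""
    data = np.asarray(data)
    return {"mean": data.mean(), "std": data.std(), "n": len(data)}

if __name__ == "__main__":
    # Run-and-done script entry point; a notebook would instead do
    # `from analysis import remove_outliers, summarize` and plot interactively.
    raw = np.loadtxt("measurements.txt")  # made-up input file
    print(summarize(remove_outliers(raw)))
```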
And as Thomas mentioned, try to make your functions or modules so that you can use them both from a script and from the notebook itself. I think notebooks are really wonderful as supporting information, as supplementary material to a publication. They are great for the use case where you read data, do some statistics, and at the end plot the figures. So they are great for a linear workflow, because it's cell after cell after cell. Great point that they are inherently interactive, but they don't have to be consumed interactively; they can be consumed just as a reference for how something was created. For situations that don't fit into this linear cell-after-cell pattern, they are less good. Yeah, my thought was that you can do many similar things in both; you can use a notebook like a script and so on. But I think the main difference is how it's used. Most of the time when I see people using notebooks that are getting out of control, there's a whole bunch of stuff mixed together, there's no clear starting point, and in order to get your results you have to run these ten specific cells in a certain order. That's not very reproducible. On the other hand, you can make a notebook which is always run from start to end, and that's very similar to a script. So it's all about how it's used, and eventually you need to start making things more modular. That can mean leaving the notebook, but also organizing your notebooks a bit better. Yeah, I would say the freedom of notebooks makes it easy to end up in a situation where you run some cells, then go back and run cells out of order, then jump to a different place and run some other cells. At the end you get an error and ask: why did I encounter this error? And the error might just be because you had strayed so far from the logical path, jumping back and forward in the notebook. It becomes like a time-travel movie where you can no longer follow who's where and what's happening; it's too complicated. In those situations it's usually a good idea to stop and think about what you actually want to do. If you know you want to do this with these inputs and end up there, it might be better to make the script easier to read, make the movie easier for everybody, and make it linear, so you don't have this jumping back and forth in time and that kind of nonsense in your movie plot. That's actually, I think, also the big danger of notebooks: you can't really be sure what state you're in. You can end up in a state where you suddenly get really nice results because some parameter changed due to a later execution of something; you went back to something from earlier, and it can completely mess up your analysis. So at least for final results, for result determination, I think notebooks can be really dangerous. Should we move forward to the next question on the docket? There are a lot of them. Yeah, let's go through them. How to version your data — I'm really curious what the answers will be. Any takers? Yeah, this is quite a topic. Yeah, go right ahead, Thomas, if you have something. No, no — you were the first one here.
Yeah, so personally I'd say there's no one solution, because it depends on the situation. If it isn't a lot of data, you can just keep track of it in a file or a data sheet that records which data files you have, what parameters you used, and so on. If you have big files, so-called big data, you need to think about whether you need to version the data at all. The question also brings to mind that in many cases data is generated automatically by code. You might have different tiers of data: there's original data, where somebody went to the deepest jungle and made measurements, and you cannot recreate that data — that's really hard to do. Then you might do processing on that, and that derived data can always be recreated if you have the original code. So if you version-control the code, you basically also version-control the data. Of course, some data might be very time-consuming to compute, and in those cases versioning is always harder; hopefully you don't need multiple copies of it. But there's no single good solution. I would just warn against storing everything, and here's why: it's the same as with music. If you compress all the audio so that everything sounds loud, there's no dynamic range anymore. This might not seem to matter to you, but you don't hear anything anymore. The same kind of thing can happen with data: if you treat every piece of data as equal, you might end up in a situation where you actually lose the important data because it wasn't stored in the correct place. So it might be a good idea to differentiate between types of data and version-control only the data for which that is actually necessary. Any other takers for this question? One point about this: if the data is produced by computation, by code, then, as Simo mentioned, it is at least in principle possible to regenerate it. If you can somehow attach to the data the software that produced it — if possible with the version, maybe sometimes even the git hash that was used to create the data — and what input parameters were used (the input files for scientific computing are not necessarily that big), then keeping that information together with the data will really help in reproducing the data later if needed, or just in checking that everything is okay with the data. So, it was said that versioning data is very specific to your workflow. Do people agree with that, or are there some general strategies? No one knows. Okay, so this is a hard question. I think what you said is probably the most sensible approach to versioning data: if your data is generated by yourself in any way, then version how you generated it, so that it can be regenerated. If you get data from outside, there's no real way to version the data itself. The only thing you can do is record which output was produced from which input, at what time point, with what input information. So if you have, let's say, a growing set of users, you could record: I had users one through 3000 at the point when I created that output, and I have one through 4000 in my database now. That is something you can do, but it's essentially the same idea: store the metadata for what you're doing with it.
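A minimal sketch of that attach-the-provenance idea (the helper, file names, and fields here are made up for illustration): write a small metadata sidecar next to each generated output, recording the git commit, timestamp, and input parameters.

```python
# provenance.py -- a hypothetical helper that stores a JSON sidecar
# next to a generated data file, recording how it was produced.
import json
import subprocess
from datetime import datetime, timezone

def write_sidecar(output_path, parameters):
    """Record git commit, timestamp, and input parameters for output_path."""
    commit = subprocess.run(
        ["git", "rev-parse", "HEAD"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    metadata = {
        "output": output_path,
        "git_commit": commit,
        "created": datetime.now(timezone.utc).isoformat(),
        "parameters": parameters,
    }
    with open(output_path + ".meta.json", "w") as f:
        json.dump(metadata, f, indent=2)

# Usage: after producing results.csv, note down how it was made.
# write_sidecar("results.csv", {"n_sigma": 3.0, "input": "measurements.txt"})
```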
One rule of thumb I personally try to apply is: if this data is gone, how long does it take me to produce it again? If I know the code can recreate the data and it just takes computation time — say, a week — it's nothing too serious. But if I know that my whole thesis is relying on this one data file and it's on a USB drive somewhere, then I should think that okay, maybe I need to keep it safe. Similarly, if you have a big project and you reach the point where you're getting somewhere, it becomes important to protect things, because now you would lose a lot of work. So I would look at it from that point of view: how much are you willing to lose if something happens? For me, that's usually something like a month's work. I think we should go on. Yeah. So there are two upcoming questions: what to do when you can't get help in your organization, and what do you teach in a CodeRefinery workshop — is it worth taking? Any thoughts on finding help? Yeah, try to find the relevant groups for your specific questions or your specific field. For computational questions, for example, Nordic RSE or CodeRefinery as such can be good resources, and they are normally happy to help. I just wanted to add that maybe we don't know the answer either, but we're happy to get more questions — for example on the CodeRefinery chat, or pop into the Nordic Research Software Engineers coffee break. Tell us what the problem is and we can try to connect you to the right community, the right people, the right solution. And to answer the question of what we do in CodeRefinery workshops: we teach the things Simo has mentioned — version control, how to collaborate with colleagues, how to document, how to test, how to have a reproducible workflow and environment, documentation for our code and scripts, and visualization — so that our work is reusable, not only for other people but also for our future selves. And is it worth it? I think so — I'm biased. I hope to see you there. No, go right ahead. Lately there's been a lot of recognition that the support side of computing is really important and often lacking, and we're trying to make things better. If you're in Finland, CSC is always available. Okay. Yeah, I would definitely also mention: pick and choose. Usually none of this costs anything; it's available on the internet, in the wider world. It doesn't really matter if it's a university in Japan that has written good instructions for their cluster, if they can introduce you to the correct concepts that you can then use on your own system.
So basically, if you feel that you've found good resources somewhere else — for example, MIT has courses on YouTube, and I often go and check what's there and what's new — somebody else may introduce the same concept to you better than we can. If you feel that helps you, then those channels are always good. It doesn't matter which way you learned the thing, as long as it helps you. CodeRefinery is an excellent place, but the internet is wide; there's lots of good stuff out there as well. Yeah. Next up is a question about keeping track of requirements and dependencies, for example package updates in R. Any thoughts on this? Well, I can quickly... Yeah, go right ahead. You've hit on my main reason for not really liking R: the update cycles are too fast. Yeah, and I think it's difficult. In your own code, try to specify as precisely as possible what versions you're using — especially in R, because things are changing so fast — and then, where possible, stick to those versions unless you need some newer stuff (see the sketch after this answer). In my opinion, it's always fine to use old stuff if it works and you don't need the newest features for something. If using the newest features makes your code better, okay, then it makes sense to change; but if it doesn't really improve your code as such, nor your performance, there's no reason to update — except if there are security issues, but then again, that would be improving your code. Yeah, I would also say that if you look at, let's say, machine-learning models that people have published on GitHub, the packages are often really, seriously outdated. There are technical solutions for this, such as Singularity containers, which try to pack the whole operating system into one file, and all kinds of things like that. (In the question itself there was an interesting technical solution I hadn't heard of, which I'll need to investigate.) So there are technical solutions for keeping track of the environment you need. But there's also another aspect: if you start a project and you want it to be used, you usually need it to be kept updated, and the easiest way of keeping it updated is to make it publicly available and easy to contribute to. Then, if somebody finds that a new version of some package introduces a bug, they can make a pull request on GitHub or file a bug report — "I made a fix for this, can you merge it?" — and the problem gets fixed, and you didn't have to do anything except click a few buttons and answer somebody. That is, if you want your code to be available in the future. Otherwise everything rots; code rots, it doesn't stay fresh on its own. That's a well-known problem throughout the world, not only in scientific computing, and the easiest way around it is to keep the code alive. If you don't keep it alive, eventually it becomes so rotten that nobody can run it anymore, and that's a problem — in scientific computing, reproducibility issues are common. So, yeah, it might sound a bit sullen, but I would personally just try to keep track of the requirements and maybe use some of these tools. But the main thing is getting a community around the project; that will make it easier to update, and it will stand the test of time.
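As a small illustration of the version-pinning advice above — a sketch with made-up package versions, shown in Python form (the renv package plays a similar role for R): record the exact versions your analysis is known to work with, so the environment can be rebuilt later.

```python
# requirements.txt -- hypothetical pinned versions.
# `pip install -r requirements.txt` recreates this exact set, and
# `pip freeze > requirements.txt` can generate it from a working setup.
numpy==1.24.3
pandas==2.0.1
matplotlib==3.7.1
```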
And one option might be using some sort of I/O library like HDF5, where you can actually organize your data a bit like folders and files in a normal file system. Of course you need a bit of learning to use it, but it has some other benefits too: it can store data quite efficiently, in principle you can use it for parallel input and output, and you can have metadata actually describing your data sets, and so on. So that might be one option. Another option I would recommend is to think about whether you need C++ to handle this at all. Can you use one of the more generic languages to, let's say, launch the program, and then do the computationally expensive stuff on the C++ side? All of the languages presented earlier can be extended with C++: Python has various libraries for it, Matlab has its MEX interface, and Julia — I don't remember what it's called, but it also has a C interface. So is it possible to run your C++ code through something else, so that the higher-level side creates all the folders and handles all the user interaction and things like that, while the actual calculation is done underneath by the C++ code? That might also be an option. Yeah, for HDF5 there is actually a very easy-to-use Python package, h5py. So even if you're primarily generating the data with C++, with a few lines of Python code you can read the data and do the analysis. Yeah. Also, Simo, haven't you given some talks on data analysis formats and such — in case the question goes beyond doing something with CSV, to what are good formats for data? Yeah, and there's a lot more. Nowadays there's Arrow — PyArrow is just the Python implementation of it — which is a specification of how data is represented in memory, and Parquet is the file format that goes with it. For data sets, if you have table-like data, Parquet is the most common data format nowadays. So that's a separate data structure you might be interested in as well. HDF5 is great too; it's mainly used for 3D or 4D data, these kinds of big data blocks, but it's really good as well. It depends on the data. And I guess HDF5 is more for numeric data; if you have a mixture of, let's say, text and numbers and so on, then there might be other options that are more suitable.
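A minimal sketch of those two options (file, group, and dataset names invented for illustration): writing an HDF5 file with metadata attributes via h5py, and a small table to Parquet via PyArrow.

```python
# Hypothetical example: store a numeric block in HDF5 with descriptive
# metadata, and a small table in Parquet.
import h5py
import numpy as np
import pyarrow as pa
import pyarrow.parquet as pq

# HDF5: groups work like folders, datasets like files, attrs hold metadata.
field = np.random.rand(64, 64, 64)  # stand-in for a 3D result
with h5py.File("simulation.h5", "w") as f:
    dset = f.create_dataset("run1/temperature", data=field)
    dset.attrs["units"] = "K"
    f.attrs["code_version"] = "v1.2.0"  # made-up provenance tag

with h5py.File("simulation.h5", "r") as f:
    dset = f["run1/temperature"]
    print(dset.attrs["units"], dset.shape)

# Parquet: the common choice for table-like data.
table = pa.table({"time": [0.0, 0.1, 0.2], "energy": [1.0, 0.98, 0.97]})
pq.write_table(table, "timeseries.parquet")
```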
Our next question will be really interesting: what's the role of containers, such as Docker and Singularity, in scientific computing? Well, one role is that a container forces me, if I use Docker or Singularity, to write down all the installation steps. So one role and one benefit is that it's documented (a sketch of such a file appears after this discussion). There are other benefits: it's actually isolated, I can run a Linux container on a Mac, and I can run an Ubuntu container on an HPC cluster. But for me, one really huge benefit is that it's documented for the future. And then it really helps with reproducibility. As we discussed — if you try to document the software together with the data and so on — with a container there is one specific container. Also, the operating systems and system libraries in supercomputers can sometimes be a bit outdated, and if you have software with complex dependencies, developed more for, let's say, a standard Linux box, it might sometimes be difficult to install in a supercomputer environment; containers can really help there. From a performance point of view, as long as you stay within a single node, I think the overhead from containerization is relatively small. If you want to run on multiple nodes, you can still use containers, but things get more hairy. Yeah, I would say that Docker is very popular nowadays. The problem with Docker is that you basically have to own the machine it runs on — you need root-level access to it. So it's very popular when you're running stuff in the cloud, on Amazon or wherever. It's good for development, when you want to contain the whole system in there, and Singularity is a good way of doing the same on the HPC side. But for both of these, you need to have the correct nail for them to be a good hammer: you need the kind of problem where you need a specific system, where you need specific things to happen. If your code doesn't have that kind of strict dependencies — if it doesn't need, say, one specific OS or library version to run — it might not be worth using them, because they might cause problems. At the same time, they might make your code more easily reproducible for somebody else. It depends. Maybe before we go on — I realize, can someone define what a container is and how it works for scientific computing? Well, I would say there is no difference between a container in scientific computing and in computing in general. But please define what it is, because we're assuming people don't know what it is in general. You could think of a container as a lightweight virtual machine. With a virtual machine, the whole operating system, including the kernel, is included. With a container, you use more of the features of the host operating system; you are not virtualizing everything. It's a sandbox on top of the host operating system — it typically uses the same kernel as the host — so you can have different libraries on top of that. Yeah, I would maybe give this analogy: if you've seen the movie The Truman Show, the main character lives in a dome as the star of a reality TV series, and he doesn't know he's there. A container is basically like that: it's not the actual world he's living in, but everybody else is pretending that it is — "okay, this is the world." So it works like this: say you have an application that requires various different things. Your application is basically Truman — it thinks it's going to work on a normal day. But in reality, everybody around this person is playing a role placed around them. The whole operating system, everything like that, is not fake exactly, but it comes from outside; it's defined by the developers of the system. And the real world, what happens outside of the dome, is not something the person inside sees — the person cannot see outside of the container. So your application can do whatever it wants in this fantasy world. I would say it's like that. Okay. Yeah. If there are no more comments...
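A minimal sketch of what "the installation steps are written down" means in practice (the base image, packages, and file names here are just an example): every step needed to rebuild the environment lives in the Dockerfile itself.

```dockerfile
# Hypothetical Dockerfile: each line is a documented installation step,
# so anyone (including future you) can rebuild the same environment.
FROM ubuntu:22.04

RUN apt-get update && apt-get install -y --no-install-recommends \
        python3 python3-pip && \
    rm -rf /var/lib/apt/lists/*

# Pin the analysis dependencies (see the requirements discussion above).
COPY requirements.txt /tmp/requirements.txt
RUN pip3 install --no-cache-dir -r /tmp/requirements.txt

COPY analysis.py /opt/analysis.py
CMD ["python3", "/opt/analysis.py"]
```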
The next question is: should I write scripts as a series of functions executed in order, or go for classes, when I want to program a flow process? Hmm. To me, the question isn't so much which programming technique to use, but how things fit together overall. I have a feeling that the most important questions are at a larger scale than this, somehow. I might have one opinion on this: many frameworks — take PyTorch as an example — have many things written as classes, and you can reuse a lot of the code available in the framework if your code extends those classes. So instead of writing everything from scratch, you can make, say, a PyTorch dataset, or whatever: you make your thing the same type as some existing class in the framework, and then you can reuse a lot of the existing stuff. You just have to implement a few functions here and there, and the rest you get for free (see the sketch below). So if you're using some framework, and you can get stuff for free by utilizing the framework's classes, classes are a great thing — when there's already existing infrastructure you can build on. But sometimes you end up in a situation where you write a class and methods for it, yet when you actually run the code, you only ever have one instance of it at any time, so plain functions would have been enough. Having the class can then make keeping track of state a bit more complicated — what's stored in there, what's in self — whereas individual functions can be easier to handle, because you can test them one by one, and they don't depend on the state of an object. So I would use classes whenever I can get something for them; otherwise I would probably go with functions.
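For instance, a minimal sketch of that extend-the-framework idea (the data here is made up): subclass torch.utils.data.Dataset, implement only __len__ and __getitem__, and batching, shuffling, and iteration come for free via DataLoader.

```python
# Hypothetical example: a tiny custom dataset that plugs into PyTorch.
import torch
from torch.utils.data import Dataset, DataLoader

class MeasurementDataset(Dataset):
    """Wraps (sample, label) pairs so the framework can consume them."""

    def __init__(self, samples, labels):
        self.samples = samples
        self.labels = labels

    def __len__(self):
        return len(self.samples)

    def __getitem__(self, idx):
        return self.samples[idx], self.labels[idx]

# Only the two methods above were written by us; batching and shuffling
# are reused from the framework.
data = MeasurementDataset(torch.randn(100, 8), torch.randint(0, 2, (100,)))
for batch, labels in DataLoader(data, batch_size=16, shuffle=True):
    pass  # training step would go here
```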
I just wrote in the HackMD that maybe the real question is about workflow managers: if you're connecting lots of separate things that belong together, maybe the answer isn't to make these connections yourself, but to have something that pipelines the different steps in the process. And actually, one of the lessons in the CodeRefinery workshop is about reproducible research, and we go into some of these workflow automation tools, like Snakemake.
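For the curious, a minimal sketch of what such a pipeline looks like in Snakemake (all file and script names invented): each rule declares its inputs and outputs, and the tool works out the execution order and re-runs only what changed.

```python
# A hypothetical Snakefile: running `snakemake` builds results/figure.png
# by chasing the input/output chain backwards.
rule all:
    input:
        "results/figure.png"

rule clean_data:
    input:
        "data/raw.csv"
    output:
        "results/clean.csv"
    shell:
        "python scripts/clean.py {input} {output}"

rule plot:
    input:
        "results/clean.csv"
    output:
        "results/figure.png"
    shell:
        "python scripts/plot.py {input} {output}"
```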
The next question is related to code organization: how to handle boilerplate code in C++ efficiently — should you start writing classes at the beginning, or use them as a way to refactor code later on? I can mention one thing — I don't remember where I read this — called the rule of three. There are actually many rules of three for many different topics, including several in computer science and programming. This one says: if you do something once, just do it. If you do it twice, do it again. If you do it a third time, then it's worth looking at how often you might be reusing it, and making it really good and reusable. I don't think that's quite correct, because sometimes I know I'm going to be reusing things many times, so I'll do my best from the start. But the reasoning behind the saying is that until you've used something in several different contexts, you don't really know how general it needs to be, or what all the different possibilities are. And that's a great point: we cannot really anticipate what we will need in the future. So the first step is to get it to work; maybe that's good enough and we're already happy. If we need to change it, it's good if it's easy to change. So if I anticipated that I would have to change the code, I would start in a way that makes it easy for me to change. That doesn't directly answer the question, but for me these two questions — should I start with classes, should I start with functions — are really related, and hard to answer. Make it easy to change if you anticipate changing it; but let's get it to work first, and later maybe make it faster, and maybe make it nicer. And as was said, it really depends on your problem: if your problem naturally fits into classes, then I think it's a good idea to start with them; if you don't know that in the beginning, it's maybe a bit easier to start with functions. Not directly related to the question, but a really good point was made here: make it work. It doesn't have to be fast at first — making it fast is, for example, something the RSEs can help you with. If it works, that is; if it doesn't work, helping is always more difficult, because someone helping you doesn't necessarily know exactly what your aim is, doesn't necessarily know the complete logic or why you're doing certain things, and it takes a lot longer to get into it. Making it faster, on the other hand, is very often the same kind of procedure: you can switch some inefficient loops for direct matrix computations, which are quicker because they're implemented in lower-level code — whether in Python or even in Matlab, those are just inherently faster than any for loop. Get it to work first, however it looks initially. It shouldn't look too bad, but things can be improved. One thing that also came to my mind: I personally wouldn't know the answer to this, but if I had to start a new C++ project now, I would look up some style guide and just go with that — pick the style guide that looks most pleasing to my eye (there are plenty of them), and maybe one related to whatever I want this code to connect to. If I know I want the pipeline to go here and the data to go there, then look at the standard there — how do they write their code — and try to write similar-looking code. Then, later on, if both pipes are round but one is a little smaller, I only need a small intermediate piece to connect them; but if one pipe is square and the other is triangular, it might become quite complicated to connect them. So look at what the standard is around the thing you're trying to work with, and maybe go with that. Any other thoughts on that? So next we have the question: what's the difference between HPC and scientific computing? Can someone give a quick definition of HPC, although we'll see more definitions of it tomorrow? I think what is written there is pretty much it: using computing power much larger than you have in a typical desktop machine. Which of course means it's a moving target: my laptop, 30 years ago, would have been the world's most powerful supercomputer. And scientific computing you can already do with your laptop, depending on your problem. So scientific computing doesn't need HPC, and not all HPC is necessarily scientific computing, even though most of it is. And I would actually say that HPC is a kind of misnomer, at least as it's commonly used. You have an HPC resource that allows a lot of people to, essentially, offload their work from their own machines — that's what an HPC cluster currently is. There's a subset of users who actually need more powerful resources, multiple CPUs or multiple nodes and such, but on an HPC cluster those are, I would almost say, the minority. The others just have individual things that they run there, or they have embarrassingly parallel things — yes, run those on multiple machines when you only have one yourself. But to me, that's not something I would define as needing high-performance computation per se. I would almost say that "high performance" is a bit of a misnomer; it doesn't really tell you that much. I would put it more like resource-intensive computing: basically, there are problems that require so much money that nobody can provide everybody with their own machine. That's the situation, so the Ministry of Education or the university or the Academy of Finland or somebody provides the money, and for that we need a system that can provide value for that money. For that to happen, it needs to be a shared system among multiple different fields and multiple different users, and being a shared system brings its own tools and its own ways of doing things. But it also brings its own benefits: you can do big stuff on it, bigger things than you can do on a laptop. If you have plenty of big resources — maybe "big resource computing" would be a better name, or something like that.
But the performance only comes if you utilize the big resources; it doesn't come by itself. Like in a high-performance car: basically, the ceiling is high, but the floor is the same as on your laptop, so you need to utilize it to its full effect to get the performance, to get the stuff done. Basically, it's a place where you can run stuff that doesn't fit on your laptop, and somebody else pays for it. So, we're almost running out of time here. There are a few questions at the bottom which were answered already above. Please keep adding questions; we can also answer during the break by text. Maybe this one — physics students like to write C++, and so on — is there anything else to add beyond what Radovan has written there? Maybe not; we're running out of time, so maybe we can go ahead. I probably want to answer the next question, which has been asked in multiple ways in many situations: how do you estimate the resources needed for a job? We will talk about this in the coming days, when we actually make these resource requests. But I would say the baseline is: does it run on your laptop? If it runs on your laptop, you can look at the back of the box, so to speak — how much resources your laptop has — and request a similar amount. We also usually have tools with which you can check, after the fact, how much a job actually used. If it doesn't run on your laptop, then you multiply — most likely the memory — by some factor, check whether it runs, and if it runs, check how long it takes. Then I would use the laptop as a benchmark, as a comparison: if it took four hours on your laptop, put in a similar resource request — four hours, a similar number of processors, a similar amount of memory — and see how it goes. Yeah. Okay, so I guess we need to go for a break now. What happens after the break is we get started with connecting to the computational cluster. This is really a preparation for tomorrow, when we get hands-on with the cluster and start doing all of these things. It'll start with a little demo, and then there'll be time in the learner Zoom breakout room if you need hands-on help. The rest of these days is really about using this cluster — like we were discussing what HPC is, and what the different ways of using the resources are. We'll start with the basics of how to connect and how to run programs, and then see how to get more and more resources in different ways, and how to make your programs usable there. Some of you may have come just for the first day and won't attend the other days; perhaps some people will come just for the later days and not today — but we will make it all work for everyone. Please feel free to continue adding questions in HackMD, and we can give answers via text during the break as well. Anything else to say before the break? Okay, I guess not, so we can break for 10 minutes; we'll start a little after the hour. Okay, great. Thanks to everyone who was here so far. And I'd like to thank all of our panelists — Radovan, UC, Thomas and Simo — for making this a very interesting discussion. We will see you in 10 minutes. Bye. Thanks, bye.