Good. Welcome to this tutorial. The first thing we need to do is log into the Amazon cloud. Amazon has kindly provided some instances for us, and Christian is the expert here. But as with all modern technology, when you need it, it plays up a little bit, so I'm doing the exercise and Christian does the commentary — that's international teamwork. I'm going to share my screen just to make sure you all can log in, and then we do the tutorial. You should all see that screen now, hopefully. This is how you start: you click "Accept Terms & Login". That's our Event Engine, as we call it, a system just to provide such accounts. Then click on the "Open AWS Console" link. If you want to do it in another fashion, of course you can, but if you are already logged into AWS with another account, you might need to log out here first. That's why you might want to use a private window — an incognito window — and start all over again, or just log out and sign in again at the Amazon Web Services sign-in page. Here we go, that's where you land.

At the top there is a search bar; just type "cloud9" and it will auto-complete. This is the Cloud9 environment. Cloud9 is just an IDE — like a lightweight Visual Studio Code as a web experience. You click "Open IDE" to open the editor. We could do this in multiple other ways, but the beauty of this is that we don't need to debug how you logged in, whether you used your Mac or your Windows machine or whatever; it's a web experience, so it's easy to debug. And if you lose your connectivity, you just go back to this site and it will pick up where you left off. That's also easy.
You can go to the preferences pane and then to Themes — at the very bottom, sorry, the very bottom. I always use a flat theme; I think it's nicer, but that's just preference. For following along, you probably want to increase the font size of the terminal: under Terminal, go to Font Size and you can see how that changes the font. If you use the built-in editor as well, there is a separate Editor font size a little further down. And that's basically it.

You had one command in your PDF to delete all the Docker images, because as of now you have only about 800 megabytes free. You can run that image-removal command, or you can do "docker system prune" — with a blank between "system" and "prune"; one day I'll learn how to type — which removes all dangling images. Or you remove the images directly: docker rmi $(docker images -q), with -q to get just the image IDs. It's either "docker image ls" or "docker images", the old way. And now you've got about three gigabytes, something like that. That is something we can work with.

Has everybody managed to get that far? If somebody could watch the chat, because I don't see it. Assuming everybody is exactly at that stage, we've now got a bit of space to actually work with. As I said, there are a few things we require. One of them is the Singularity EasyBuild GitHub page, which I set up. Give me a second, please — I've got a little cheat sheet for myself, because as you've noticed, you very quickly mistype all these commands. So I'm lazy.
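The Docker clean-up just described can be summarised as follows. This is a sketch assuming Docker is installed on the Cloud9 instance; the exact free-space numbers on your machine will differ.

```shell
# Check how much disk is free before cleaning up
df -h /

# Option 1: remove dangling images, stopped containers and build cache
docker system prune

# Option 2: remove ALL local images explicitly
# ("docker images -q" prints just the image IDs; "docker image ls -q" is the newer spelling)
docker rmi $(docker images -q)

# Check the result
df -h /
```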
I'm just doing copy and paste, so you can copy and paste the commands from the PDF file, for example. The first thing you do is simply clone the Singularity EasyBuild GitHub repository. You will then see the folder from GitHub. If we go inside, there are two folders of interest: one of them is scripts, the other is definitions. definitions is where I have already uploaded files which I've used to build Singularity containers — that is simply for sharing. The more interesting one is scripts. If you go into scripts, you will see a number of already prepared Bash scripts which create the Singularity definition file, and that file will then be used to build the Singularity container.

As we are restricted in time, what I'm going to do is show you how to build the container, because that takes some time, and then do a little bit more explanation. You probably need the path: you can either copy these files into your bin folder, or set your PATH accordingly. I just leave it where it is and take a copy of the script, because that makes my life a little bit easier, hopefully. I go back to the home directory, make the singularity directory — I apologise for any typos I'm making — and now I execute that script. We are going for this one here: it will build a container using CentOS 7, using environment modules, and using the Python 3 which comes with CentOS 7. That Python is only used to install EasyBuild; if you want a different Python, you can build it with EasyBuild. We need to tell it what to build, and here we only need the name of the easyconfig file — I'll tell you in a moment where to get that from. In our case, I suggested bzip2-1.0.6.eb. It tells you that you now have a Singularity definition file.
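The steps up to this point can be sketched roughly as below. The repository URL and the generator script name are placeholders — take the real ones from the tutorial PDF.

```shell
# Clone the tutorial repository (URL is a placeholder -- use the one from the PDF)
git clone https://github.com/<user>/singularity-easybuild.git
ls singularity-easybuild        # contains definitions/ and scripts/

# Work in a fresh directory and keep a copy of the generator script
mkdir ~/singularity && cd ~/singularity
cp ~/singularity-easybuild/scripts/<generator-script>.sh .

# Run it; when asked which easyconfig to build, give:
#   bzip2-1.0.6.eb
./<generator-script>.sh
```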
That might be quite handy if you are publishing it: if you want to include, for example, an author email, you can put more metadata in here — that's not a problem. It also asks whether you want a second EasyBuild recipe. If you're writing your own EasyBuild recipes or configuration files, or you want one from, for example, the develop repository, that would be the point where you say yes and provide the file — not the file name, the actual file. You don't have one, so we say no. That means we have our Singularity definition file here; I'll show you in a moment how that works.

What we need to do next is install Singularity: sudo yum install singularity. We've got sudo access here, so that doesn't take too long — and sudo access allows us to do things you usually can't do on a cluster. Now we've got our definition file, and we need to build the Singularity container. This is where all the fun starts, and it is actually quite simple: sudo, because when we build a Singularity container we need to be root; singularity build, because we want to build something; and the name of the image. I usually — I'll scroll up a little bit — just copy this bit here, because I'm lazy, and add .sif, for Singularity Image File, and then we use the definition file. That should now start and rattle on and do something.

Instead of sharing my browser, I'm now sharing the whole screen — give me a second, I need to stop sharing and reshare. Whilst that is rattling on in the background, let's have a look at the Singularity definition file. Right at the top here is where you install a very basic image which can be booted. As we are using CentOS, the bootstrap method is yum; that is the OS version, we are using 7; that is the mirror URL, just leave it as it is; and we include yum in the container.
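Those two steps look like this. A sketch: the image name and the definition file name ("Singularity") are what the generator produced in my run; yours may differ.

```shell
# Install Singularity (we have sudo on this machine; on a cluster you usually don't)
sudo yum install -y singularity

# Build the container image from the generated definition file.
# Building needs root, hence sudo; ".sif" = Singularity Image File.
sudo singularity build bzip2-1.0.6.sif Singularity
```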
The next section is the post-installation — all of that is the %post section. First we update the image; the reason is that the packages might be a little older and there might be some security problems, so we update to make sure we have the latest installation. Then we install the packages which are required. Some of them are not required for your particular installation — I don't think that bzip2, for example, requires libverbs — but I try to build these scripts as broad as possible. If you are a bit more of an expert and you know what you're doing, feel free to modify that.

The next bit here is installing EasyBuild using pip. Here we simply check: has the user easybuild been created? If not, we create it. All of this bit here configures EasyBuild. Here we set up the actual build script: we create a script in the home folder of the user easybuild, then we change the permissions, and here we execute it — that bit is what I'm explaining in the slides as well. Then we tidy up. Down here, all of that sets up the environment for the Singularity container. An important bit, which I have to come back to later, is which module will be loaded inside the Singularity container. And here you see the labels: one of the labels, for example, is the author — I omitted my first name because it's complicated enough as it is — and the email address, and here, for example, it says BZ2. It's very simple; it's a text file, and once you get used to it, by all means feel free to change it. One thing you might want to change is this one here: export EASYBUILD_PARALLEL=4. It means EasyBuild uses only four cores for the build. If you've got more resources at hand, by all means use them; if you've got less, you might want to reduce it.
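Pieced together from this walkthrough, here is a heavily trimmed sketch of what such a definition file looks like. The section names (Bootstrap, %post, %environment, %labels) are real Singularity syntax; the package list, mirror URL and user details are illustrative, not the exact generated file.

```singularity
Bootstrap: yum
OSVersion: 7
MirrorURL: http://mirror.centos.org/centos-%{OSVERSION}/%{OSVERSION}/os/x86_64/
Include: yum

%post
    # update the base image first (security fixes, newer packages)
    yum -y update
    # install build prerequisites (kept deliberately broad)
    yum -y install gcc make python3 ...
    # limit EasyBuild to four cores; raise or lower to match your machine
    export EASYBUILD_PARALLEL=4
    # install EasyBuild via pip and create the unprivileged build user
    pip3 install easybuild
    id easybuild || useradd easybuild
    # write the build script into the easybuild user's home, chmod it, run it
    # (this is where the fetch and the actual eb build happen)

%environment
    # which module gets loaded when the container starts
    module load bzip2

%labels
    Author <surname>
    Email  ...
```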
So I'm stopping sharing that screen now and going back to the browser, because we are already done here — that is what I was looking for. As you can see if I scroll up a little bit, it did everything: it installed all these packages here, it set up EasyBuild, it did this bit here. This bit, as you can see, modifies the .bashrc of the user and adds this part here — we need that later. You will see in a moment why it is quite nice to have. eb --fetch means we download all of the source files first, and here we build. Now you might say: hang on a second, I've got more than one package to build — don't I have to use --robot, and, as it is not Lmod, tell EasyBuild which modules tool to use? The answer is yes, you do. But what we are doing is setting up an alias here, so you don't have to remember it; it's all done for you. For the experts: there is an alias set, and that might lead to problems if you're doing very specific things — but then you are the expert, and I'm pretty sure you will find out what it's doing.

Here it is downloading the file; then it stops, and then it would download the next tarball. Here it's only one tarball, very easy. Then it processes the downloaded file. The reason why we download first and only then go into building: if you want to install R, for example, and you download, build, download, build — I don't know how many packages and dependencies — you're nearly done with your build, and then one of the downloads fails. Not the build — the download fails, because there's an internet problem. So you've spent four hours on your build and then everything is just... well, start again. If it fails on a download, it fails in the first, say, ten minutes, however fast your internet is. That is not nice, but you can live with it.
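What the generated build script does inside the container boils down to the two commands below. A sketch: in the container a plain "eb" is aliased to include options like these, so you don't type them yourself.

```shell
# Step 1: download ALL sources and dependency sources first.
# --robot enables dependency resolution; --fetch only downloads, no building.
eb --fetch --robot bzip2-1.0.6.eb

# Step 2: only once every download has succeeded do we spend the hours building.
eb --robot bzip2-1.0.6.eb
```

The point of the split is fail-fast behaviour: a flaky network kills the run in the first few minutes, not after four hours of compiling.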
If it fails after four hours, I'm pretty sure you are a little bit annoyed. So what do we have? That's the more important bit: we've got our Singularity container. To run it — here we do not need sudo — singularity run, the bzip2 image, and now what we want to run: bzip2 --help. It prints out the version. Now you might say — okay, ignore that alias path at the top, I've got no idea where that is coming from — but isn't that the bzip2 from the system? Let's check. You can see it is a completely different version: the system version is from 2019, the version we installed — and I deliberately took an older one — is from 2010. So that is one way, if you say: I want to have R, and I don't want to sort out all the dependencies and God knows what myself — you can do that.

The next thing is: how do you unpack the container? Again, we've got our container here, and as I said, we can unpack it. Here we need sudo again: singularity build --sandbox. That basically gives you a chroot environment. I'm naming it CMake-3.12.1- and I copy the rest, because that worked the way I wanted, and the image it is based on is my Singularity container. That takes a moment: we literally unpack the container, and if you do an ls you will see we've got a new directory called CMake-3 and so on. Now we can go into that directory, and now we can install software inside that container. Again, that needs to be done with sudo; the command here is singularity shell -w, for writable, on the CMake sandbox. If you do whoami now, it comes up as root. We do not install software as root — not with EasyBuild — so we do su -l to get a login shell as the easybuild user. Now if you do whoami, that is the build user. And because we've got the environment now, if you do module av, it tells you there is only one module available, and that is bzip2 — but we've got EasyBuild installed now.
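The sandbox workflow just shown, as a command sketch — the sandbox directory name here is the one I used in my run, yours can be anything:

```shell
# Unpack the immutable .sif image into a writable, chroot-style sandbox directory
sudo singularity build --sandbox CMake-3.12.1-sandbox/ bzip2-1.0.6.sif

# Enter it writable (-w)
sudo singularity shell -w CMake-3.12.1-sandbox/

# ... now inside the container:
whoami              # root
su -l easybuild     # never build as root with EasyBuild
whoami              # the unprivileged build user
module av           # only the bzip2 module so far, but eb is available
```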
So what we can do is go ahead and say eb — and again, that takes a moment — CMake, I think the spelling is like that, -3.3.1.eb. If not, it will tell me. Yeah. And again, it does it all automatically. It fetches the files first — actually, I looked into that before; in this case it is not fetching everything first, because I didn't use the fetch command, that was my mistake. It downloads the file, processes it, installs it — in this case that is ncurses — and then it moves on to download CMake. Once that is downloaded, it processes it and builds CMake. That takes a moment.

You might ask: where do I get all these file names from? They don't fall from the sky; they are all on GitHub. If you go to EasyBuild — and that link is actually in the tutorial pages — you want the easybuild-easyconfigs repository tree, as you can see up here. In the master branch, these are the file names you need. If you want a different branch, for example the develop branch, for whatever reason, you can switch it. I'm just randomly picking R, and then you've got R, for example, with the file names here. All you need to know is the name of the file. For example, you want to install one of the later versions of R, which would be one of these here, 3.4.4, and you say: I want the free and open-source software toolchain — and because I want graphics, it also includes all the X11 stuff I need. You can also use the Intel compiler; as you can see, there are some other possibilities. All of that is doable.

Going back to my installation here: as you can see, it has downloaded the CMake tarball, it is configuring it, and it is now building it. As we've got a moment — are there any questions, if somebody could check? Yeah.
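Besides browsing the easybuild-easyconfigs tree on GitHub, EasyBuild can also search the easyconfigs it ships with from the command line — a sketch, assuming eb is on your PATH as it is inside the container:

```shell
# Install more software inside the container; the dependency (here ncurses)
# is resolved and built first when robot mode is active
eb CMake-3.3.1.eb

# Search the bundled easyconfigs for matching file names
eb --search CMake
eb -S R-3.4.4        # -S gives a condensed search listing
```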
For questions, make sure you raise your hand in Zoom — you won't be able to unmute yourself — or ask them in the Slack channel, either the workshop channel or the EUM one, and we will relay them to Jörg. Maxime has a question. Yeah, Maxime.

Hi, can you hear me? — Yes, I can. — So that's nice. I'm wondering in what context... On an HPC cluster, typically a sysadmin or analyst will install software, and users use the modules that are there. In what context do you find having EasyBuild install software inside a Singularity container useful? In what context do your users use that?

I'm using it as a sysadmin. I'm using it if a user wants older software installed, where you've got a different GCC compiler, for example — and don't quote me on that, please — because it is easier for me to do it this way. Instead of working around the problem — okay, we've got GCC 7 installed, but the software requires 6, how do I solve that? — I simply have a GCC 6 Singularity container, for example; I open it up, do exactly what I'm doing here, pack it again, and say to the user: here it is. Equally, I gave a tutorial and showed users exactly what we are doing here. So if a user says, okay, I want the latest version of R, R 4, and we only have R 3.3 or whatever installed on the cluster — I tell you what, instead of raising a ticket and waiting, simply do it yourself, put it in a Singularity container, and that's it. That's one aspect: you've got predefined software where there is an EasyBuild configuration file.

The other one is that I'm working in the bioinformatics sector, where we are building pipelines. You get your sequence in — for example, from a COVID-19 test — and you want to know whether it is COVID-19, or flu, or a cold. The patient is downstairs, really wants to know what is going on, and is frightened. So you can't wait forever.
So here we can give the clinician a Singularity container and say: all you need to do is run this script, which in the background uses the Singularity container where we've got the pipeline installed, and this is your outcome. And because it is Singularity, it runs on a number of platforms: we can run it on our cluster, we can run it on a local Linux machine — install it once and forget about it. Does that answer your question? — Yeah, it does.

So that's done now, which is quite good. If we do module av again, we've got our bzip2 and our ncurses module — quite good. There is one more thing we need to do, and we should not forget it. We become root again by simply typing exit. We go to the Singularity container's root directory, and there is a file called environment. There was a reason I kept that line, because I need to copy it. First, yum install vim — I'm a vi person, so you'll have to forgive me; if somebody wants to use nano, pico or Emacs, I have no opinion here. If you open that environment file, you see it currently loads the bzip2 module, but that's not what we want, so we just change it over — it is working now... no, I hate that — we change it over to the module we want to load. Then we get out of the editor and out of the container.

Now we have to pack it again, and again that takes a moment. So I start off: sudo singularity build — sorry, one day I'll learn to type; I'm used to a different terminal, that's the problem — sudo singularity build, the name of the container, and what it is based on; in this case, that is my chroot environment. Let's hope that works. It does basically the same as before, in some respects, but instead of using a Singularity definition file, it uses my chroot environment.
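The edit-and-repack step, as a sketch. Older Singularity versions keep the startup environment in a file called environment at the container root, as shown here; newer releases move it under .singularity.d/env/. Module and file names match my run and may differ in yours.

```shell
# Back on the host, as root for the sandbox: install an editor and point the
# environment file at the new module
sudo yum install -y vim                       # or nano/pico/emacs -- no opinion here
sudo vim CMake-3.12.1-sandbox/environment
#   change:  module load bzip2
#   to:      module load CMake

# Repack the sandbox directory into a new immutable image
sudo singularity build CMake-3.3.1.sif CMake-3.12.1-sandbox/
```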
So that is a very good way if you want to install software which is not part of the EasyBuild software stack yet — because it's either too new or too specific and nobody has created an EasyBuild config file for it — or it is not one piece of software but a whole pipeline. If you are a bioinformatician and you want to develop software, that might be a very good idea, because you've got the latest tools, which might not be around on your cluster. These containers give you quite a bit more flexibility.

On top of that — my understanding is, I haven't tried it with Windows, I'll put my hand up here — it should run under Windows as well. It definitely runs under Linux, as you can see, and it does run under Mac. If you want to build a container under Windows or Mac, there is a project which allows you to do it — Vagrant, thank you — which basically utilizes the Oracle VirtualBox virtual machine, but you can do it all on the command line, so you don't need an X interface. If you've got a Debian desktop, for example, and you want to build for CentOS: I've got a CentOS container, so I just move into the container and build my Singularity container there. Vagrant also provides a Singularity box — I'm not quite sure how up to date that is, because I'm doing things differently. So you can run, for example, a Debian Singularity container on a CentOS machine, or on a Mac, and so on.

We've got our container — hooray, here it is. So: singularity run, the CMake .sif, cmake --version. That's interesting — okay, it prints it out after the error message: that is CMake version 3.3.1. Let's see if the help works a little better. It generates the help as you would expect, and hopefully it doesn't tell you about the version up here — sometimes it does.
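Running the repacked container, as a sketch — the image name follows my run:

```shell
# Run CMake from inside the container -- no sudo needed for running
singularity run CMake-3.3.1.sif cmake --version
singularity run CMake-3.3.1.sif cmake --help

# Compare with the host: cmake is not installed there, which shows the
# version above really came from inside the container
cmake --version
```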
Again, if you want to check whether that is the one installed on your system: I run the version command again, and if I type cmake directly on the host, it tells me it is not installed. So just to demonstrate: we are actually using the CMake inside the container.

Now, in the tutorial, I think right at the end, I've got an exercise for you. If you want to play around, I provide a Debian container, just to demonstrate that, which is in one of the S3 buckets. There is a very long string in a small font, because I wanted to have it as one string. If you do "aws s3 cp", then that whole URL, space, dot, it copies the container over. Be careful: that container is not exactly small — I think it's about 800 megabytes. But it gives you GCC 9.3.0 with environment modules, on a Debian distribution. So you can unpack the container if you've got enough space — you might want to delete a few things first. Bear in mind that when you unpack the container it will get larger, because the container is compressed. You can unpack it, go into it, and play around: install software, see what is around. You might be able to pack it up again, depending on the disk space. There are instructions, as I said, in the slides, and I hope all of that makes sense. I'm over time, but I hope there are still a few moments for questions before Kenneth turns me off, if there are any.

Sure, we still have time for some questions. Let's unmute Kapil, he has a question. — Actually, I was trying to follow along, and at some point I got a fatal error while trying to make the CMake container writable, and I'm not able to get out of that, so I stopped following. — Yeah. Do you know what's going on? — Without seeing what you've done, it's difficult.
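The S3 exercise mentioned a moment ago boils down to the commands below. The bucket path and file name are placeholders — use the full string from the tutorial slides.

```shell
# Copy the exercise container from S3 (path is a placeholder)
aws s3 cp s3://<bucket>/<debian-gcc-9.3.0>.sif .

# Unpack it into a sandbox to play with -- remember the image is compressed,
# so the unpacked tree needs noticeably more than the ~800 MB download
sudo singularity build --sandbox debian-sandbox/ <debian-gcc-9.3.0>.sif
```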
Maybe, Kapil, share the error message in the Slack channel, if you can. — Okay, I actually put it in the chat. — Okay, good. I've stopped sharing now, so I can see the chat. We will follow up there; that's easier for some back and forth. — Shall we put it into the Slack channel, if you don't mind? — Yeah, I will try that. — Just copy the error over and I will be around. Obviously I want to listen to some of the talks, but I will be around. It looks like a typo, I think.

There are two more questions, Jörg. — Yeah. So, Sebastian, are you first? Let him unmute. — Yes, thanks. I was wondering, as you start to really build your bio applications, about the size of your Singularity image, because right now we have installed basically nothing — nothing with a toolchain or whatever — and we're already consuming 330 megabytes. So I was wondering, in general: once you have a fuller software environment that requires a full toolchain and so on, how do you deal with that?

Unfortunately, the containers do get quite big — that is something I'm perfectly aware of. They can get up to a gig, over a gig if you're installing R; they get quite fat. I think that is a difference between Singularity and Docker: in Singularity you've got a full-fledged environment, in Docker a more stripped-down one. So there might be many things inside a Singularity container which, strictly speaking, you don't need — but the flip side is, of course, that it is portable. So yes, they can get quite big, absolutely no question about that; it's just the way it is. Just to give you a few numbers: most of the foss containers I have — foss 2019a with CentOS 7, for example — are 948 megabytes. And as I've mentioned R so often: R 4.0.0 built with foss 2020a and Lmod, for Debian 9, is 2.6 gig. So you can very easily go up to 3 gig here.
So, yes, they are big. — But maybe that's more a question for Kenneth: maybe it would make sense, for every toolchain, to provide a base Singularity image that contains the initial toolchain, because then it's easy to build anything on top. — We actually have some of those already. We created some — it's not the very latest EasyBuild version, and probably not the latest toolchain version — but we took that approach for the EasyBuild tutorial, and I will put the link in the Slack channel. There we were also using basically the same environment that Jörg is playing in, the Cloud9 environment. I think it was Docker, not Singularity, but we created Docker images with a pre-installed software stack in them that, for example, had a full toolchain — so MPI and FFTW and all of that — already in it, and then people were building actual proper software with a proper toolchain on top of that.

Currently, we don't have a collection of containers for different toolchains or anything like that. It's possible, but somebody will have to actively follow up on it, or we would have to make it fully automatic somehow and have enough resources to build those containers. There are options, and GitHub has its own container registry now, so rather than using something like Docker Hub, we will probably park containers there, since most of what we do is on GitHub. But that needs to be backed by sufficient resources to build those containers. That's not something you can do in GitHub Actions, in the CI environment, for example: you only get two cores and, I don't know, half an hour or an hour to run tests in, and there's no way we can build GCC or FFTW with two cores in an hour. That's just not going to work. So if there's a lot of interest in that, yeah, as a community we could definitely take a look at it. — Yeah, I would. — Sorry, one question.
Did you initialize that with environment modules — the Tcl/C version — or with Lmod? — Oh, I would have to check what we did. I think we were using Lmod there, but I'm not sure. But you can do both, and from a user point of view there's not a big difference as long as you do basic stuff. — Okay, thanks. — Yeah, the scripts I'm providing use Lmod, because it's easier for EasyBuild; it's more leaned in that direction. Unfortunately, on the cluster we are using environment modules, and when I first started using Lmod on our cluster, even though it is a container, the way our cluster is set up, something was leaking either into the container or out of it, and the modules were not quite happy. That is the reason I switched over to environment modules.

I see there are lots more questions. Since we want to prepare for the next talk and get Shazab set up well in time, I suggest we move this to the breakout room. Let me create that... — He was doing that. — Ah, okay, go ahead. — Okay, I'll let you do that. So we will set up the breakout room; if you have follow-up questions for Jörg, please jump in there, and we will make sure you can unmute yourself and that Jörg can answer your questions. And with that, we will end the session and the stream here, so we can get set up for Shazab.