Okay, next up, let's start with how to use Singularity. For those of you who don't know what Singularity is, maybe you're familiar with Docker, the container engine. A lot of people love Docker and it's been really fantastic for containerizing jobs and isolating them, but Singularity offers some things on top of Docker, namely that it's a lot more HPC friendly. With Docker, as you know, there is a system daemon, the Docker daemon, and all communication happens with this root-level daemon; containers are launched by individual users who have permission to access it. This is not a great fit for a lot of HPCs and computing environments where there are stricter security requirements, and that's why Singularity was created. It allows you to run containers at the individual user level, with a much more constrained security interface, which is really nice for HPC administrators and system administrators in general. Galaxy has the ability to run jobs in containers, specifically Singularity containers, Docker containers, and others. We will set up Galaxy today, and for the rest of the training, to use Singularity containers by default: every job that we run on the Galaxy server will be launched through a Singularity container. This builds on the BioContainers project. Whenever the community builds Bioconda packages, these automatically get built into BioContainers, which are Docker and Singularity containers for every Bioconda package. This enables much better reproducibility. For instance, when you use Conda, every time you install a package you need to resolve the environment, and there can be differences over time between packages, et cetera. With Singularity, you know for sure that this one container will never change; it is always the same unit. So we're going to use the BioContainers project, together with Singularity, to launch all of our Galaxy jobs extremely reproducibly.
So we're going to start by installing Singularity. I'm going to copy this over and go back to our server. We're starting again: we've got our server running, all of our processes look good, Ansible is set up, everything looks good. Let's start by adding our new requirements to the requirements.yml file. At the bottom of that, I'll add the Singularity role and the Golang role. Singularity is a Golang project and as such needs the Golang compiler, because we have to compile it. Certain operating system versions do have prebuilt Singularity packages, but in order to make this tutorial generic for everyone, we won't be using any prebuilt packages. Next, we need to install our roles, of course. Then we need to configure some options. We'll set some variables for Golang, saying which location we want Golang to be installed in, and some variables for Singularity, like which version of Singularity we want and where it can find the Golang binaries to compile with. We're going to put those in our group_vars/galaxyservers.yml again, at the very bottom. Then we need to add the new roles to the playbook, galaxy.yml, and note that we'll do this before the Galaxy role, because Singularity is a dependency of the Galaxy server itself. There's a little bit of an asterisk there that I'll discuss in just a minute while it runs. Okay, so we've got our new roles added, we've added the variables to group_vars/galaxyservers.yml, we've added the new Golang and Singularity roles to our requirements file, and we've added them to galaxy.yml. So I think we're ready to go; I'm just going to run the playbook. This will take a second to run in the background. You can see a lot of things being changed, which looks good. The Go SDK is being installed in /opt, and then the Singularity packages are being built. When it's done, we should be able to run the command `singularity run docker://hello-world`.
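To make the steps above concrete, the additions might look roughly like the sketch below. The role names, versions, and variable names here are assumptions for illustration only — use the ones given in your training materials, since the actual roles (and the variables they expect) vary:

```yaml
# requirements.yml (append) — hypothetical role names and versions
- src: gantsign.golang
  version: 2.6.2
- src: cyverse-ansible.singularity
  version: 1.0.0

# group_vars/galaxyservers.yml (append) — variable names are illustrative
golang_install_dir: /opt/go
singularity_version: "3.7.4"
singularity_go_path: "{{ golang_install_dir }}/bin"
```

In the playbook (galaxy.yml), the two new roles would then be listed before the Galaxy role itself, since Singularity must exist on the machine before Galaxy's job configuration references it.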
So what this says is: Singularity should run a container, just like the `docker run` command does, and we will be pulling it from Docker Hub. Singularity knows how to talk to Docker Hub and can pull containers and run them. While we're waiting for this, I'll just show you what's in the /opt folder. There's certbot from yesterday. We additionally have this go folder, which should be downloading the requested version of Golang, and the singularity folder, which has a copy of the full Singularity source code. Okay, that's run, and the playbook is going to continue running in the background, so I'll continue showing you how this works. Let's try `singularity run docker://hello-world`. What happens here is it says "converting OCI blobs to SIF format". OCI is the Open Container Initiative image format, I believe, and Singularity has to convert this into its own format, SIF, in order to be able to run it. It needs to build the image a little bit, unpack some contents of the Docker container, and then it's able to run it. And you can see "Hello from Docker! This message shows that your installation appears to be working correctly" — but we're not using Docker, we're using Singularity. So this is great to see. When you run this container again, you'll notice that it uses a cached image: Singularity only needs to convert the Docker container the first time; afterwards it can use the cached copy. Okay, fantastic. Next we'll configure Galaxy to use Singularity, so let's do that now. We'll edit our group_vars/galaxyservers.yml, and we need to set up the dependency resolvers config file and the container resolvers config file. These will tell Galaxy how to resolve containers and which methods Galaxy should use for resolving dependencies. We also need to add this config files block here; we already have a config templates block, so we're just going to add the config files block next to it. Okay, that looks good.
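The group_vars additions described here might look something like this sketch. The `galaxy_config_files` structure follows the Galaxy Ansible role's convention of `src`/`dest` pairs, but the exact option names should be checked against your Galaxy release:

```yaml
# group_vars/galaxyservers.yml (sketch)
galaxy_config:
  galaxy:
    dependency_resolvers_config_file: "{{ galaxy_config_dir }}/dependency_resolvers_conf.xml"
    containers_resolvers_config_file: "{{ galaxy_config_dir }}/container_resolvers_conf.yml"

# next to the existing galaxy_config_templates block:
galaxy_config_files:
  - src: files/galaxy/config/dependency_resolvers_conf.xml
    dest: "{{ galaxy_config.galaxy.dependency_resolvers_config_file }}"
  - src: files/galaxy/config/container_resolvers_conf.yml
    dest: "{{ galaxy_config.galaxy.containers_resolvers_config_file }}"
```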
It'll copy these source files over to the configuration directory afterwards. So let's find out what needs to go in there. The files/galaxy/config directory may be new, so we'll create it if it hasn't been created already. And in there, we're going to set up no dependency resolvers at all; that's something very new that's possible. Galaxy is able to resolve dependencies through a couple of different mechanisms, but the container resolvers are not listed in this file, because they interact a little bit differently with Galaxy, and Galaxy can sometimes resolve things in multiple ways and have them work together. Normally, in the dependency resolvers file, you would list something like tool shed repositories, or Galaxy packages, or Conda packages. This time we don't want any of that; we don't want Conda, we don't want anything. Galaxy is just going to resolve everything through the container resolvers configuration. So we need to create this file, files/galaxy/config/container_resolvers_conf.yml, and in there we're going to paste these contents. What this says is: first we will try to resolve it through an explicit Singularity resolver — does this image come directly from Singularity? If that fails, can we find it via cached mulled Singularity? Cached mulled Singularity uses the mulling process that Galaxy uses for resolving packages. In the Galaxy world, there are tools that sometimes have a single dependency, which is fantastic: we can just use the container that has only that dependency. But sometimes Galaxy tools need multiple dependencies, like Python and samtools, something like this. For those, there's no easy way for us to take container one and container two and overlay them somehow. So what we do instead is something called mulling: we take the list of all of the dependencies, we generate a hash — a reproducible identifier — from all of those dependency strings, and then we build a container with all of those packages included.
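The resolver file might look like this sketch; the type names are Galaxy's container resolver plugins, and the order shown here mirrors the failover described above. Alongside it, dependency_resolvers_conf.xml would contain just an empty `<dependency_resolvers></dependency_resolvers>` element, disabling Conda and tool-shed package resolution entirely:

```yaml
# files/galaxy/config/container_resolvers_conf.yml
# Tried in order until one resolver produces a usable container:
- type: explicit_singularity        # tool explicitly names a Singularity image
- type: cached_mulled_singularity   # reuse a mulled image already on disk
- type: mulled_singularity          # pull a pre-built mulled image
- type: build_mulled_singularity    # last resort: build the mulled image locally
```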
And this is called a mulled container, or a mulled image. We'll look first for the cached mulled Singularity container, because this needs to happen on demand. If that doesn't work, we'll check the mulled Singularity resolver: is there a container out there that already has all of those requirements for us? And if that fails, we can also build the mulled Singularity container ourselves, taking an empty container and installing all of the dependencies into it. This gives us a good level of failover to make sure that things should work. Lastly, we need to update our Galaxy job config. We're going to update the local destination to have a couple of variables that are required for setting up Singularity. We're going to delete the local destination and install the new Singularity-based one; it should look about like this. So we've got our local destination with the local runner plugin, which should match this; the default destination should match here and here; and we're going to set this new parameter, `singularity_enabled: true`, on this destination. This says to Galaxy: this is a Singularity job, it needs to run in Singularity. We're also going to set a couple of environment variables using the Galaxy job configuration mechanism. We're going to set `LC_ALL=C`. For those of you who work on operating systems in languages other than English: sometimes software isn't prepared for that, so by default we're going to set the locale for all of the software our tools run to C. This is a level below even the localized options like English; it's just ASCII, very basic localization. This is not so important, but it's good for consistency. Next, we'll set the Singularity cache directory. This is the directory that holds the containers that get converted; you can clear it out if you need more space, and new containers will be converted as needed.
And lastly, we'll set the Singularity temp directory, which Singularity uses to build part of the filesystem that it needs. With that, that looks good, and we'll be ready to run the playbook, so let's do that now. I'm going to pause the video very shortly and come back when it's done. Okay, it looks like everything completed successfully, so now we'll get on to the next part. Now that our playbook has run, Galaxy is configured to use Singularity in the job configuration, and it's configured to resolve containers through the resolution methods we chose. We're going to install a tool and test it out. So let's go over to our Galaxy. You should be logged in as an administrator; if you haven't done so already, you'll need to do so now. We're going to go to the admin menu up here and then install a tool: find the tool management section, "Install and Uninstall", and then search for the tool minimap2. You'll find a couple of results; we want the first one here, minimap2. And down here, we can see a couple of different revisions that we might want to install. We're going to install the latest one, and I'm going to install it into a section called "Mapping", as it is a mapper. You'll see it switches to the cloning state shortly, and then it should switch to installed. Okay. Go back to "Analyze Data" to get back to the actual tool itself. "Map with minimap2" is installed. Fantastic. So we're going to upload a quick test file. This is the default way to upload files, but there is a new way to upload files, which your users might like to know about: you can go through browse datasets and upload directly from the tool form, which is super cool. For those of you who aren't Galaxy users: usually you have to upload files first, and if you forget one after you've made changes to your tool form, you have to go back, upload the file, and then go back to your tool and re-edit it. So, okay, it says we will use a built-in genome index.
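Put together, the Singularity-enabled destination described above might look something like this in group_vars/galaxyservers.yml — a sketch only, since the job configuration syntax has shifted between Galaxy releases, and the `galaxy_jobconf` variable and key names here are assumptions:

```yaml
galaxy_jobconf:
  plugins:
    - id: local_plugin
      load: galaxy.jobs.runners.local:LocalJobRunner
      workers: 4
  default_destination: singularity
  destinations:
    - id: singularity
      runner: local_plugin
      params:
        singularity_enabled: true
      env:
        - name: LC_ALL                 # force the plain C locale for tools
          value: C
        - name: SINGULARITY_CACHEDIR   # where converted SIF images are cached
          value: /tmp/singularity
        - name: SINGULARITY_TMPDIR     # scratch space for image builds
          value: /tmp
```

Clearing the cache directory frees disk space at the cost of re-converting Docker images on their next use.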
"Use the following dataset as the reference sequence" — yeah, actually it should be "use a genome from history and build index", and we'll be using "Pasted Entry" as the reference dataset. We'll get this fixed in the training materials shortly. And we want to select the FASTQ dataset, the file we uploaded. So: use a genome from history, Pasted Entry, Pasted Entry. And with that, we're ready to execute. So we're going to go over to the terminal and check out what's going on in Galaxy. You'll see that it's already working on running the job; we've actually missed all of the interesting parts, so I'm going to add the lines argument so I can show you a little bit of what happened before. Up here, we received the installation request; down here, we actually started the job, and it says: okay, we have this container, we want to resolve it. It's an explicit Singularity container, because it just has the minimap2 command in it. It identified where the container is coming from, checked with the container resolver, found a container that it can run, and then it starts running it. And we'll see over here that it was successful; it produced a file, as we expect. If you look at the job information, you can see down at the bottom that it was run in Singularity, which is exactly what we wanted to see. With this, and for the rest of the week, you will see that all of your jobs are resolved in Singularity, and that's fantastic. So when you've completed this tutorial, please, please, please fill out the feedback form. We want to know if you had any issues completing it, or if you have any comments or concerns; just let us know what you thought, whether it was okay to follow, and so on. Thank you.