Hello everybody and welcome back to the Galaxy Administration course. In this session, we're going to be going through running remote jobs for Galaxy using Pulsar. My name is Simon Gladman and I'm one of the administrators of Galaxy Australia. I'm also a bioinformatician at Melbourne Bioinformatics at the University of Melbourne. Okay, to start off, we need to go to the training website, training.galaxyproject.org. You'll see this; we're going to click on Galaxy Server Administration, which I'm sure you've already done three or four times at least this course, and we're going to scroll down the administration topics until we get to the one that says running jobs on remote resources with Pulsar, and we're going to click on the hands-on. Okay, so a couple of questions first. Hopefully by the end of this, you'll understand how Pulsar works. You'll be able to install and configure RabbitMQ, a message queuing server. You'll be able to install and configure a Pulsar server on a remote machine. And you'll be able to get Galaxy to send jobs to that remote server. Now, some requirements before we move on. Hopefully you've all done the Ansible introduction, installation of Galaxy using Ansible, connecting Galaxy to a compute cluster, and you've also completed reference data with CVMFS. If you haven't completed all of these, I suggest you go back and do them in the order that they're listed here, because without these prerequisites this tutorial will not work. Okay, and the other thing you will need is a second virtual machine or server on which to deploy Pulsar, so that we can have Galaxy send jobs to it. The way we're going to do that is via the spreadsheet that lists all the training machines, which hopefully you all know a bit by now and have access to. Yours will look a little bit different.
This is the one we're using to record the video, so it only has a limited number of machines. However, the one that we're using for the course will have all the machines listed here. You'll see there are two tabs: there's VMs and there's Pulsar. Hopefully you have one of these training machines assigned to you, which has Galaxy on it, and you've already gone through and installed Galaxy on it. But if you click on this Pulsar tab, you'll notice that there are a bunch of other machines, and there'll be more than one listed here. At the moment there's only one listed because it's just me. So what I would like you to do is pick one of the machines that are listed on this tab and write your name next to it, hopefully one that doesn't have a name on it already. So I'm going to select this one. It has an IP address, it has a DNS entry and it also has a password, and I'll assign myself to it. Okay, we don't really need to worry about this just yet, but it's always good to claim a virtual machine early on. Okay, so we'll go back to the tutorial tab. Hopefully you've all watched the video of the slide show and you have a little bit of understanding of what's going on with Pulsar; this is a bit of a recap. Pulsar is the Galaxy Project's remote job running system. It was written by John Chilton of the Galaxy Project, and it is a little Python application that runs on a machine and talks to Galaxy. It allows Galaxy to send it jobs, and then Pulsar will run those jobs on behalf of Galaxy and send the results back to Galaxy again, which is all pretty cool. And the best part is that these two machines, the Galaxy server and the Pulsar server, don't need to be anywhere near one another. All they need to be able to do is talk to one another over a network or the internet. Okay, there's a lot of documentation here, an overview, et cetera. The overview is actually quite cool.
You can see here what we're going to set up today. So basically, here's your Galaxy machine. On it we've installed Galaxy, and inside it we have a job conf file and the Galaxy file system. We've run jobs on it, we've played around with Slurm queues and we've played around with reference data, et cetera. So hopefully you know all about this little green box here. Now, so that Galaxy can talk to a Pulsar server, especially a Pulsar server that's configured in message queuing mode, which is the mode that we're going to use for this tutorial, we need to install another server program on our Galaxy server called RabbitMQ. Within that application we need to set up a queue so that Galaxy can send the queue a job, and Pulsar can monitor that queue to know when it has jobs. So it acts kind of like an intermediary between the Galaxy server and the Pulsar server. Hopefully that makes a little bit of sense. The way it works exactly is that Galaxy will send a message to the RabbitMQ server, on the Pulsar server's particular queue (this little queue here), saying that there is a job to be run, and then it will monitor the queue for job status updates. So as the job gets processed, Galaxy will be monitoring it and, like all Galaxy jobs, in our history we'll get a grey box to start with, then it will turn yellow, and then it will turn green when it's finished. The Pulsar server monitors the queue. So this Pulsar, in MQ mode, monitors the same queue, and when a job appears it will take control of it. The Pulsar server then reads in the metadata, et cetera, and downloads the required data from the Galaxy server using curl. So it pulls the input data out of the Galaxy file system into the remote file system using curl. The Pulsar server will then install any required tool dependencies using Conda or, in fact, Singularity.
Then the Pulsar server will start running the job using its local mechanism and will send a message to the queue (this queue here) stating that the job has started. Once the job is finished running, the Pulsar server will send a message to the queue stating that the job has finished, this queue here again, and Galaxy will pick that up and change the colours in the history. Pulsar will then send the output data and the output metadata back to the Galaxy server, using curl again. And then the Galaxy server will acknowledge the job status and close the job on the queue. Some notes just before we move on. RabbitMQ uses the Advanced Message Queuing Protocol (AMQP). Transport of files and metadata occurs via curl from the Pulsar end in this tutorial, but you can use other file transport methods, and they're listed in this tip box here. RabbitMQ is written in Erlang and really doesn't add much overhead to the Galaxy VM, although in larger installations such as Galaxy Europe, Galaxy Main or Galaxy Australia, we have a separate server to run RabbitMQ for us. And you may have heard in my slide show that there are a number of different ways of configuring Pulsar: we can configure Pulsar in MQ (message queuing) mode or in RESTful mode. If you're interested in why we're not using the RESTful interface here, there's a bit of an explanation here. Okay, let's move on with the first part of this tutorial. The first part is that we need to install and configure a message queuing system on our Galaxy server VM. So we're going to install and configure RabbitMQ, which is an AMQP server application, onto our Galaxy server VM. Today we're going to use a slightly modified version of a role that was written by Jason Royle to install RabbitMQ, and it's currently being hosted by Galaxy Europe. So hopefully you understand all about Ansible and you understand what's going on here. I'm going to switch over to my shell here.
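As a conceptual aside, the round trip described above can be sketched in miniature with an in-process queue. This is purely illustrative (Python's queue.Queue standing in for RabbitMQ, and an uppercase call standing in for the actual tool run); it is not Pulsar's real code:

```python
import queue
import threading

# Two stand-in queues: in the real setup both live on the RabbitMQ
# vhost, reachable over the network from both servers.
jobs = queue.Queue()            # Galaxy -> Pulsar: job requests
status_updates = queue.Queue()  # Pulsar -> Galaxy: status messages

def pulsar_side():
    # Pulsar watches the job queue, "runs" the job, and reports back.
    job = jobs.get()
    status_updates.put((job["id"], "running"))
    result = job["inputs"].upper()  # stand-in for staging + tool execution
    status_updates.put((job["id"], "complete", result))

worker = threading.Thread(target=pulsar_side)
worker.start()

# Galaxy side: publish a job, then watch the queue for status updates
# (the grey -> yellow -> green progression in the history).
jobs.put({"id": 42, "inputs": "acgt"})
first = status_updates.get()
second = status_updates.get()
worker.join()
```

The real system replaces queue.Queue with an AMQP queue over TLS, and the fake "run" with curl-based file staging plus the actual tool invocation, but the publish/consume shape is the same.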
I'm going to log into my machine, so I'm going to ssh into gat-14. I'm pretty sure that's my one. I'll copy my password in. Okay, so I'm logged in to that machine. I might just make the text a little bit bigger, to make it easier for people to see. Alright, so I can see what we have in my user's home directory here. I'm going to go to the galaxy folder, and you'll know all about everything that's in here: we have our requirements file, Ansible config, our Galaxy YAML playbook, our hosts file, our group_vars directory, our roles directory and a templates directory. Okay, so we'll go back to here. The first thing we need to do is install a couple of roles. For this tutorial, we're going to be using the usegalaxy_eu.rabbitmq role, and we're also going to be using the galaxyproject.pulsar role. So firstly, we need to copy all of this and put it into our requirements.yml file. I'll open it in the editor and paste, and you can see now we have that in our requirements.yml file. I will now close it, and I will install those roles into our roles folder using the usual command: ansible-galaxy install -p roles -r requirements.yml. So I'll copy that and paste it here, and you can see we're downloading two roles and that's complete. Now if we look in our roles directory, you can see that we have usegalaxy_eu.rabbitmq and galaxyproject.pulsar, and they're the ones that we've just put in here today. Great. So we'll go back to the directory above. Okay. So now we're going to go and configure RabbitMQ, and there's a bunch of different things we need to do after we install Rabbit. We need to tell it who's allowed to use it, we need to give it some usernames, and we also need to define some virtual hosts. A virtual host is Rabbit's way of defining a broad queue group, so a group of queues.
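For reference, the two entries added to requirements.yml in this step look roughly like this (version pins are omitted here; use the ones given in the tutorial text):

```yaml
# requirements.yml (additions) -- versions omitted, see the tutorial
- src: usegalaxy_eu.rabbitmq
- src: galaxyproject.pulsar
```

Then `ansible-galaxy install -p roles -r requirements.yml` pulls both roles into the roles/ directory.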
Different users have access to different virtual hosts. So we're going to set up a virtual host for the Galaxy server, and we are going to allow a particular user to use that virtual host and communicate with Rabbit, adding messages to or removing messages from those queues. All right. One thing that's really important here is that Rabbit needs to be able to communicate via the network on a particular port, so we need to make sure that RabbitMQ has a port open in the firewall on our Galaxy server. The default port number for that is 5671. So we need to allow Rabbit to listen on 0.0.0.0:5671. For localhost, we can define a different port for it to listen on, and in fact for localhost connections we set it to 5672. For internet access, we set it to 5671, and we need to open that port in the firewall, which we've already done on all of the training machines. Okay. So let's actually get on with configuring RabbitMQ now. As you can see in the tutorial, the first thing it tells us to do is to edit the group_vars/all.yml file. This is a file that is applied to all hosts when we run Ansible playbooks. What we want to do is add our RabbitMQ queue password into it, so that when we run Ansible on our Pulsar machine, it will also get the same password. So I'll go: vim group_vars/all.yml. Hopefully you should have the CVMFS variables in here already. I will copy that line and paste it here. Now I really, really want to change this: it's not a good idea to use the default password from a tutorial in anything, it's not very secure. So I will make up some long password and get rid of the default. You can put anything you like in here; it doesn't matter what it is, as long as it's long and it doesn't make any sense. All right. So I'll close that. Now we need to edit the group_vars/galaxyservers.yml file, and we need to add some sections in.
In the nginx, sorry, in the certbot section, we need to let Rabbit have access to the certificate key, and we also need to restart the RabbitMQ server whenever we renew a certificate. And then we need to do some RabbitMQ settings. I'm not sure how you've been doing this in the past, whether or not you like to copy this into a patch file and then run patch; I prefer to go through it line by line, so that's what I'm going to do. vim group_vars/galaxyservers.yml. Here we are, and the certbot section is down here, I believe, for mine. Certbot. All right. So below nginx, we need to add in rabbitmq, and here we need to add in: systemctl restart rabbitmq-server || true. So let's paste that. Then we need to do a whole bunch of stuff down the bottom of this file; we'll put it all the way down the bottom, under the Slurm config that I have. All right. We want to add this whole section, and we'll go through it quickly. We need to set an admin password; once again, we'll change this to be something else. We're going to specify the version of Rabbit that we're going to install, and that's fairly important. That's because Rabbit is written in Erlang, and Erlang is very finicky as to which versions of Erlang will run on which operating systems, so we need to be particular about the version of Rabbit that we install. This one will work on Ubuntu 20.04, which is our current operating system. There's a table on the RabbitMQ website that specifies the versions for different operating systems and versions of Erlang. We'll also set up a RabbitMQ plugin: we're going to add the RabbitMQ management plugin, just in case we ever want to do some command line changes to RabbitMQ. And then we'll set up some config. We'll have a TCP listener on localhost at 5672, and then we'll have an SSL listener open to the internet at 5671, and here are the SSL options that we will set. We will then set up some vhosts.
We'll set up a vhost for Pulsar, galaxy_au. Then we will set up some users: we'll set up a user admin, and we'll set up another user, galaxy_au. The admin will have access to the root vhost, and galaxy_au will have access to this new vhost we set up before. And as you can see, we're going to be pulling the password out of our all.yml file for this. So I'll just grab all of that, paste it in and fix it up. And I will save that. Okay, and then I will update our Galaxy playbook to include usegalaxy_eu.rabbitmq, and we want to put it just above nginx; it's kind of important that it goes above nginx. If you click on the tip, it'll tell you that there's a bit of a circular dependency problem going on, which is why it goes there. usegalaxy_eu.rabbitmq, that's what it's called. All right, so that's done, and now we run the playbook. So let's do that. We need to add -u ubuntu; it doesn't say that here, but we need to do that for these machines. Hopefully you know this from the rest of the week. Oops, a typo, I made a mistake. There you go. And now we will run the playbook, with -u ubuntu here. Hopefully you've been doing this all week; either way, I need to do it for this machine. You know, sometimes Erlang can take quite a while to install, and Rabbit can take a while to install too, Erlang especially. So I will probably pause the video recording while Erlang is installing, just so you're not sitting there looking at a bunch of dead air for a long time, and to try to keep the file size down a bit when we get to that part. So yours may take longer than mine. All right, so I'm going to pause the video for a little bit; this is just because Erlang takes five minutes or so to install. It's finished. Okay, so now it's installing RabbitMQ itself; that shouldn't take too long. There we go. It's done.
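The group_vars/galaxyservers.yml additions we just walked through look roughly like the sketch below. The variable names follow the usegalaxy_eu.rabbitmq role as used in the tutorial, but the version, passwords and SSL options here are placeholders, so copy the exact values from the tutorial text rather than from this sketch:

```yaml
# certbot section: share the key with Rabbit and restart it on renewal
certbot_share_key_users:
  - nginx
  - rabbitmq
certbot_post_renewal: |
    systemctl restart nginx || true
    systemctl restart rabbitmq-server || true

# RabbitMQ itself
rabbitmq_admin_password: use-a-long-password-of-your-own
rabbitmq_version: "<the version the tutorial pins for Ubuntu 20.04>"
rabbitmq_plugins: rabbitmq_management

rabbitmq_config:
  - rabbit:
      tcp_listeners:
        - "'127.0.0.1'": 5672        # plain AMQP, localhost only
      ssl_listeners:
        - "'0.0.0.0'": 5671          # AMQP over SSL, open to the network
      ssl_options: "<certificate paths as given in the tutorial>"

rabbitmq_vhosts:
  - /pulsar/galaxy_au

rabbitmq_users:
  - user: admin
    password: "{{ rabbitmq_admin_password }}"
    tags: administrator
    vhost: /
  - user: galaxy_au
    password: "{{ rabbitmq_password_galaxy_au }}"  # from group_vars/all.yml
    vhost: /pulsar/galaxy_au
```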
It's adding the vhosts and adding the users. It removes the guest user, because we don't really want to have a guest user. I still need to change that admin password to a different long one; never mind, I can go back and change it in a minute. Okay. So now hopefully we have RabbitMQ running; let's have a look and see if it's working. The tutorial says systemctl status rabbitmq-server, so let's do that. And it's running. That's pretty cool. Very nice. Let's see if we have all the interfaces running; we'll do a diagnostics status and see what happens. Yep, we have everything running. Fantastic, that's really good. Just to make sure, we'll do a quick curl. So let's curl localhost on 5672. Yep, that worked. And we'll do the same thing to 5671 with -k. Yes, it's working. Beautiful. Okay. The next section of this tutorial is to install and configure Pulsar on a remote machine, so hopefully we have access to another machine. If you go to the Pulsar tab, you should see a list of machines, and you'll have your name next to one, as we discussed earlier in this tutorial. This is the one that I'm going to be using for this tutorial. So that's pretty good. Okay. So we need to install and configure Pulsar on a remote machine, and we need to create a new Ansible playbook for this one. We don't really want to run the Galaxy playbook on the new machine; we want to run a different playbook. So we're going to create a new playbook, and we're going to use the galaxyproject.pulsar Ansible role to install Pulsar and configure it all for us. There's quite a lot of information about some of the different dependencies and environments and users and configurations that we can set, and if you are interested, you can read all of this. We will be setting it up so that Pulsar will automatically install tools for us using Conda.
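If you'd rather script those listener checks than eyeball the curl output, a small sanity check is easy to write. This is just a convenience sketch, not part of the tutorial's playbooks:

```python
import socket

def port_open(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port can be opened."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# On the Galaxy VM, after the playbook run, you would expect:
#   port_open("localhost", 5672)  -> plain AMQP listener, localhost only
#   port_open("localhost", 5671)  -> AMQP over SSL
```

Note this only proves something is listening on the port; curl with -k (or rabbitmq-diagnostics) remains the better check that it is actually RabbitMQ answering.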
And we will need to know the fully qualified domain name or the IP address of the Galaxy server that we intend to connect to. We need to create a new group variables file, and this one is going to be called pulsarservers.yml. So let's do that. Okay. We want to put all this stuff in here; I'll copy it in first and then we will go through it bit by bit. Okay. So in the top part here, where it says Galaxy server URL, we need to put in the actual fully qualified domain name of our Galaxy server. And as you may remember, mine is gat-14, so I'm going to copy that and paste it here. Okay. Next, we're going to install Pulsar on our Pulsar machine to /mnt/pulsar, just so we know where it is. We're going to use pip to install it. We need some libraries so we can do SSL. We want to set it up using systemd, and we want to use the webless version of Pulsar; basically what this means is we're going to use the message queuing version and not the REST API. We need to install some dependencies: we're going to install pyOpenSSL, pycurl, drmaa (so we can use a DRMAA-based cluster later if we want to), and kombu, which is part of the message queuing system, plus some utilities. We'll then set some config for Pulsar. We're going to auto-initialise Conda, so that Conda will be installed on the first run of Pulsar, and we will auto-install tools as required on our Pulsar. In production, after you've got most of the tools installed, you probably want to turn this off, just in case you have 50 users at once trying to run a new tool on Pulsar and you end up with 50 concurrent installs happening. So you may want to set that to false and create the Conda environments manually. But for now, we'll leave it set to true. We'll set our staging directory, our persistence directory and our tool dependency directory.
Now this is the important bit here. We're going to set our message queue URL, and you can see that we are going to use a protocol called pyamqp. With pyamqp, we're going to have the user galaxy_au, we're going to grab our password out of our all.yml file, we're going to use our Galaxy server URL, which we've set at the top of this file, then we're going to port 5671, and we're going to look for the vhost, /pulsar/galaxy_au, and we're going to set ssl equal to 1. All right, now the really important thing here is this double slash. If we don't put in the double slash, it won't work. That's because when we defined the vhost, it had a slash at the front. Okay, we're going to set some AMQP settings, about the polling interval, whether we're going to retry publishing after we've finished, and so on. And then we are going to create a dependency resolvers file, and we're going to add Conda into it, so that Pulsar will automatically create Conda environments for us. All right, so that's done. We'll save that and go back to our configuration here. If you want to run non-Conda tools, you'll have to manually install them, and it's quite complicated; the best thing to do is to use Conda or Singularity. There's some documentation on how to set up Singularity if you want to do that later. Okay, now we need to add something to our hosts file, so that Ansible knows where to actually go and find our Pulsar machine. So I'm going to copy that, and, sorry, we're going to edit our hosts file. All right, so down here I'm going to create a new group. We're going to call it pulsarservers, and my Pulsar server is this one, gat-18. It also says here that we need to set the Ansible user to the username we're connecting with, which will be ubuntu, so we say ansible_user=ubuntu. Okay, so that's finished, and now we need to create a new playbook.
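Pulling the pieces of group_vars/pulsarservers.yml together, the key settings just discussed look roughly like this. Variable names follow the galaxyproject.pulsar role as used in the tutorial; the hostname and exact values are placeholders, so treat the tutorial's copy as authoritative:

```yaml
# group_vars/pulsarservers.yml (abridged sketch)
galaxy_server_url: gat-NN.example.org      # your Galaxy server's FQDN

pulsar_root: /mnt/pulsar                   # where Pulsar gets installed
pulsar_pip_install: true
pulsar_systemd: true
pulsar_systemd_runner: webless             # MQ mode, not the REST API

pulsar_yaml_config:
  conda_auto_init: true                    # install Conda on first run
  conda_auto_install: true                 # install tools as jobs need them
  # The vhost "/pulsar/galaxy_au" starts with a slash, hence the double
  # slash after the port -- without it the connection will not work.
  message_queue_url: "pyamqp://galaxy_au:{{ rabbitmq_password_galaxy_au }}@{{ galaxy_server_url }}:5671//pulsar/galaxy_au?ssl=1"
```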
This playbook is going to look very similar to the one that we created for our Galaxy server; it just has a lot less stuff in it. In the beginning, we're going to have a pre-task where we install some of the requirements we need for Pulsar. So we're going to copy that into pulsar.yml; paste that in. You can see we're going to operate on the hosts group pulsarservers, of which there is only one. We're going to install some packages at the beginning: we're going to make sure that Ubuntu has its build tools, it's got git, it's got Python 3 with the dev libraries, it's got libcurl (the OpenSSL variant) and libssl-dev, and it's got virtualenv installed. And then we are going to run the CVMFS role. And you might think, why are we running the CVMFS role? Well, we're going to run the CVMFS role on our Pulsar server so that the Pulsar server has access to the same reference data that our Galaxy server does. And that's pretty cool: it means that we can get our Pulsar server to run mapping jobs with BWA and have access to all the BWA references automatically, just like our Galaxy server does. And then finally we'll run the Pulsar role. That's pretty much all we have to do. All right, so I will save that, and then we need to run the playbook: ansible-playbook pulsar.yml. Now, remember, this time we're not actually running on localhost; we're going to be running on another machine, which is remote to us. We are gathering facts, and now we're installing some packages. If we want to see what's going on, I can log into that machine, so I'll go to another shell here, ssh into gat-18, and the password is this one. And now I'm logged in to the remote machine. If I run top, you can see that it's running apt at the moment, which is pretty cool. All right, in fact, it's got quite a long way: it's installed the packages and it's installing CVMFS as we speak, which is awesome. That won't take too long.
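The new playbook itself is short. This sketch matches the structure described above (pre-task package installs, then CVMFS, then Pulsar); take the exact package list and role names from the tutorial text:

```yaml
# pulsar.yml -- sketch of the Pulsar playbook
- hosts: pulsarservers
  become: true
  pre_tasks:
    - name: Install Pulsar's build and SSL prerequisites
      package:
        name:
          - build-essential
          - git
          - python3-dev
          - libcurl4-openssl-dev
          - libssl-dev
          - virtualenv
  roles:
    - galaxyproject.cvmfs     # same reference data as the Galaxy server
    - galaxyproject.pulsar
```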
Oh no, here we go. There it goes; it's getting through the setup now. All right, it's creating a Pulsar user, installing pip and installing Pulsar from pip. Now it's installing some Pulsar dependencies, all the ones that we specified, so pyOpenSSL, pycurl and kombu, et cetera, and then making the config files. All right, and it's done. Okay, so there we go. We've just run an Ansible playbook to install Pulsar on a remote machine: not gat-14, we've installed it on gat-18. So now we're going to log into that machine and have a look in the /mnt/pulsar directory. I've already logged into it, which is nice; I've got top running, so I'll get rid of that, and we'll have a look: cd /mnt/pulsar. You can see here we have four directories: config, deps, files and venv. In config, there are the config files: app.yml, which is the one that determines how Pulsar runs, the dependency resolvers file, which tells Pulsar to use Conda, and then some other bits and pieces. deps is where Conda will be installed. Inside files, there is persisted data and staging. Whenever Pulsar pulls data from Galaxy, it will put it into the staging directory, and every time Pulsar runs a job, the job working directory for that particular job will be put inside the staging directory here, organised by job number. And then, obviously, we have a virtual environment for Pulsar. All right, so let's have a look and see if Pulsar is running. And it is, there you go. It's currently installing Conda, because we asked it to do that automatically, if you remember. And here you can see now that we're talking to the AMQP server: you can see that we're talking to it at that location and the heartbeats are all running. So excellent, that's working. Okay, so now we have Pulsar installed and running on our remote machine. The next thing we want to do is tell Galaxy about our new Pulsar machine.
So we've installed Rabbit on our Galaxy machine, but that's got nothing really to do with Galaxy, and we've installed Pulsar on our remote machine and got Pulsar talking to our Rabbit server. But now we need to tell Galaxy about the Pulsar machine and the Rabbit server. We need to do a couple of different things. Mostly we need to change the job conf, and then we also need to tell Galaxy which jobs to send to Pulsar, and we mostly do that by tool. We're going to get it to run BWA-MEM on the remote machine. All right, so the first thing we need to do is add a whole bunch of stuff into the job conf. The way we're going to do this is we'll clear the screen and then open the job conf template under templates/galaxy/config, and hopefully your job conf will look something like this. It may not look exactly like this; I haven't done all of the dynamic destination stuff that you guys probably have. All right, it says here that in that file, in the plugins section, we need to add a new plugin, and this is the new plugin that we want to add. So I'm going to copy that. Sometimes the way this pastes things is annoying. All right, so we have a new plugin. This one is of type PulsarMQJobRunner. As you can see in this section, we give it the AMQP URL with the password, and this time we point to localhost, because this Rabbit server is actually running on the same machine as the Galaxy server. So we just point to localhost here, but we'll still use SSL. And we're going to set the republish time to 1200 seconds, and we'll do some acknowledgements; these are all AMQP settings that we want to set. Another really important one here is the Galaxy URL, and we're going to use the inventory hostname for that, which is our fully qualified domain name. This is the name that's already in our hosts file, so for me it'll be gat-14's full domain name. So that goes here.
And then we set the manager. All right, so that's done, and now we need to add a new destination that uses that plugin. So we go down to the end of the destinations and we'll add this new one in. Okay, so here are the important parts. We've made a new destination called pulsar, using the Pulsar runner. We've set the default file action to remote_transfer; in other words, we want Pulsar to handle the file transfers. And for dependency resolution, we want Pulsar to handle that as well, which means that Pulsar will use Conda if it needs to. We need to tell Galaxy where the job directory is, which is /mnt/pulsar/files/staging, and any persisted data will get put into /mnt/pulsar/files/persisted_data. We're not going to get Pulsar to do the metadata; all the metadata updates will be done by Galaxy. We will need to rewrite the parameters so that the Pulsar paths are used instead of the Galaxy paths in the command lines. And then we're telling Pulsar here to use the curl transport for doing file transfers. Okay. The last thing we need to do is tell Galaxy which tools to send to Pulsar, and we're going to send BWA and BWA-MEM jobs to it. So within the tools section of the job conf file, down here, and you probably already have this, we're just going to add in these two lines, or in your case change those two lines if they're already there. So we're going to put the tool bwa there, and then we'll send bwa_mem there as well. This basically tells Galaxy that every time the tool bwa or bwa_mem is run, please use the destination pulsar. Oh, and one more thing: we need BWA installed, which I have done already at least. If you haven't already installed BWA, go to your Galaxy server, go to the admin section, click on Install and Uninstall, search for BWA up here and click on it, and you'll see an install button here.
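Putting the three job conf changes together, the additions look roughly like this. Treat it as a sketch: the password, paths and parameter values must match your own setup and the tutorial text:

```xml
<!-- plugins section: the new Pulsar MQ runner -->
<plugin id="pulsar_runner" type="runner"
        load="galaxy.jobs.runners.pulsar:PulsarMQJobRunner">
    <param id="amqp_url">pyamqp://galaxy_au:LONG_PASSWORD@localhost:5671//pulsar/galaxy_au?ssl=1</param>
    <param id="galaxy_url">https://{{ inventory_hostname }}</param>
    <param id="amqp_acknowledge">True</param>
    <param id="amqp_ack_republish_time">1200</param>
    <param id="manager">_default_</param>
</plugin>

<!-- destinations section: a destination that uses the runner -->
<destination id="pulsar" runner="pulsar_runner">
    <param id="default_file_action">remote_transfer</param>
    <param id="dependency_resolution">remote</param>
    <param id="jobs_directory">/mnt/pulsar/files/staging</param>
    <param id="persistence_directory">/mnt/pulsar/files/persisted_data</param>
    <param id="remote_metadata">False</param>
    <param id="rewrite_parameters">True</param>
    <param id="transport">curl</param>
</destination>

<!-- tools section: send BWA and BWA-MEM to Pulsar -->
<tool id="bwa" destination="pulsar"/>
<tool id="bwa_mem" destination="pulsar"/>
```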
If you haven't installed it, do that now; I have already installed it. Now we need to run the Galaxy playbook to update the job conf and restart Galaxy. So let's do that: ansible-playbook galaxy.yml -u ubuntu. You can see here it's changed the job conf file, and so it will restart Galaxy for us. Okay, so that's all completed. Now, we're still looking at the Pulsar log here; let's also follow the Galaxy log. We'll do that with journalctl. So we've got that running, and now whenever we do something on the Galaxy system, we'll be able to see what's going on here. All right, so how do we go about testing this? We want to go back to our Galaxy, upload some files, run BWA-MEM and see what happens. We'll keep our fingers crossed while we're doing it. All right, so we want to grab these two FASTQ files from Zenodo. So: upload, paste/fetch data, paste those in, and we'll set the type to fastqsanger. You can see here now the uploads are running in our log, using the Slurm runner. That's done. We'll have a look at one of them; yep, that looks like a FASTQ file to me. All right, the next thing it says to do is to map with BWA-MEM against E. coli. Let's do that. So we'll go to mapping, BWA-MEM, we'll change the reference to Escherichia coli, paired-end, mutant R1 and mutant R2. All right, and that's it. Let's watch what happens when we press execute. And you can see here, it's called out to AMQP and it's published the job setup message. So now we'll go over to here, and you can see that Pulsar has picked up the fact that there's a new job, and it's also realised that it doesn't have BWA installed. So here it is, it's creating the BWA environment and installing samtools as well. All right, so hopefully when that's finished, it will run our job. It's still installing... there we go, it's installed all that stuff.
And look here now, our job's running, because Pulsar sent the message to say that we're actually running the job, and the state changed to running. Hopefully it won't take too long to run. That's the Galaxy log I'm looking at here, and the Pulsar log there. And if you want to see what's going on on the Pulsar machine, I can Ctrl-C out of this and type top, and you can see here we're running... oh, it looks like it's finished. Oh no, it's still going; samtools is running now. That's cool. Let's look at the log again, and it's publishing a new status update: yeah, the job's complete. So it's published that, and we've got a green dataset here. And if we look at the Galaxy log, you can see it's got a message back here saying the job's complete, and here we go. And if we look at this file, you can see that it looks like a SAM file, but it's actually being decompressed on the fly. So there you go. Nice one. That worked. Okay, so you might think, yeah, big deal. But basically what we've done is told Galaxy to run a job on a totally different computer, without a shared file system or anything like that. We've told Galaxy to send a job off to a remote computer, along with the input files and the job metadata. The remote computer has then realised that it needs to install some tools, so it's done that, then it's run the job for Galaxy and sent the results back to Galaxy again. And it's done this all over the internet, which is actually a pretty cool thing. Okay, so I want to talk a little bit about Pulsar in production before we finish today. For Galaxy Australia, we currently run about four or five Pulsar servers, and we're adding more all the time. For each new Pulsar server, we need a new vhost and a different user for that vhost. And then in the job conf, we need a new job runner with a connection string, and a new destination, for each Pulsar server.
In fact, for a couple of my servers, I have four or five different destinations with different settings. In Europe, they do very similar things to what we do in Australia, except their setup is a bit more extensive. They actually even run a Pulsar server in Melbourne, down here in Australia, just to show that we can do this across the globe. And there's a whole bunch of documentation for the European setup, if you're interested, by following this link. All right, that is the conclusion of this tutorial. I hope you got something out of it, and I hope you enjoyed it. If you could, please fill in the feedback form at the bottom of this tutorial to help us improve our content, to tell us what you think was not very good or not very clear, or, if you think it was awesome, please let us know that as well. And if you use this tutorial anywhere and you wish to cite it, here are the citations. Thank you very much everyone for your time, and I hope you enjoy the rest of the course. Thank you and goodbye.