Hi there, my name is Hugo Jiménez, and today I want to present this project developed here at the Barcelona Supercomputing Center. Okay, just a second to put up the slides. So, this is a Galaxy custom runner that uses a high-performance computing backend to massively process SARS-CoV-2 samples data. Just to make it clear, this project is still in active development.

Okay, so right now we have an infrastructure which consists of a small set of workers: a Slurm cluster with 8 cores and 20 gigabytes of RAM for each worker. Both the workers and the Galaxy server share a common GPFS file system. This setup is fine for regular tasks, but not for processing, for instance, the non-stop increasing amount of SARS-CoV-2 samples data. For that purpose, we thought of leveraging the supercomputer MareNostrum, using it as a backend for the Galaxy server. However, one of the conditions for Galaxy to use a computational resource as a backend is to share a common file system, and due to security measures, this is not possible on MareNostrum. On top of that, there is no internet connection on the worker nodes. For these reasons, we thought of developing our own custom runner.

Unlike other runners available in the Galaxy project, this custom runner makes it possible for the backend to have a different file system, since it synchronizes back and forth all inputs, outputs, and dependencies for the issued jobs. But of course, the runner comes with some little constraints. For the time being, the only job scheduler supported is Slurm; however, adding support for other job schedulers is feasible. The authentication is based on SSH key exchange, and MareNostrum requires a dedicated user to log in, which means a user for each of the Galaxy server's users. This applies both to the login part as well as to the job queuing system.

So, just to let you know how the runner works, I'll quickly go over the main steps. The first step is the key exchange.
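As a rough illustration of that first-use key exchange, a runner along these lines could prepare one key pair per Galaxy user; the helper name, key path, and the `mn1.bsc.es` login host below are illustrative assumptions, not the runner's actual layout.

```python
"""Minimal sketch of the first-use key exchange (illustrative, not the
actual runner code). Assumes OpenSSH-style tooling on the Galaxy host."""
from pathlib import Path


def key_exchange_commands(galaxy_user: str, keys_dir: str = "~/.ssh") -> list:
    # One dedicated key pair per Galaxy user, since MareNostrum requires
    # a dedicated account for each of them.
    key_path = Path(keys_dir).expanduser() / f"id_rsa_{galaxy_user}"
    return [
        # 1. The runner generates the pair (no passphrase, so it can log
        #    in unattended) and deposits it in the user's directory.
        f"ssh-keygen -t rsa -b 4096 -N '' -f {key_path}",
        # 2. The user then copies the public key over to their own
        #    MareNostrum account (hostname here is a placeholder).
        f"ssh-copy-id -i {key_path}.pub {galaxy_user}@mn1.bsc.es",
    ]
```

The point of returning plain commands is that step 2 is deliberately left to the user: the runner cannot log in on their behalf until the public key is in place.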
If this is the first time the user issues a job, the runner creates a key pair and deposits it in the user's directory. Then the user has to copy the public key to their MareNostrum account. Once authenticated, the runner sends the inputs, as well as the software dependencies if necessary, to a user-established folder on MareNostrum. After sending all the necessary data, the runner schedules the job and starts monitoring it. And finally, if the job ends successfully, the runner retrieves the data and copies it back to the Galaxy-established location for the dataset files. And that's it.

So, I want to thank the people at the Barcelona Supercomputing Center — Salvador, Josep Lluís, Laia, Jose Maria Jorab, the folks at IMB, Alexi Respin — and the Instituto de Salud Carlos III. And of course, Björn Grüning and Wolfgang Maier and other members of the Galaxy Europe team, for helping me to integrate so much and learn about the Galaxy project. And of course, I just want to thank you for your attention.
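To make the schedule-and-monitor step from the walkthrough above a bit more concrete, here is a minimal sketch of the decision the runner has to make on each poll. The state names are Slurm's (as reported by `squeue`/`sacct`); the action labels and the function itself are my own illustrative assumptions, not the runner's real API.

```python
"""Sketch of the monitoring loop's core decision (illustrative only).
Maps a Slurm job state to what the runner should do next."""

# Terminal failure states after which polling can stop.
_FAILED_STATES = {"FAILED", "CANCELLED", "TIMEOUT", "NODE_FAIL", "OUT_OF_MEMORY"}


def next_action(slurm_state: str) -> str:
    state = slurm_state.strip().upper()
    if state in {"PENDING", "CONFIGURING", "RUNNING", "COMPLETING"}:
        # Job not finished yet: keep polling the scheduler.
        return "keep_polling"
    if state == "COMPLETED":
        # Job ended successfully: copy the outputs back to the
        # Galaxy-established location for the dataset files.
        return "retrieve_outputs"
    if state in _FAILED_STATES:
        return "mark_failed"
    return "unknown"
```

Only the `COMPLETED` branch triggers the copy-back step, which matches the "if the job ends successfully" condition described above.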