Welcome everybody. Today we are here with this webinar to talk about Galaxy resources for administrators and infrastructure providers. I'm Gianmauro Cuccuru, and with me today there is also Lucille. Okay, what are the goals of this webinar? We would like to give you some hints and suggestions on where you can find the resources, the presentations, and the other materials to build your own Galaxy instance for a multi-user production environment. What does "multi-user production environment" mean? It means that this webinar is not intended, for example, for developers who would like to test a tool on their notebooks. It is instead dedicated to anyone who needs to build a Galaxy server that will be used by others, from your lab or from some other kind of public audience. And why would you want to do this? There are a couple of reasons: because you have some security requirements from your institution, for example you are not allowed to move your data outside your institute; or you have some specific compute or storage requirements that are not satisfied by any public Galaxy instance available right now; or you want to customize your instance in a specific way, with particular tools, data libraries, datasets, or anything else. In any case, for all these reasons and others, we are here to try to give you all the suggestions and all the help that we can so you can reach your goal. A bit about me: I have been a member of the European Galaxy team since 2011. We are based in Freiburg, in Germany. I set up my first public Galaxy instance in 2011, and I met the Galaxy community in Chicago at my first GCC in 2012. Today I am the system administrator of usegalaxy.eu, which is one of the Galaxy servers of the usegalaxy.* network. We have today 30,000 registered users and more or less 3,000 active users per month.
This is a machine made of several boxes with a lot of cores and a lot of RAM. We have a computing cluster with around 11,000 cores, 60 terabytes of RAM, and more or less three petabytes of storage distributed across several file servers, and also 16 GPUs. We manage this machine using several tools like Ansible, Jenkins, Terraform, GitHub, and others. And this is my full-time job: I work all day to maintain, update, and fix all the issues on usegalaxy.eu. There are also contributions from the other team members, and the users can also contribute to usegalaxy.eu in some way, because anyone who finds something wrong in our configuration or our setup can simply open a pull request on our setup repository; after we review that pull request, if it's fine for us, we merge it, and in that way everyone can contribute. As I told you, today with me there is Lucille. Lucille, do you want to introduce yourself? Yeah, so hi, I'm Lucille, and I'm a postdoc at EPFL, in Lausanne, Switzerland. I met the Galaxy community for the first time in 2019, and I fell in love with the community. I am responsible for a really small private Galaxy instance. Of course, if you compare it to Mauro's, it's really, really small. We have just 10 users; it's just for my lab, so we know each other very well, and we can go next door to say "hey, you're taking my space". We have 32 CPUs, 380 gigabytes of RAM, and 16 terabytes of storage. I would say that it takes me roughly 10% of my time, so about half a day a week; of course it depends on the period. I was doing it mainly manually, with the graphical interface, and then I decided that I should probably move to a more automated way to deal with it. So I attended the admin training in 2020, and in March this year I decided to reinstall my instance from scratch: I literally erased the workstation and started over, keeping all the old histories but with a fresh new Galaxy.
So I will be able to answer some of your questions and tell you about my experience. Before I really start with the presentation, I have another introductory slide about some links that appear in the slides. I would like to summarize them here, so I don't need to read out the full URL every time. The first link is the Galaxy Community Hub, a website where you can find all the details and all the links regarding anything about the Galaxy community. You probably won't find every specific detail of a Galaxy feature there, but for sure you will find a link pointing to the right web page. Then we have the Galaxy documentation website, which is the documentation that ships with the Galaxy source code and is offered through this website with a really nice Read the Docs-like interface. Then we also have the Galaxy Training Network website, where the Galaxy community collects all the training material produced by the community. And we will also use some links from the GitHub repositories of the Galaxy project, and we will show you some real examples from the usegalaxy.eu GitHub repositories. Okay, so here in the Community Hub, the first page that I suggest you visit if you are starting to think about how to create your own Galaxy server is the directory that lists all the platforms where you can use or deploy your own Galaxy server. There is a list with all the publicly available Galaxy servers, and you can look in this directory to see whether there is a system administrator close to you, and try to contact them to help you during your journey. There is also another table with some academic cloud solutions and some commercial cloud solutions where you can deploy your Galaxy server.
Regarding the academic ones, I don't know all of them, but for sure I know that GenAP in Canada and Laniakea in Italy can offer you a way to have your own private instance in their cloud, which you can manage and use with your group or your lab in an easy way. Otherwise, if you want to go completely virtual, there are also some Docker images and some virtual machine images that you can use to instantiate a virtual machine with a Galaxy server ready to go. On the same page there is another table that can give you some guidelines on how to choose the proper platform for you. There are some options, and you can check whether your idea fits the suggestions in this table or not, and perhaps change your mind; in any case, I think it's useful to have a look at this web page. Okay, so now you know where you want to deploy your Galaxy, and next you need to get the Galaxy code. There is another page in the Community Hub that explains how to do this in an easy way: you just need to clone a Git repository on your server and start a bash script, and in a couple of minutes you have a Galaxy server up and running, ready to be used. But this server is still using some default values that are okay if you just want to try Galaxy, but are not fine if you want to have a production environment.
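To give an idea of what moving away from the defaults means in practice, here is a hypothetical fragment of `config/galaxy.yml`. The option names are real Galaxy settings, but the values are placeholders you would adapt to your own server:

```yaml
# config/galaxy.yml: a few options commonly changed for production
# (values are placeholders, not recommendations)
galaxy:
  # move off the default SQLite database
  database_connection: postgresql://galaxy:secret@localhost/galaxy
  # comma-separated list of users who get the Admin tab
  admin_users: admin@example.org
  # random string used to encode ids in URLs; set once and never change it
  id_secret: change-me-to-something-random
  # where dataset files are written
  file_path: /data/galaxy/datasets
```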
So you need to customize your instance, and for sure you need to have a look at the Galaxy configuration files. Here on the documentation website you can find an admin section, and within it a Galaxy configuration subsection, where all the Galaxy configuration files are described. The most commonly modified file is galaxy.yml, but you can also find all the other files there; some are YAML files, others are XML files. On this page there is also a configuration options reference that shows all the options each file can send to your server and setup, with a short description of each, which helps a lot when you are first looking at these files. Okay, another really useful website is the Galaxy Training Network website, where all the training materials produced by the worldwide Galaxy community are collected. As you can see, it is divided into different sections: there is a section for scientists, a section for developers and administrators, and a section for contributors and instructors. Obviously, today we are interested in the server administration section. There you can find a subsection with some core materials that you really need to check, and then another, let's call it optional, subsection with other material on different topics, depending on what you want to do with your Galaxy server. You don't need to review all this material, but if you are interested in a specific topic you can check the materials that are available there. And the first tutorial that I would like to show you, to introduce a bit, is the one that explains how to install Galaxy using Ansible. The Galaxy community decided at a certain point to use Ansible for its deployment recipes, and so all these tutorials use Ansible.
If you are not familiar with Ansible, there is also another tutorial that introduces what Ansible is and how to use it. Ansible is essentially a configuration management tool, software that allows you to configure your server in a proper way: you describe the desired state and just ask Ansible to execute a playbook. In this tutorial you will be guided step by step to create your own playbook to install Galaxy, but also the other software that Galaxy needs, like PostgreSQL, systemd units, or a proxy server like nginx. Step by step, at the end you will have your playbook ready, with a production-ready Galaxy server ready to go. One of the first real examples that I want to show you is the playbook that we are using to deploy usegalaxy.eu on our infrastructure. It's available through our GitHub organization, in this repository. Here we have all the details of the playbook that we run every day, every night, to redeploy usegalaxy.eu. If we need to modify any aspect of the Galaxy configuration, we come here and modify this playbook, or any of the variable files, tasks, or roles that we are using with it. Then we just wait for Jenkins to redeploy the playbook overnight, and the next morning the updates are up and running on usegalaxy.eu. Okay, so after following the tutorial and using the playbook, you have Galaxy installed in a production environment. And I think you should also review this page, which describes in very fine detail what a production environment means: what you really need to know to have your Galaxy server configured in a proper way to sustain a lot of users.
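The playbook that the tutorial builds up has roughly this shape. This is only a sketch: the role names are ones published on Ansible Galaxy by the Galaxy project, and the variable values are placeholders:

```yaml
# galaxy.yml playbook, schematic, in the style of the GTN admin tutorial
- hosts: galaxyservers
  become: true
  vars:
    galaxy_root: /srv/galaxy
    galaxy_config:
      galaxy:
        database_connection: postgresql://galaxy:secret@localhost/galaxy
        admin_users: admin@example.org
  roles:
    - galaxyproject.postgresql   # the database server
    - galaxyproject.galaxy       # clones, configures, and builds Galaxy itself
    - galaxyproject.nginx        # reverse proxy in front of Galaxy
```

Running `ansible-playbook` against your inventory converges the server to that description, and rerunning it applies any change you make to the variables.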
For example, the production environment page describes how to move from the default SQLite database to a PostgreSQL server, or how to put a proxy server in front of your Galaxy server. And, really the most important aspect from a production point of view: you should move away from running all your tools locally and instead start to use a cluster, or at least a job scheduler. This is exactly what you can find on this page of the documentation website: how to connect your Galaxy to a compute cluster. You can use a lot of different job schedulers, like Torque/PBS, or HTCondor, the one used by usegalaxy.eu, or Slurm, used by usegalaxy.org and Galaxy Australia, and several others. We also have another tutorial that describes how to connect a cluster to your Galaxy server; in this case, the tutorial uses Slurm. What is interesting about this tutorial is that you are updating the playbook created in the previous tutorial. So you started with the first tutorial to create your playbook, and then, following the other tutorials, you simply add some new tasks and new roles to the same playbook to improve and update your Galaxy setup. And okay, now that you have your Galaxy server running and a cluster where you can submit your jobs, you want to create a mapping that tells Galaxy how it should map jobs to a specific destination. You want to specify, for example, for a specific tool, how many cores should be used, how much RAM should be reserved on the worker nodes, and other details of this kind. You can do it easily; Galaxy has several ways to do this. You can do it in a static way: that means you simply modify a configuration file where you put all these details, such as "I want to run this tool with this number of cores, and it should be executed on this destination". That is a static way to describe this kind of mapping.
But you can also do it in a dynamic way. That means all the details are evaluated at runtime: for example, because you want to use a different value for a certain group of users, or if your input data has a certain size, or any other rule of this kind. You can do it easily using the Dynamic Tool Destinations feature that Galaxy offers: a kind of YAML file that you prepare with all the details of your jobs and destinations. Or you can also use a Python function: you write your Python code and Galaxy will evaluate that code before sending your jobs to the scheduler. And again, we have a tutorial on the GTN website where you can find an example of each of these ways to create your own mapping. As a real example, here we have the job dispatcher that we are using on usegalaxy.eu. It's Python code, available through our GitHub repository, and every job that is executed on usegalaxy.eu is managed by Sorting Hat, which is the name of this tool. It uses two YAML files. One YAML file describes the destinations: here you can see we have a label, and general information that describes whether this destination is remote or not and what the limits on that destination are, for example a maximum of 16 cores and 31 gigabytes of RAM. You can specify environment variables if you need them, and there are specific parameters for the scheduler. And we have a lot of different destinations. Then there is another YAML file specific to the tools. Here you can see that for this fetch tool we specify that we want to execute it with one core, 0.3 gigabytes of RAM, no GPUs; we want to use HTCondor; we set the temp environment variables; and we add some tool-specific parameters to the job scheduler configuration. And so on and so forth for all the tools that need a specific configuration.
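To give a feel for the Python-function route: a real dynamic rule receives Galaxy's internal objects (app, tool, job) and returns a destination, but the decision logic at its core is ordinary Python. Here is a self-contained sketch of such logic; the destination names and size thresholds are invented for illustration, not taken from any real setup:

```python
# Sketch of the decision logic inside a dynamic job destination rule.
# In a real Galaxy rule the function receives `app`, `tool`, `job`, etc.
# and returns a destination id or JobDestination object; here the inputs
# are plain values so the example stands alone.
def pick_destination(input_size_bytes, user_is_trusted=False):
    """Return (destination_id, cores), scaled by input size."""
    if input_size_bytes < 100 * 1024**2:   # under 100 MB: small queue
        return ("condor_small", 1)
    if input_size_bytes < 10 * 1024**3:    # under 10 GB: medium queue
        return ("condor_medium", 4)
    # large inputs: big queue, more cores for trusted users
    return ("condor_large", 16 if user_is_trusted else 8)

print(pick_destination(50 * 1024**2))        # -> ('condor_small', 1)
print(pick_destination(20 * 1024**3, True))  # -> ('condor_large', 16)
```

Galaxy evaluates a rule like this for every job before submission, so the same tool can land on different destinations depending on who ran it and with what data.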
Once again, Sorting Hat and its YAML files are available through our GitHub repository. We are using HTCondor on usegalaxy.eu, but I think you can easily adapt it to a different job scheduler. And now it's your turn. So, most of the requests that you will receive from your users may be: "Oh, I saw this workflow and I would like to run it on my instance, how can I do it?", or "I read in this publication that I would need this tool, in this specific version, can you put it in Galaxy?". So there are multiple ways to deal with tool management. The easiest way, and the first one I was using, is the admin web page: on your Galaxy instance you have an admin tab, and there you can deal with tool management. But this is catastrophic for reproducibility, because then you don't know what you installed and what you did not install, and it's also very tedious to do. So I really encourage you to add some more automation. I provide the link on how to use this interface, but I would recommend a more automated way. The other way is to use Ephemeris. Mauro, can you switch the slides please? So Ephemeris is a Python library and a bunch of scripts, to get the list of tools from a workflow file, for example, but also to interact with your Galaxy instance and install tools. I provide the link to the Ephemeris documentation, but there is also Galaxy training material on how to use Ephemeris to manage your tool installation. If you want even more automation, there is a playbook available, so you can use Ansible to run Ephemeris for you. What you need is to create a YAML file with the list of all the tools you want to install, and if you want different versions of a tool you can specify all the versions. I provide here the link to the GitHub repository with all the tools that are installed on usegalaxy.eu.
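Such a tool-list YAML is quite small. A hypothetical example (the repository and owner names are real-looking illustrations, and the revision hash is invented):

```yaml
# tools.yml, consumed by Ephemeris, e.g.:
#   shed-tools install -t tools.yml -g https://your.galaxy -a <admin API key>
tools:
  - name: bwa
    owner: devteam
    tool_panel_section_label: Mapping
    revisions:          # pin one or more changeset revisions, or omit for latest
      - 01ac0a5fedc3    # placeholder hash
  - name: fastqc
    owner: devteam
    tool_panel_section_label: Quality Control
```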
That repository is a bunch of YAML files for the different topics of biology, and it is a huge example, I have to say. But it is a good one, because they use GitHub: this way, if a user wants to add a new tool or change the version of a tool, they can just open a pull request, and the pull requests are reviewed. And because usegalaxy.eu is highly automated, they use Jenkins to install the tools every week based on this list of tools. There is also training material that explains how to use Jenkins to automate most of the tasks you need, in particular tool management. And the tools, in fact, are installed thanks to the Tool Shed, which is a repository of all the tools. There are different levels to a tool. The first one is the wrapper: this is the web page that you will see, with the form to set all the parameters and inputs that you want. But behind this there is the whole dependency question. A long time ago, there were Tool Shed entries that were in fact the dependencies for other tools: if you look at all the tools, you can see that there are some repositories named "package_something", and these were just dependencies for other tools. Nowadays Galaxy uses Conda to resolve the dependencies, which is very nice, because you can pin a specific version for each tool. When you want to install a new tool, this is super great. If you're not familiar with Conda and you want to administer a Galaxy, I think you should read the slides in the training material that describe how Conda is used in the Galaxy framework. And I would like to share my experience on this: when you want to install new tools, it works super nicely. However, when you want to install all the tools, sometimes you will face some issues. I told you that in March I installed my instance from scratch, and it took me one day with Ansible to install Galaxy.
And then it took me two days to solve the dependencies of all the tools. Because, for example, you want to install a mapper that requires another piece of software, but that software has of course been upgraded, and nobody noticed that the upgrade would break compatibility. So you need to go into the Conda environment manually and fix some stuff. You may face this type of issue if you administer a Galaxy, so it's good to know that behind the tools there is Conda. Another kind of data that you may need to update regularly is the reference data. I will talk about genomics, because this is the field I know best, but I'm quite sure there are also databases needed in other domains. In genomics you need to map your data to reference genomes: for example, you want to work on mouse or on chicken, you have different versions of each genome, and the mappers, the different tools, require you to build an index. Building an index takes very long, so you build it once, store it, and then tell Galaxy "this is here", so each time Galaxy needs it, it can go there. There are multiple ways to deal with this local data. First of all, you need to use data managers, which are tools that build the reference data. As with tool management, you can run them from the web page of your Galaxy instance, but again I would not recommend it; Ephemeris is again your solution. Similarly to tool management, you can use Ephemeris to deal with reference data. Then there is an idea that came from Nate, from usegalaxy.org, which is super great, because building the indices is very long: when I did my Galaxy installation, it took me three days to build all the references. And it's a bit silly, because most people use the same genomes, so most people need the same reference data.
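For reference, the Ephemeris route for data managers is driven by a YAML file passed to its `run-data-managers` command. A rough sketch, where the data manager id is a placeholder short name (real files use the full Tool Shed id) and the genome builds are just examples:

```yaml
# genomes.yml, used roughly as:
#   run-data-managers --config genomes.yml -g https://your.galaxy -a <admin API key>
data_managers:
  # fetch the FASTA for each listed genome build
  - id: data_manager_fetch_genome_all_fasta_dbkey   # placeholder id
    params:
      - 'dbkey_source|dbkey': '{{ item }}'
    items:
      - mm10
      - galGal6
```

The same file can then list the index-building data managers (BWA, HISAT2, and so on), reusing the same `items` list so every mapper gets an index for every genome.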
The idea of Nate was that, in fact, we should provide the references for the genomes that are commonly used by the community, and then share them, read-only, with anyone who wants to use them. I shared with you the link to the interview with Nate where he explains how he came to this idea. His idea is to use CVMFS, a read-only file system developed at CERN. The idea is that you put the data in one place, then it is replicated to Stratum 1 servers at big centres all over the world, and each Galaxy instance can connect to a Stratum 1 and get the data; if one is failing, you can use another one. There is a description of how it works on galaxyproject.org, and there is also Galaxy training material that shows you how to use it. Linked to this system, which is really great, came a community effort called the IDC, which would be in charge of updating the reference genomes, because the genomes are updated regularly and it's important to keep them up to date so that people keep using CVMFS. Typically, the people in charge of the IDC have been caught up by a lot of other things, and in fact it has not been maintained. That's why, for example, I did not use it: I needed the chicken genome and the zebrafish genome, and these were too old a version in the CVMFS. But I'm really convinced that we just need a kick to put it back on track and update this CVMFS resource, which is really great and would be the solution for a lot of Galaxy administrators. So, where are we? You now have your Galaxy server up and running, and your cluster; you have tools, you have reference data, you have started to run a lot of jobs, and you are starting to collect a lot of data. So your next issue will be: how can I expand my storage? I'm sure that if you have not chosen the right size for your storage system, at a certain point you will need to expand it.
Galaxy has a built-in data virtualization layer called the Galaxy object store. It's a kind of abstraction layer that decouples the Galaxy business logic from the details of the media you are using, and in this way it makes it possible to store data on many different kinds of media, from local storage, a single disk with a single file system, to cloud-based solutions. You can simply plug additional media into your configuration without needing to change anything in Galaxy except a configuration file. You can set up the object store to use a single backend, or you can create nested relations between multiple backends using the different data distribution methods, the hierarchical one or the distributed one. They are quite similar from the point of view of where your data is read from, but they are different from the point of view of where your data is written: in the hierarchical method your data is always written to the first available backend, whereas in the distributed method a pseudo-randomly selected, weighted criterion chooses which backend will be used. Okay, and this is the tutorial related to the object store, which will help you to have a Galaxy instance using multiple storage locations, with both methods, the hierarchical one and the distributed one. And again, you can simply add the tasks that you need for your setup to your playbook. Here is an example from usegalaxy.eu: this is the object store configuration XML file of usegalaxy.eu, where we have 11 different file servers. Over the years we collected a lot of storage systems and simply added them to this file; this is the newest file server. And we are using the distributed method.
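A minimal `object_store_conf.xml` in that distributed style, cut down to two backends (the ids and paths here are invented for illustration), looks roughly like this:

```xml
<object_store type="distributed">
    <backends>
        <!-- weight 0: still readable, but no new data is written here -->
        <backend id="data01" type="disk" weight="0">
            <files_dir path="/data/dnb01/galaxy_db/files"/>
        </backend>
        <!-- weight 1: the backend currently receiving new datasets -->
        <backend id="data02" type="disk" weight="1">
            <files_dir path="/data/dnb02/galaxy_db/files"/>
        </backend>
    </backends>
</object_store>
```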
And as you can see, all these file servers, all these backends, have this option, the weight option, and all of them have weight zero except the one that we are currently using to write data, which has the value one. In this way, the distributed method allows usegalaxy.eu to read data from all the file servers, but only the current one, the one with free space at the moment, is used to write new data. And again, this file is available through our GitHub repository, so it is there for your reference if you need to check it. Okay, another aspect that could be useful for a Galaxy administrator: it is also possible to run jobs on remote sites. So you have your Galaxy server and your computing cluster locally, but you also have some computational resources at a remote site. You can use another project of the Galaxy community called Pulsar. Pulsar is a Python server application that allows a Galaxy instance to execute jobs on a remote system, even a Windows system, without the need to have a shared file system between the two sites. Galaxy sends to the Pulsar site all the input files, the scripts, the configuration, all the details needed to execute the job. Then Pulsar can run the job there, or hand it to the local scheduler and ask it to execute the job, and when the job is finished it transfers all the results back to the Galaxy server. And this is completely transparent from the user's point of view. There is a tutorial describing how to use Pulsar to run jobs on remote resources, and because there isn't any file system shared between the two sites, the tutorial shows you how to use a RabbitMQ message queue server to allow Galaxy and Pulsar to exchange messages and all the details they need. We are using Pulsar in production: usegalaxy.eu is using Pulsar right now. We have a project called the Pulsar Network.
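Schematically, the Galaxy side of such a setup is a runner plugin plus a destination in the job configuration. A sketch, where the AMQP URL, ids, and hostnames are placeholders:

```xml
<plugins>
    <!-- MQ-based Pulsar runner: Galaxy and Pulsar only share the message queue -->
    <plugin id="pulsar_runner" type="runner"
            load="galaxy.jobs.runners.pulsar:PulsarMQJobRunner">
        <param id="amqp_url">pyamqp://galaxy:secret@mq.example.org:5672//</param>
    </plugin>
</plugins>
<destinations>
    <destination id="remote_cluster" runner="pulsar_runner">
        <!-- stage inputs/outputs through the MQ transport instead of a shared FS -->
        <param id="default_file_action">remote_transfer</param>
    </destination>
</destinations>
```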
The Pulsar Network is a wide job execution system distributed across several European data centres that allows Galaxy instances to scale their computing power over heterogeneous resources. You can see here on the right all the partners that are providing their resources to us; all together, we installed Pulsar sites in the remote data centres, and nowadays some tools on usegalaxy.eu are running in those data centres. There is a documentation website for the Pulsar Network project where you can find all the details on how to install and configure a Pulsar Network endpoint in an OpenStack cloud infrastructure and how to connect it to usegalaxy.eu, if you want to share your resources with us. But the same Pulsar endpoint can be associated with any Galaxy instance, so you can easily use it in your own infrastructure too. Here is an example graph showing one of our activities, where we executed around 30,000 GPU jobs in three months, distributing them across two different remote GPU clusters, one in the UK and one in Germany. We used Pulsar for that, and it was quite successful. And this graph gives me the cue to introduce the next topic: monitoring. Because now you have your infrastructure ready and everything is working, but you need to know what's going on inside your infrastructure, so you need a monitoring system. The monitoring stack used by the Galaxy community is made of several tools: Grafana, which is an interactive visualization web app; InfluxDB, which is a time-series database; Telegraf, which is a plugin-driven server agent for collecting and reporting metrics; and gxadmin, which is a command-line utility for Galaxy administrators. The data flow is that Galaxy produces data, gxadmin extracts those data from the Galaxy database, and Telegraf consumes and buffers them.
Telegraf then feeds the data into the InfluxDB database, which stores it, and finally the Grafana web app is used to visualize the collected data. There is a tutorial that explains how to set up InfluxDB, Telegraf, and Grafana, and how to add all the details to your playbook, or probably a new playbook, to create your monitoring infrastructure. Here we have some panels from the front dashboard of usegalaxy.eu, which is available on this website. In the first graph you can see the number of jobs that are running on HTCondor or waiting in the queue. The second one shows the load of the Galaxy servers, and the third one the number of jobs waiting in the Galaxy queue to be sent to the job scheduler. We also have a different dashboard showing some disk metrics for the several servers of the usegalaxy.eu infrastructure, and you can easily see that one server, at the time of this screenshot, was having a really, really bad time. In case you are curious to know what's going on at usegalaxy.eu at any moment, you can simply click on this link and you will have all the details about the jobs, how many users are on the instance, and the situation of all the servers that are part of our infrastructure. Okay. The last topic of this webinar is user authentication. Galaxy supports anonymous users, that is, users who can use your Galaxy instance without providing any authentication details.
But using Galaxy in this way is not convenient, because you lose a lot of the benefits of Galaxy's features: you cannot save your data, your histories, your workflows, and other things like that. So it's better from the user's point of view to be able to create an account on your Galaxy server and store all the activity details under that account, and Galaxy offers different methods to create a user account. You can use the built-in mechanism of Galaxy, which stores local usernames and passwords in the Galaxy database. Or you can leverage the OpenID Connect protocol, so that users can log in to Galaxy using an identity that has been created on another authentication and authorization infrastructure, like the one that ELIXIR can offer you. This is exactly what is described on this page of the Galaxy Community Hub: here you can find all the details on how to register your server with the ELIXIR infrastructure and how to modify the Galaxy configuration to use it. Or, we also have this tutorial that describes how Galaxy can delegate authentication to an external authentication system, like an LDAP server, a PAM module, or a proxy server like nginx or Apache. With all these methods available, you can choose the one that best fits your community or your group, simply follow the instructions in these links, and it will be really easy for you to set up your Galaxy server in the proper way for your users. Okay, so if you have reviewed all the materials that we provided and you still have a question, or you struggle to find a solution to your issue, you can use the pan-galactic search web form available in the Community Hub at this address: it is a Google custom search engine that allows you to search across all the Galaxy websites. Or you can use one of the Gitter channels that we have.
We have the galaxy-admins channel, where you can find a lot of Galaxy administrators; they are really friendly, so please feel free to ask anything you need there. We also have a galaxy-dev channel that is mostly dedicated to questions about the Galaxy code or Galaxy features. If your question doesn't fit in either of the two previous channels, you can use the general Galaxy channel, which is open to any kind of question related to Galaxy. And before GCC there is a training week that also has an admin track, so this could be the right moment to meet a lot of Galaxy administrators, for those who would like to be the next administrators. My last slide is about how to stay updated. Okay, you have your Galaxy server up and running, but you need to stay updated on what's going on in the Galaxy community. So for sure I strongly suggest you participate in the annual meeting, the Galaxy Community Conference, and in the admin training events that happen every year. On this page of the Community Hub you can find all the past GCC events with all the materials, the slides, the videos, all the PDFs, and here you can find the details of the admin training events. Also take a look at the GTN website day by day, to see all the updates that are collected there. And there are also several Galaxy mailing lists; I would like to highlight the fact that there is one dedicated to public servers. We use it mainly to communicate security issues and concerns: any security problem is distributed through this mailing list, probably also through the Gitter channels, but since it also goes through the list, it could be a good idea to subscribe to that list. I think that's all from my side. Lucille, Clements, do you want to add something or correct me in any way? There's no way I'm correcting you guys, no way. Okay. Thank you, Mauro, and Lucille, that was great. We have had a whole bunch of questions.
Some of them have been answered by busybodies like myself and Björn already, but we have three outstanding. Let's see, I'm going to take the one from Christophe first, which is: how do Pulsar and Slurm live together? It's quite easy. I mean, I'm not a Slurm guy; I'm not using Slurm, for the moment at least, but in the Pulsar network we use it a lot. Because from the point of view of Pulsar, and also from the point of view of Galaxy, it doesn't matter if you are using Slurm or HTCondor. It's quite easy: it's just a matter of checking on the web page, yeah, here, just to check here which job scheduler you want to use, and what you need to change in the configuration. It's easy: from the point of view of Galaxy and Pulsar, it doesn't matter if you want to use Condor, Slurm, or any other job scheduler.

Thank you, Mauro. Yeah, let's see. It's approaching the end of the hour, and we still have a couple of minutes left, but I'm going to go over the hour; I'm going to keep asking questions until we're done or until you guys need to leave. But just to make sure I get everything done in the hour: thank you very much for presenting. Okay, I'll say this again at the end, because, you know, it's worth saying twice. So thank you for presenting, and thank you everyone for being here and asking great questions. The recording will be available on the website, hopefully later today. The slides are already available on the website. So thank you very much.

Okay, I'm going to move on to the next question, from Peter van... I always pronounce your name wrong... van Heusden, who says: usegalaxy.eu runs in a cloud environment, OpenStack I believe. Are you using a file-system-based object store, or are you using the OpenStack object storage, Swift? Yes, usegalaxy.eu is a hybrid machine from that point of view, because we are using real physical servers for the Galaxy instance, so Galaxy itself is running on real physical servers, while all the computing cluster is on the cloud.
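On Christophe's Pulsar-and-Slurm question above, the "just change it in the configuration" point can be made concrete with a hedged sketch of the `managers` section of Pulsar's `app.yml`. The manager type names (`queued_drmaa`, `queued_condor`) come from Pulsar's documentation, but the `native_specification` value is a made-up example; verify everything against the sample `app.yml` shipped with your Pulsar version.

```yaml
# app.yml — Pulsar job manager section (sketch, not a complete config)
managers:
  _default_:
    # Submit jobs through DRMAA. With the slurm-drmaa library installed this
    # talks to Slurm; with a different DRMAA library it could talk to another
    # scheduler — Galaxy itself doesn't need to know which one.
    type: queued_drmaa
    native_specification: "--time=24:00:00 --nodes=1"   # example Slurm options

  # Alternatively, an HTCondor-backed manager:
  condor:
    type: queued_condor
```

Galaxy then routes jobs to a Pulsar destination as usual, and swapping the scheduler underneath is a change confined to this section.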
So all the worker nodes are cloud machines, and as for the rest: no, at the moment, if I remember correctly, we are not using any OpenStack storage solution, except for the root file systems of the worker nodes. Those are distributed through OpenStack, but the worker nodes and the Galaxy servers are using the same shared storage, a really, really huge storage system. So OpenStack is used only to create the worker nodes of our computing cluster. I'm pretty satisfied with OpenStack, because it's quite easy to change the configuration of your computing cluster in an easy way: you just need to create a new image, deploy it into OpenStack, and run the script that recreates all the worker nodes. So we are happy with that.

And let's see, I think we are out of questions, because Björn answered another one. I think he did; maybe, maybe somebody answered it. Yes. So there are a whole bunch of answered questions in the Q&A tab; I'm really curious about those too. Let's see, so we are at the hour and we are out of questions, so that worked out really well. I'm going to do it anyway: thank you again for presenting. You're welcome. Can you show exactly when the next GCC will be? Thank you. Yes, thank you. So I'm not really awake yet, as Mauro has noticed. So the next GCC is coming right up: that's the Galaxy Community Conference. Training starts on June 28, I believe, and it runs for a week. It's asynchronous, it's online, and it's really cheap. And the early registration deadline for GCC is Monday... no, not Monday, sorry, June 1, which I think is Tuesday. Anyway, it's June 1, whatever day that is. So the first thing coming up is the June 1 registration deadline. We are still accepting poster and demo abstracts, and the deadline for that is June 14, I believe. And then June 25 is when all registration ends, so get registered by June 25 or be left out, and nobody wants that.
It's still pretty cheap even, you know, after June 1; it's a great deal. So the first thing, on June 28, runs for a week, and then we have a three-day meeting running, I believe, the sixth through the eighth, yeah, the sixth through the eighth of July. Right now we are reviewing talk submissions, and so we hope to have a list of accepted talks, or most of the accepted talks, posted sometime this week, so you can make an informed decision by next Tuesday, on the first. And then we finish with the two-day CoFest, July 9 and 10, where we all do collaborative work, and that is not just coding: our CoFests are about building the community more than they are about building code. So the whole point of that is to bring on new contributors, and that's new contributors for training, documentation, testing, best-practice workflows, you name it. So, thank you for that prompt. Let's see, I think we're done. Are we done? Lucille, anything? No, it's fine. Okay. Thank you so much for being here. Thanks, everyone, for participating in the call, and thank you all for this presentation. I think we have a lot of links in it. Yes. So go to the event page, which is on the Hub, which is where you registered for it. And I could... I don't know. Yeah, go there. Okay. Because that's where the slides are and that's where the video will be linked. Thanks, Christophe, for the question. Yeah, and Bay is on it. God bless you. Okay. Thanks, everyone. This webinar series is done; this is a great way to close it out, and we'll see you all, hopefully, in the fall. The fall, the northern fall. Okay, thanks everyone.