Thanks, David. Can you hear me? Is the sound okay? Good. useGalaxy.eu is the European Galaxy server, a public Galaxy instance open to everyone, and we like to think it serves the whole Galaxy community well. We provide free compute and storage: each user gets a 250 GB quota, and the server offers more than 2,500 tools, together with many reference genomes and other reference data. The infrastructure is heavily used: we have more than 33,000 registered users, who have run more than 23 million jobs so far, about 2.8 million of them in the last year, and we are storing around 2 petabytes of user data. The hardware lives in the de.NBI cloud and is almost entirely virtualized; the databases all run on PostgreSQL, and our CVMFS setup consists of Stratum 0 and Stratum 1 servers. The Stratum 0 server is the one offering the Singularity images to the whole CVMFS network. We have a Jenkins cluster for our continuous integration activities, and a RabbitMQ message queue server that acts as a bridge between Galaxy in Freiburg and the remote Pulsar sites of the Pulsar network, created in the data centers of our partners across Europe. Here are some specifications of the boxes we are using at the moment for our services. We use CentOS 8 as the operating system; the Galaxy server is a box with 40 cores, 256 gigabytes of RAM and an SSD drive. The database server is more or less the same, except that it has two terabytes of RAID 1 SSD storage, and the Jenkins server is similar to the others. useGalaxy.eu is also part of the usegalaxy.* network, a collaboration between public Galaxy instances across the globe to share a common set of tools, workflows and reference data.
There are three main instances: useGalaxy.eu is one of them, then there are useGalaxy.org in the US and Galaxy Australia, but the community is growing and other useGalaxy servers are available in France, Belgium, Estonia, Spain and Italy.

OK, today I would like to talk about our open infrastructure, that is, the way we manage our infrastructure. We have several repositories on the GitHub platform and a Jenkins cluster that takes care of all the continuous deployment activities needed to keep our infrastructure always updated and configured as we request. In the next slides I will show how we use this together with several tools such as Packer, cloud-init, Terraform and Ansible.

OK, here we have the first repository, infrastructure, which holds all the Terraform scripts describing the elements of our infrastructure. Terraform is a tool for building, changing and versioning infrastructure, and as you can see here, the Terraform syntax is quite simple. Here we have a commit that adds a DNS record to our DNS server on the Amazon platform. As soon as I pushed this commit to the repository, Jenkins detected it and started Terraform in dry-run mode, posting the result of this test to the GitHub pull request page. So we can see that Terraform would add exactly the DNS record that we want, without changing anything and, more importantly, without destroying anything. On merging this pull request, Jenkins runs Terraform again, but this time in apply mode, and the DNS record is added to the DNS server.

We take the same approach with Ansible through our infrastructure-playbook repository, which contains all the Ansible playbooks managing our useGalaxy.eu infrastructure. Here we have the list of the commits we pushed to the repository to update useGalaxy.eu from the previous Galaxy version to the current one, 21.05, which happened about 10 days ago.
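The Terraform commit I just described can be sketched roughly like this (a hypothetical example: the zone ID, record name and address below are made up for illustration, not the actual useGalaxy.eu values):

```hcl
# Hypothetical Route 53 DNS record managed by Terraform.
# Zone ID, record name and IP are illustrative only.
resource "aws_route53_record" "subdomain" {
  zone_id = var.zone_id            # ID of the hosted zone on AWS
  name    = "example.usegalaxy.eu" # fully qualified record name
  type    = "A"
  ttl     = 300
  records = ["192.0.2.10"]         # documentation-range IP address
}
```

Running `terraform plan` on such a change reports the record as a pure addition, with nothing changed and nothing destroyed, which is exactly the check Jenkins posts back to the pull request before a merge triggers `terraform apply`.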
And as you can see, I simply committed all the details of the new release to the testing server first, then I updated the Ansible playbook to fix some issues with the new release, and the day after the release, once I was confident I had a stable version of my Ansible playbook, I did the same for the main server. Jenkins re-runs the Ansible playbook for useGalaxy.eu every night, so useGalaxy.eu is redeployed every night. And that's all: this commit simply changes the branch used for the Galaxy code deployed to the main Galaxy server. As simple as that, working just through the GitHub interface and making some adjustments to our playbook, we updated the Galaxy server.

Okay, that covers the services, but we do the same for the compute infrastructure. We build a machine image, the Virtual Galaxy Compute Nodes (VGCN) image, containing a pre-configured operating system and the installed software needed to run all the Galaxy jobs. Here we have the repository where all this code is hosted. Every time I push an update to this repository, Jenkins detects it and starts a project using Packer, cloud-init and Ansible, producing a new machine image; this new image is then uploaded to our cloud infrastructure, ready to be used to create new virtual machines.
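The Packer + cloud-init + Ansible image build can be pictured roughly like this (a sketch assuming Packer's HCL2 syntax with its OpenStack builder and Ansible provisioner; the image names, flavor, network ID and playbook path are all hypothetical, not the real VGCN configuration):

```hcl
# Sketch: build a VGCN-style image on OpenStack, then provision it with Ansible.
locals {
  timestamp = regex_replace(timestamp(), "[- TZ:]", "")
}

source "openstack" "vgcn" {
  image_name        = "vgcn-centos8-${local.timestamp}" # resulting image name
  source_image_name = "CentOS-8-GenericCloud"           # base image with cloud-init
  flavor            = "m1.small"
  ssh_username      = "centos"
  networks          = ["00000000-0000-0000-0000-000000000000"] # placeholder network ID
}

build {
  sources = ["source.openstack.vgcn"]

  # Ansible installs the job-execution software the compute nodes need.
  provisioner "ansible" {
    playbook_file = "ansible/vgcn-playbook.yml"
  }
}
```

Jenkins runs the build on every change, and the finished image lands in the OpenStack image store, where the VM-spawning code can reference it.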
And once we have the image to create virtual machines from, we have another repository that physically creates the virtual machines, using Python code that parses a YAML file where we describe the version of the image we want to use and the details of the workers to be created as virtual machines in our cloud: how many of them, which flavor, which software needs to be installed in the virtual machine, whether we need to attach a volume or enable cgroups there, as simple as that. Every hour, Jenkins parses this YAML file, verifying that the virtual machines running for each class of workers are exactly as expected, as in the case of the worker class C64 here. But for this other class, you can see that two are missing, and Jenkins immediately asks OpenStack to create two new workers, as requested by our configuration.

Another element of the useGalaxy.eu infrastructure is the job dispatcher, named Sorting Hat. Here we have its GitHub repository: it is Python code that takes care of creating, at runtime, the proper destination for each job. It leverages a Galaxy feature, namely dynamic destination mapping. There are two main YAML files in Sorting Hat: one describing the destinations we can use for each job, and one describing the details of how each tool needs to be executed on our cluster. For example, the data fetch tool requests from the compute cluster a node slot with one core, this amount of memory, no GPUs and this runner, and can set these environment variables, and so on and so forth.

Another element of the useGalaxy.eu infrastructure is TIaaS, the Training Infrastructure as a Service. It is a service where you can request completely dedicated compute resources for your training through a web form; it has been used more than 200 times since 2018, and you can use all the material of the Galaxy Training Network on our infrastructure.
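Sorting Hat itself lives in its own repository; as a rough, self-contained illustration of the dynamic-mapping idea (this is not Sorting Hat's actual code — the tool IDs, resource numbers and destination parameters here are invented), a function can look up a tool's resource spec and build scheduler parameters from it:

```python
# Simplified sketch of dynamic destination mapping: pick per-tool resources
# from a spec table and fall back to a default. All values are invented.
TOOL_SPECS = {
    "__DATA_FETCH__": {"cores": 1, "mem_gb": 3, "gpus": 0},
    "bowtie2":        {"cores": 8, "mem_gb": 20, "gpus": 0},
}
DEFAULT_SPEC = {"cores": 1, "mem_gb": 4, "gpus": 0}


def map_tool_to_destination(tool_id: str) -> dict:
    """Return HTCondor-style scheduler parameters for a given tool."""
    spec = TOOL_SPECS.get(tool_id, DEFAULT_SPEC)
    return {
        "runner": "condor",
        "request_cpus": spec["cores"],
        "request_memory": f'{spec["mem_gb"]}G',
        "request_gpus": spec["gpus"],
    }
```

In a real Galaxy setup this kind of logic is hooked in through the job configuration as a dynamic destination rule; here it only shows how a YAML-backed spec table can be turned into per-job scheduler parameters.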
I would like to spend a few more seconds on this phrase: completely dedicated compute resources for your training means that we have a web server through which your students can be enrolled into a training role on the Galaxy server, dedicated to your training. The TIaaS administrator can then ask our automated infrastructure to create some virtual machines for the training: virtual machines that will be started on the day of your training and destroyed at its end, using a given flavor and number of virtual machines to create. Then there is a special function in the Sorting Hat code that realizes that a user has a training role and prepares the dynamic destination required by our job scheduler to execute the jobs of the training on the training nodes first; if those are full, the jobs are moved to the regular queue.

We also manage, in an automated way, the tools that are available on useGalaxy.eu. We have a repository with a YAML file describing all the tools that need to be installed on useGalaxy.eu, and we use Ephemeris, a Python library for managing Galaxy tools, index data and workflows. Here we can see a contributor who created a pull request to add a new tool. It is as simple as that: just a few rows describing the tool's name, its owner on the ToolShed website, and the section in which the tool needs to be placed. This is how we install all the tools needed on useGalaxy.eu.

The job scheduler we are using is HTCondor. You can find all the details about HTCondor on this web page, but I would like to highlight what is especially useful for us. In particular, new workers are added automatically: as soon as a virtual machine is up and running, HTCondor immediately detects it and adds it to the list of resources that can be used by your jobs.
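Returning to the tool list for a moment: a pull request to it really is just a few YAML lines. An entry in the Ephemeris-style tool file looks roughly like this (the tool name, owner and section here are illustrative, not taken from the actual useGalaxy.eu list):

```yaml
# Illustrative entry in the shed-tools YAML consumed by Ephemeris.
tools:
  - name: fastqc                                 # tool name on the ToolShed
    owner: devteam                               # owner on the ToolShed website
    tool_panel_section_label: "Quality Control"  # section in the tool panel
```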
HTCondor also has a really nice node-failure detection routine: it realizes when one of the nodes is no longer working, removes that node and resubmits its jobs to another machine. You can easily add GPU resources. We have cgroups limits available, with limits for cores and memory: if your job uses more memory than was declared at submission, it is removed from the running queue and put on hold, and then another task takes care of incrementing the amount of memory requested by your job and resubmitting it to the computing cluster. We also rank nodes, which means that your jobs are always delivered to the least busy nodes, and we get another feature for free: nodes that are misbehaving for any reason are skipped, or at least end up at the bottom of the list, so your job is less likely to be submitted to those nodes. HTCondor also provides an expressive framework for matchmaking resource requests with resource offers, so it is HTCondor that figures out the right resources to use for your jobs; it is not up to the users to know which is the proper queue for submitting a job. HTCondor also has a friendly and active worldwide community that organizes a week-long workshop every year. I have participated in some of them; they are really, really useful.
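The matchmaking I just described starts from the job's own requests. A minimal HTCondor submit description (a sketch; the script name and numbers are hypothetical) looks like this:

```
# Sketch of an HTCondor submit file: the job declares what it needs and
# HTCondor's matchmaker finds a machine that offers it.
executable     = run_galaxy_job.sh
request_cpus   = 2
request_memory = 8G
request_gpus   = 0
log            = job.log
output         = job.out
error          = job.err
queue
```

If the job then exceeds `request_memory`, the cgroups limit kicks in and the hold-and-resubmit logic described above raises the request before trying again.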
In the end, what I showed you today is a team effort: an effort of the European Galaxy team in an extended sense, not only my colleagues who are here with me working on the useGalaxy.eu infrastructure, but also other colleagues distributed across Europe. We are all working hard together to support users in different ways. We are fostering communities with our community outreach and resource dissemination program, and providing subdomains with their own welcome page and toolbox. We are working on producing scientific tools and workflows. There are a lot of teaching and training activities in place: look at the amazing work of the GTN network. I talked about the infrastructure development, but there are also a lot of other activities, like the Apollo server, managed completely by Elena and Anthony, and Marco, who is working on several roles and the Pulsar network, and the people contributing to the climate and ecology domains, and a lot of other activities.

In the end, my take-home messages are these. Listen to your users: they are using your infrastructure and they can help you a lot. Give them an easy way to contribute, because they will do it if they find a way to do it. Automate steps, making a better experience for the people who use your systems. As I showed you, every user can contribute to our infrastructure in an easy way through our GitHub repositories, and it is also super easy for the system administrators to run this kind of infrastructure: it is a win-win situation, both sides benefit from it. Most parts of our open infrastructure are used by other colleagues across Europe to build their own infrastructure. It works, it is easy, and it can be useful to you too: have a look, this is the URL. And that's all from my side. Thanks for listening, and enjoy the rest of the conference.