So in this part of the tutorial, we want to explain to you how to contribute tools for Galaxy. Ultimately, all tools end up in the ToolShed, where they are organized into different categories. If I just pick a random tool, it ends up here with a description, a link, potentially a link to the original repository, and also a revision history. This is all managed with Mercurial, maybe a remnant of the past. Most tools nowadays sit in GitHub repositories, like for instance the IUC repository, and from there they are pushed to the ToolShed by a subcommand of Planemo. So basically, every contribution should go through repositories like the IUC repository. There are many such repositories. There is for instance the devteam repository, which, as I can show you, also lives in the Galaxy project namespace as tools-devteam. Then there is the repository of Björn Grüning, then there are special repositories, for instance for MIRA and BLAST, then there is a Galaxy proteomics repo, and so on and so forth. If you look at those, they all look quite the same; they have a really similar setup. Maybe I should explain it for the IUC repo, which is one of the biggest ones. The main directory is the tools directory, which contains subfolders for each tool, or maybe a set of tools. There you find, as usual, as you know it already, the tool XML files. Then there is the test-data directory. And then there is a special file, the .shed.yml, which describes the tool: a short description, a link to this repository, a link to the original source of the tool, the owner name, and so on and so forth. Then there is a tool_collections directory, which contains, let's take this for instance, sets of connected tools. If you take for instance the samtools collection, those tools all wrap the samtools command, which has many subcommands, and each of these subcommands is represented by one tool. And they live here.
So here too: a test-data directory, tool XMLs, and so on and so forth. There is no clear distinction between tools and tool collections; larger collections of tools are often found under tools as well, so it's not that strict. Then there is deprecated, which contains deprecated tools and packages, tool collections and also data managers. And then there is maybe the heart of everything, the .github/workflows directory, which contains GitHub workflows that automate testing and everything else for this repository. The pull request workflow runs for each pull request that you open against this repository, and also if you commit new changes to a feature branch or the master branch. The continuous integration workflow runs the tool tests once a week, so our tools are regularly tested in order to see if anything goes wrong. Maybe the conda package breaks or the container breaks; then we will at least find this out and can keep the tools at good quality. The pull request workflow does not only test the tools, it also lints them: it will lint Python scripts that are contained in your tool directories, and also R scripts. And once pull requests are merged, the workflow also takes care of the deployment to the ToolShed. So creating pull requests that add new tools or improve existing tools is one of the things that you can do, and we will see how this works in a minute. The other way, which is maybe even easier, is issues. If you find something that is not working or not working as expected, or if you see room for improvement, then just open an issue here, and then maybe somebody who is experienced with the tool, or who just has time, can take this over in the future. This is also really helpful. Okay, how about pull requests? For the tutorial, we have created a sandbox repository where we can play around. This is basically an empty repository without any tools; well, there is one example tool, but it doesn't do anything useful.
And we can add whatever we want here. I forgot, there are two more files. There is the .tt_skip file, which is just a text file that can contain paths to tool repositories that should be ignored by the workflows, and the .tt_biocontainer_skip file works the same way, except that tools listed there are not tested with containers as usual, but with conda as the requirement resolver. Okay, how does this work? Everything that I explain here works the same way for the IUC repository and all the other repositories mentioned before. Let me just remove this. The first thing that we need to do is create a fork. A fork is a complete copy of the tool repository in your GitHub namespace. So just press the fork button here, and then you select where you want to create the fork; usually this should be your namespace. Of course, you will need to create a GitHub account first, but this should be really easy. And then this is forked: it has exactly the same content as the original repository, and now we can get a local clone of this repository, so basically download this thing. This is done with git clone: we just copy and paste this link to the command line and execute it. Then we change into the cloned directory and see that it is the same repository. The next thing that we want to do, I just go back in history here, is to add the original repository as an additional remote: git remote add upstream, and then copy and paste its link here. Then git remote -v to verify. We now have two remotes: the origin remote is your fork, and the upstream remote is the original repository. And now we can create a tool here. I will just copy the bellerophon tool into the tools directory and add it on a new branch: git checkout -b to create a new branch. It's nice to use branch names that are descriptive. Then we git add the tool and its test data, and then we commit this here.
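Taken together with the push that follows, the fork-and-branch workflow can be sketched as shell commands. The user name, repository URL, tool name, and branch name below are placeholders for illustration, not the exact ones from the video:

```shell
# Clone your fork (replace YOUR-USER and the repository name with your own).
git clone https://github.com/YOUR-USER/galaxy-tool-sandbox.git
cd galaxy-tool-sandbox

# Add the original repository as a second remote called "upstream".
git remote add upstream https://github.com/galaxyproject/galaxy-tool-sandbox.git
git remote -v   # origin = your fork, upstream = the original repository

# Create a descriptive feature branch, add the tool and its test data,
# commit, and push the branch to your fork.
git checkout -b add-bellerophon
git add tools/bellerophon
git commit -m "Add bellerophon tool"
git push --set-upstream origin add-bellerophon
```

The push output then prints the URL for opening the pull request against the original repository.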
Create a useful commit message, save it, and then we push this. Okay, we need to push to a new upstream branch, and the output already contains the link that you need to open in order to create a pull request. So what I did now is: I added the tool to my local repository and then pushed this to the origin repository, which means to my fork. And now I can create a pull request from my fork to the original repository using this URL. This opens the open-pull-request page. We can add a description of the tool, then a description of the pull request, add more text to the description, and then we can create the pull request. As soon as this is done, the pull request workflow starts. This is a series of jobs. The first one is the setup job, which sets up some caches and determines which tools changed; then only those tools will be checked, linted, tested, and so on and so forth. This will take quite a while. You can always get the details by clicking on the details link of a job. This yellow dot means the job is currently running, and if you click on the details, you will see its output. What is currently happening is just that we start a fake Planemo run on an empty tool in order to set up the caches; this can take a while. Then the workflow will find out which tools changed and show them here, so if you want to, you can check that over here. And then it will do a so-called chunking. Let's go to the overview page, which you find over here, and which shows you a nice representation of the complete workflow. The first one is the setup job, and then the linting, Python linting, tool linting, R linting, and the testing run in parallel. The testing is a little bit special: testing can run in parallel because there are many tools. If you open a large pull request, let's say with 10 tools and many tests, the tests might need quite a while.
So we allow splitting this into four so-called chunks, which can run in parallel, and this will just test everything a little bit faster. The downside is that we need to combine the results later on, which happens here in this combine step. One thing to take care of is that the tool test jobs themselves will never fail, at least not for tool test failures; the failure will only be determined in this job here. So if tool tests failed, the combine step will fail. And finally, there are two more jobs. There is one boilerplate job, which checks if there was any error in the workflow: we need one job to determine whether everything was successful or there was a failure, and this is this one. And the last one is the deployment step, sorry for that, the deployment step, which only runs once the pull request is merged. We can always go back to the separate jobs. And yeah, maybe while this is running, I show you a little bit more; what's happening in the background could also be interesting. The workflow lives here, so let's just have a look at the pull request workflow. What's happening here is basically that we have a set of jobs defined in this YAML file. We have the setup job here, which goes down to here, then there is the linting job, and so on and so forth. Each job consists of a set of steps. There are simple ones, which just run basically shell scripts. Then there are special steps which use pre-existing GitHub actions, for instance setup-python, and then we have this caching step. And then we have a special step which runs the planemo-ci-action that we created, which basically wraps Planemo, and here it essentially runs the discover step. This action is used quite often: it is also used in the linting jobs in lint mode, then we run it again in the test job in test mode, which is this step here, and the combine and also the deploy steps use this action as well.
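Since the planemo-ci-action essentially wraps Planemo, you can run roughly the same checks locally before opening a pull request. This is a sketch, assuming Planemo is installed and the tool lives at the placeholder path below; the exact flags the CI passes may differ:

```shell
# Lint the .shed.yml metadata and, with --tools, the tool XML
# (rough local equivalent of the CI lint jobs).
planemo shed_lint --tools tools/bellerophon

# Lint the tool XML on its own as well.
planemo lint tools/bellerophon

# Run the tool tests, as the CI test chunks do.
planemo test tools/bellerophon
```

Running these locally usually catches the same problems the pull request workflow would report, just without the chunking.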
So actually the heart of the workflows is the planemo-ci-action, which is defined here and which is basically a shell script that is called in the end by the GitHub action. So much for the background, for the ones that are interested in what's going on here. This seems to be running, so let me show you the IUC repo in the meantime. How is such a pull request then handled in the community? Let's pick a random, recent one. You see that there is already some discussion. Let's see, I guess this is an addition of new tools. Actually, it's a change, some update to some tools, so you can see here what was changed: some lines were replaced, here single quotes were added, then some parameters were added, tests were added; it looks quite nice. There's a new Python script. And then there is some discussion going on. What can be done is that you ask questions about specific parts of the code or the change. Just a moment. And again, sorry for this. And then you are supposed to answer this, to respond; maybe somebody requests changes. Let me show you how this works. If you go to the Files changed tab, you can just click here and leave comments: start a review or add single comments. If you start a review, you need to press this button to basically finish the review and submit it. But you can also just join the conversation by adding new comments at the bottom of the conversation page. In the end, one of the maintainers of the repository will approve the changes; for some repositories, like the IUC repo, it is required that there is at least one approval by a maintainer. And of course, all checks need to pass. And then a maintainer can merge this into the main repository. So please definitely feel free to ask questions and contribute to tool repositories like this: you can add issues, you can open pull requests, you can join the discussion.
Usually, no, I would have to say always, the community is really nice and helpful. So if you have questions, for instance regarding Git, which might be a new world to some of you, then just feel free to ask; I'm sure you will get an answer. So, here we have a failure: the tool linting failed. If we click on the failing job, we will immediately get to the failing step, and the output says that the tool is not in the changed repositories list, which is probably because the .shed.yml file is missing, which is actually the case. So what I will do is just copy a pre-existing .shed.yml file from another tool, I will just take a random one, into the bellerophon directory. And then I can edit this bellerophon .shed.yml. For categories, you need to choose one of the existing categories in the ToolShed; Sequence Analysis is fine here. Then the description: I'm lazy, I will just copy and paste it from the bellerophon package. The owner will be iuc. Then the repository link should be adapted to the final link to the repository, which in this case will look like this. And here you should insert the link to the original source of the software. Then we save this, and we need to commit it: git commit, of course, with a message. And now if we check our pull request, we see that the commit has been added to the pull request. So we can just continue to add changes to the branch living in our fork, and they will immediately appear in the pull request as well. And this will trigger the pull request workflow again. So let me just pause the video here for a moment; since all the tests are running, you can just take a break yourself, get a coffee, go for a walk, whatever you like. See you in a moment. So, the pull request workflow has now run completely.
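The .shed.yml edited above could look roughly like this; the field names are the standard ones, but the concrete values (description, URLs) are illustrative placeholders, not the exact ones from the video:

```shell
# Write an example .shed.yml for the tool (values are illustrative).
mkdir -p tools/bellerophon
cat > tools/bellerophon/.shed.yml <<'EOF'
name: bellerophon
owner: iuc
description: Filter mapped reads for chimeric junctions
long_description: |
  Filter reads from a merged BAM file, keeping the best-mapping
  half of each chimeric read pair.
remote_repository_url: https://github.com/galaxyproject/tools-iuc/tree/main/tools/bellerophon
homepage_url: https://example.org/bellerophon
categories:
- Sequence Analysis
EOF
```

The remote_repository_url points at the tool's folder in the community repository, while homepage_url should point at the original source of the wrapped software.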
We see that most of the jobs were successful, but the combine test results job failed, which we see by the red X here. So the tool test jobs, in essence, failed. Now let's look at the details. This will just tell us that there was an unsuccessful test and that we should inspect the "all tool test results" artifact for the details. So what are artifacts? The GitHub Actions workflow produces some files, which are called artifacts, and which can then be downloaded by the user. From this page, the easiest is to just click on artifacts at the top, and then you see the all tool test results artifact. The other way is to go to the overview page and scroll down; here too all the artifacts are listed. It's completely the same, you can access them both ways. The artifacts are always zip files for technical reasons, even if they contain just text files or a single file; it will always be a zip file for the moment. The all tool test results artifact contains an HTML file and the JSON output of planemo test. As usual, we can just click this here, see the test summary, and check the failed test, and we see it's the same problem: a difference in the BAM output. You can easily fix this by adding the number of allowed line differences to the test definition. But maybe that's up to you: just fix the test and commit to the branch. Okay. Yeah, that's basically it for how to contribute changes to the tools in a tool repository, or how to add a new tool, and basically it for this tutorial. We would suggest that you either practice this with the bellerophon tool that we provide, or, if you are already developing a tool, feel free to open a pull request at this sandbox repository, or, if you feel like it, at one of the public repositories; then you can tell us during the tutorial and we can have a look at your pull request.
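One way to iterate on a failing test like the BAM difference above is to reproduce it locally and then relax the comparison in the tool XML. This is a sketch, assuming Planemo is installed; the tool path, file names, and the value of lines_diff (Galaxy's test attribute for allowing a number of differing lines) are placeholders:

```shell
# Re-run only this tool's tests locally and write an HTML report.
planemo test --test_output report.html tools/bellerophon

# In the tool XML, allow a few differing lines in the output comparison,
# for example:
#   <output name="outfile" file="expected.bam" lines_diff="4" />
# then commit and push the fix to the same branch; the pull request
# workflow will run again automatically.
git add tools/bellerophon
git commit -m "Allow minor differences in BAM test output"
git push
```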
Okay, so maybe one last thing. Of course you can use the public repositories, and I think it's a good thing to use them, but sometimes there may be reasons to create a new public repository: maybe you want to contribute tools from a category that does not exist yet, maybe a completely new domain. Currently it's mostly bioinformatics and mass spectrometry; climate data is coming, and so on and so forth. Or you are a working group and you want to maintain the tools on your own. That's completely fine, and you have a little bit more control over the tools than you might have in the public repositories, although usually the community is really open to contributions from groups as well anyway. It's really easy to set up such a repository, because we have created a so-called template repository, which currently still lives in my GitHub namespace, but I'm quite sure that this will be moved to the galaxyproject namespace quite soon. So, the Galaxy tool repository template. It is, as the name says, a template, and it's really easy to use: in the README we describe how to set this up, and only a few steps are necessary. All you need to do is click "Use this template", and it will ask you where to put the repository, ask for a repository name and maybe a description, you can decide if it's private or public, and then just click create and everything will be done. Then you need to do the adaptations listed here. There are a few places where the workflows need adaptations, because they have the namespace and also the repository name hard-coded in them. Then there are badges here, and also there are links which are hard-coded and need to be changed. And you need to set up the two secrets for the API keys for the ToolShed and Test ToolShed, which are needed for the deployment step, and then you may want to remove the example tool. The secrets are set if you go to Settings and then Secrets, and there you can add a new repository secret.
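Besides the web UI, the repository secrets can also be set with the GitHub CLI. The secret names below are examples only; check the template's workflow files for the exact names they reference:

```shell
# Store the ToolShed and Test ToolShed API keys as repository secrets.
# gh prompts for the value so the key never ends up in your shell history.
# Secret names are examples; use the names the workflows expect.
gh secret set TS_API_KEY
gh secret set TTS_API_KEY

# List the configured secrets (values are never shown).
gh secret list
```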
So that's the ToolShed API key and the Test ToolShed API key; you just add the values here. This way we don't need to put the API keys into the repository itself, which would be a really bad idea. And then you have your own repository with all the workflows, and so on and so forth. Okay, then thanks for your attention. Looking forward to questions during the tutorial. And, yeah, have a good time.