Hello, my name is Alexandru Mahmoud from the Galaxy team at Johns Hopkins University, and today I will be talking about the new Kubernetes-powered ChatOps development environment for Galaxy. What is this? At its core, beyond all the layers through which we expose the functionality, all we're doing behind the scenes is deploying transient instances of Galaxy in transient namespaces on Kubernetes. Paired with a trick I came up with about a year ago of injecting code as configuration to avoid rebuilding the Docker image, this allows developers to quickly deploy local changes to a shareable Galaxy instance running on the web. This started as a pet project at last year's post-conference hackathon between me and Mohammad Safadiyeh, at the time a rising undergraduate senior interning at Hopkins.

The system is built for the most part from existing parts of our stack, just adding layers of abstraction at a few levels to ease the use of Kubernetes regardless of one's experience in cluster or infrastructure management. Notably, I repurposed the Genomics Virtual Lab (GVL) stack, particularly CloudMan Boot and CloudMan, to deploy a scalable Kubernetes cluster, and I'm using the Galaxy Helm chart for the deployment of Galaxy within that cluster. With the stack as a whole already designed for namespace isolation, the GVL was the perfect fit for providing the underlying infrastructure.

There are currently two layers at which this development environment can be used. The first is the underlying bash scripts, through which one can hook up local clones of the Galaxy and Galaxy Helm repositories to deploy and preview on a Kubernetes cluster. The second allows developers to interact with a bot in GitHub pull requests, and this ChatOps feature will be the main focus of this presentation.

Without further ado, I will start by demonstrating an example of such an interaction in a GitHub PR. In this example, I have a pull request in which I'm modifying back-end code for the Kubernetes runner. To showcase the change, I will log a string with a lot of new lines in the code, and then look for that output after running a job on the live preview instance. I will first direct the bot to deploy this PR. The bot will respond at several points. First, it reacts to your comment, signaling that it has seen and recognized the command; at this point it has dispatched the corresponding workflow. Then, it posts a boilerplate starting comment when the deployment workflow itself has started running. Finally, after Galaxy has been deployed with the modified code and all the handlers have passed their health checks, the bot posts a comment with the output of the helm install command, notably including the link to the live instance.

We can now go to Galaxy and test our change. To do so, I will upload a quick pasted text file and run a small text-manipulation job on that dataset. Given that I triggered jobs on Kubernetes, I expect to see the debugging statement introduced in the code printed in the job handler log. I now want to look at the job handler logs, which I can do in two ways. The first command, comment log, will make the bot reply with the tail of the requested log, while gist log will put the entirety of the log in a gist and post a comment linking to it. As you can see, the handler did in fact print the new debugging output when running jobs.
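To make the interaction concrete, the exchange above might look roughly like the following in the PR thread. The slash-command spelling and argument names are my assumptions based on the talk, not documented bot syntax:

```
# Hypothetical PR comments (the talk names the commands but not their exact form):
/deploy               # bot reacts, comments when the workflow starts, then
                      # posts the helm install output with the live link
/comment log handler  # bot replies with the tail of the job handler log
/gist log handler     # bot gists the full log and posts a link to it
```

Under the hood, the deploy step can skip rebuilding the Galaxy Docker image because modified source files are injected as configuration. Here is a minimal sketch of that idea, assuming a Helm 3 client, the CloudVE Galaxy chart, and a file-mapping value such as extraFileMappings; the actual chart values and script internals may differ:

```bash
# Sketch only: overlay one locally edited source file onto the stock image
# as configuration, so no image rebuild is needed. The extraFileMappings
# key and chart reference are assumptions, not verified project internals.
helm upgrade --install galaxy-pr cloudve/galaxy \
  --namespace pr-preview --create-namespace \
  --set-file 'extraFileMappings./galaxy/server/lib/galaxy/jobs/runners/kubernetes\.py.content'=lib/galaxy/jobs/runners/kubernetes.py
```

Teardown is then presumably just a helm uninstall plus deleting the namespace, which is what keeps each instance fully transient.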
Furthermore, changes can be added to the pull request as desired, and a new invocation of the deploy command will update the running instance with the latest code on the branch. To demonstrate that, while also showcasing editing multiple files in the same PR, I will add a similar debugging statement to the util code. After committing the change, I will redeploy the instance and wait for the comment signaling the upgrade. Similarly, by running the two jobs on the new Galaxy instance, I can verify that the new debug statements appear in the logs as expected. To do so, we again use the log-retrieving commands, which, as you can see, show the new print statement as expected, both in the commented log tail and the full gisted log. Finally, when I am done previewing my changes, I can dispatch the teardown command, which will uninstall this Galaxy instance and delete the namespace, completing the lifecycle of this transient development instance of Galaxy.

Dissecting the commands in a bit more detail, I will start with the deploy command, which currently accepts arbitrary parameters for the underlying helm install command. The --set flag is a common Helm parameter, and can be used in this case to modify many aspects of the Galaxy Helm chart deployment, including but not limited to Galaxy configuration. Additionally, while not yet available, work is being done to accept a set of preset deployments, mimicking chunks of configuration particularly useful for testing certain integrations. Next are the gist log and comment log commands. These commands accept similar parameters and differ mostly in their method of reporting the logs. Both need at least one argument, which specifies the type of log to retrieve. Soon, comment log will also feature a grep argument, with some key grep features, allowing developers to extract particular portions of logs. Finally, the teardown command does not currently take any parameters, as it simply tears down the Galaxy instance associated with the pull request. In the near future, an automatic teardown will be dispatched when a pull request is closed or merged.

While not very mature, the current ChatOps system has gotten to the point where it has started to be useful in facilitating some development. As such, given the support for the basic lifecycle, the plan is to release an alpha version of the system, at first for committers only, that can be used in pull requests on either the Galaxy or Galaxy Helm repositories, for the Galaxy app or the Galaxy Helm chart respectively. This system will use a public Kubernetes cluster hosted on Jetstream, built on the GVL infrastructure as previously mentioned. A public cluster for non-committers is also in the works, but will require some strict limitations to avoid compute resource abuse. In the meantime, anyone can take advantage of the system by setting it up on their own individual forks, and I'll be more than happy to assist anyone interested with either the Kubernetes deployment or the ChatOps setup. And with that, thank you for listening, and please find me on Remo, Gitter, or email for any questions or follow-ups.
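As a quick reference for the parameter passthrough described above, a customized deployment and the matching teardown might look like this; the configs layout mirrors the Galaxy Helm chart's convention of mapping config files into values, but the exact keys and bot syntax are assumptions rather than documented behavior:

```
# Hypothetical parameterized invocations; --set is forwarded to helm install.
/deploy --set configs.galaxy\.yml.galaxy.admin_users=dev@example.org
/deploy --set webHandlers.replicaCount=2   # scale the web handlers (assumed key)
/teardown                                  # no parameters; removes this PR's instance
```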