Hello, my name is Rainie Girden and I'm a computational biologist who has been working with GalaxyWorks since March of 2021. As we all know, Galaxy is a critical platform for bioinformatics analysis that has long supported workflow design, sharing, and reproducibility. With its growing community, Galaxy has seen an increase in the number of published workflows, not only on public Galaxy instances but also in papers. However, as the number of published workflows grows, so does the barrier to entry for users looking for the best workflow for a particular analysis. The recently formed Workflows Working Group aims to address this with a set of vetted workflows.

In this talk, the Galaxy workflow team would like to share our experience developing Galaxy Pro workflows, a set of Galaxy workflows that have been thoughtfully created, operationally validated, and seeded for user consumption on the Galaxy Pro platform. Each Pro workflow is developed through a nine-stage process that ensures its quality, and we will cover each of those steps.

Identifying the cornerstone tools of your workflow is the first step. Make a list of tools, ideally preferring those with published benchmarking data, then check whether the desired tools are available in the Galaxy Tool Shed. When a tool or version is not available, the latest tool version should be wrapped and then submitted to the Intergalactic Utilities Commission (IUC) tools repository on GitHub. In our experience, this process generally takes several weeks, after which workflow creation can begin.

Here, all imported tools and/or subworkflows are strung together. In our experience, subworkflows have been an important and intuitive organizational tool for our Pro workflows. They allow workflows and tools to be compartmentalized into smaller, manageable chunks, and they help prevent main workflows from becoming overly complex spiderwebs.
They are created in the same manner as main workflows but are generally more limited in scope.

The next step is to identify sample data. It is extremely important to find good, recent, publicly available test data for a workflow to allow robust and reproducible testing of the published workflow. Great resources for identifying test datasets are the NCBI Sequence Read Archive (SRA) and the European Nucleotide Archive. Metrics about the quality of the test data should be collected using quality control tools like FastQC to ensure accurate and truthful benchmarking.

All Pro workflows must have a user manual, and this documentation must clearly include the required input files and parameters needed to run the workflow, the expected outputs from a successful run, a brief description of the workflow's logic and the tools used, and finally citations for the major tools. It is recommended that an example run of the workflow also be demonstrated in the documentation.

Alongside this is a quick start guide, a short document, generally implemented as an interactive Galaxy Page, that distills the minimum requirements into a step-by-step list for running a Pro workflow. It includes instructions on uploading data and reference data, on formatting that data, and on selecting the desired workflow along with setting any pertinent parameters in the workflow's tools. The goal is to let users set up a run on their data of interest in under 15 minutes, which makes it easier to test different Pro workflows for compatibility with a desired analysis.

At the end of a Pro workflow, there is a report template that collects and displays data from the workflow in a standardized way. Apart from providing this data in a human-friendly, readable format, standardized reports foster an ecosystem of reproducibility and data accessibility.
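In Galaxy, report templates of this kind are written in the workflow report flavor of Markdown, which mixes ordinary Markdown with Galaxy directives that pull datasets and metadata from the workflow invocation. The following is a minimal sketch, not one of our actual templates; the output labels (`deseq2_table`, `volcano_plot`) are hypothetical and would need to match output labels defined in the workflow editor.

````markdown
# Differential expression report

Workflow run completed at:

```galaxy
invocation_time()
```

## Significant genes

```galaxy
history_dataset_display(output="deseq2_table")
```

## Volcano plot

```galaxy
history_dataset_as_image(output="volcano_plot")
```
````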
These report templates aim to help users get a high-level understanding of their results at a glance.

Once a workflow is operational, datasets for quality benchmarking can be obtained. It is best practice to evaluate analysis results against a known truth so that the quality of the results can be assured. These datasets can either be found, by identifying publicly available datasets with trustworthy published analysis results, or made, by simulating sequencing data from a pre-established ground truth and evaluating against it.

It is also important to conduct resource usage benchmarking on the workflow, notably in terms of runtime, CPU, and memory usage. By running the workflow with different parameters and on different data while measuring resource usage, this benchmarking aims to give users high-level insight into what their prospective analyses could require in terms of resources and runtime. The diverse runs during benchmarking are also a great opportunity to gather sample results from different combinations of inputs and parameters. Sample results are an important feature of a Pro workflow: they provide an example of expected outputs, allowing users to judge the compatibility of the workflow with their desired analysis without having to run it themselves.

The last steps in this process are to ensure (1) reference data availability and (2) the setup of automated testing. Just like the sample data, the reference data used in the published workflow must be accessible and readily available to users for the sake of reproducibility and future workflow testing. And finally, while our confidence in our workflows is high at the moment, it is important to create automated workflow tests to ensure and facilitate their continued maintenance.
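As a sketch of what such a test can look like: planemo supports workflow tests described in a YAML file that sits alongside the workflow's `.ga` file, pairing small input files with assertions on the outputs. The input name, file paths, and output label below are hypothetical placeholders, not taken from our actual workflows.

```yaml
# my_workflow-tests.yml — run with: planemo test my_workflow.ga
- doc: Smoke test on a small subset of reads
  job:
    input_reads:                 # must match the workflow's input label
      class: File
      path: test-data/subset_R1.fastq.gz
  outputs:
    filtered_de_table:           # must match a workflow output label
      asserts:
        has_text:
          text: "gene_id"
```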
These tests are generally run on a small subset of input data, and they aim to determine whether the component tools are installed correctly, working as expected, and producing the expected results. These test datasets must be small enough to be distributed alongside the workflow through version control platforms such as GitHub wherever possible.

Now, the GalaxyWorks team would like to share the current Pro workflows. Here are simple flow-step representations of each workflow, along with a quick synopsis of the two-sample RNA-seq differential expression workflow. RNA-seq reads are used to compare expression levels between samples to determine which genes are differentially expressed between the tested conditions. Reads are pre-processed by the tool fastp and then analyzed to produce quantitative estimates of transcript abundances using the tool Salmon. From there, the tool DESeq2 compares expression levels between samples in different conditions. Finally, the differential expression tables produced here are used to generate both a filtered table of only the significantly differentially expressed genes and a volcano plot illustrating the log-fold changes and corrected p-values.

As a thank you to the Galaxy community, we'd like to contribute these three Pro workflows to the IWC and the Workflows Working Group as a seed for the library being developed there.
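For readers who want a feel for what the two-sample RNA-seq workflow does under the hood, its steps correspond roughly to the command-line sketch below. This is not how the Pro workflow is executed (in Galaxy each step runs as a wrapped tool on the platform), and the file names, index path, and output locations are placeholders.

```shell
# Placeholder inputs for one sample; the real workflow processes
# samples from both conditions.
READS1=sample_R1.fastq.gz
READS2=sample_R2.fastq.gz

# Guarded so this sketch is a no-op on machines without the tools installed.
if command -v fastp >/dev/null 2>&1 && command -v salmon >/dev/null 2>&1; then
    # 1. Read pre-processing (adapter trimming, quality filtering) with fastp
    fastp -i "$READS1" -I "$READS2" \
          -o trimmed_R1.fastq.gz -O trimmed_R2.fastq.gz \
          --json fastp_report.json

    # 2. Transcript abundance estimation with Salmon
    #    (assumes a prebuilt transcriptome index in ./salmon_index)
    salmon quant -i salmon_index -l A \
           -1 trimmed_R1.fastq.gz -2 trimmed_R2.fastq.gz \
           -o salmon_out
fi

# 3. Differential expression is then computed with DESeq2 (an R package),
#    comparing Salmon's per-sample abundance estimates between conditions.
```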