Hello everyone, and welcome to this small update from the single-cell community in Galaxy. In this short talk we will cover the background of single-cell analysis, as well as the existing materials and workflows and the upcoming trainings.

Here we see the typical analysis structure for single cell. It accommodates custom pre-processing protocols such as CEL-Seq2, used initially by the RTG, as well as STRT-seq, Smart-seq2 and many others that require manual demultiplexing. Here, FASTQ data from the batches are demultiplexed into individual cells, either as separate FASTQs or as one large file with annotated features. These are then mapped to yield individual feature counts for each cell, which are joined into one large table: the feature-count matrix. For one-click mapping and quantification platforms such as 10x Genomics, the demultiplexing, mapping and union of matrices are combined into one single step, where the user only needs to feed in the batched FASTQ data to receive a count matrix.

In the downstream analysis on the right, you have your standard filtering and QC step, followed by normalization and confounder removal to boost the biological signal while reducing variation from unwanted technical and biological sources such as library size, quantification bias and cell-cycle noise. A dimension reduction such as PCA, t-SNE or UMAP is then applied, followed by clustering such as K-means, hierarchical, Louvain, Leiden and others. This yields cell clusters in a nice 2D, or sometimes 3D, plot, whose relationships and lineage with respect to one another are inferred by cluster proximity and cluster entropy.

The teaching structure emulates the analysis structure, with the pre-processing and downstream stages grouped as separate tutorials. Users first start with a video introduction to single cell, as narrated by Amazon Polly, and they can then opt towards understanding barcodes and demultiplexing.
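The pipeline described above can be illustrated with a minimal sketch, assuming toy per-cell counts in plain Python dictionaries (the cell and gene names are purely hypothetical): individual feature counts are joined into one feature-count matrix, normalized for library size, and reduced to 2D with a PCA computed directly from an SVD.

```python
import numpy as np

# Hypothetical per-cell feature counts, as produced after demultiplexing
# and mapping; cell barcodes and gene names are illustrative only.
cell_counts = {
    "cell_1": {"GeneA": 5, "GeneB": 0, "GeneC": 12},
    "cell_2": {"GeneA": 3, "GeneB": 7, "GeneC": 0},
    "cell_3": {"GeneA": 0, "GeneB": 2, "GeneC": 9},
}

# Join the individual counts into one large table: the feature-count
# matrix (cells x genes), filling missing features with zero.
genes = sorted({g for counts in cell_counts.values() for g in counts})
matrix = np.array([[counts.get(g, 0) for g in genes]
                   for counts in cell_counts.values()], dtype=float)

# Normalize for library size (total counts per cell), then log-transform,
# a simple stand-in for the normalization / confounder-removal step.
library_size = matrix.sum(axis=1, keepdims=True)
normalized = np.log1p(matrix / library_size * 1e4)

# Dimension reduction: PCA via an SVD of the mean-centered matrix,
# keeping the first two components for a 2D plot or clustering.
centered = normalized - normalized.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
pca_coords = centered @ vt[:2].T
```

In a real analysis these steps are handled by the Galaxy tools and workflows themselves; the sketch only shows the shape of the data as it moves from per-cell counts to a matrix to low-dimensional coordinates.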
Or they can go straight into one of the three pre-processing workflows, which will produce a count matrix for them. They can then choose from the four downstream trainings, such as RaceID, Scanpy and Monocle, to analyze their data, and expert users can even spin up a Jupyter or RStudio notebook within Galaxy and manually inspect their clusters themselves, making use of the vast compute resources compared to their laptop.

Our materials and tools have grown significantly over the past few years thanks to the combined efforts of several members of the community, and we now have 12 different tutorials grouped into four different stages, making use of over 70 different tools. This single-cell omics workbench of tools and trainings is greatly extendable, and as the field develops, so will the workbench. More tools and trainings are currently being developed, such as those focusing on RNA velocity analysis for predicting the future state of a cell, as well as RNA deconvolution tools to infer cell states from bulk RNA-seq data.

Each of the single-cell tutorials comes with a corresponding workflow, and we have dozens of these to accommodate many different analysis strategies. As you can see, they are highly configurable with an intuitive drag-and-drop interface, facilitating a branching analysis style that can make use of the 8,000 processing cores and 1.5 petabytes of storage we have for the Galaxy Europe instance here in Freiburg. Users never need to touch the command line, and this easy-to-use graphical interface greatly encourages users to play with their data. Users can then publish their data and customized workflows, following FAIR principles, so that anyone can easily reproduce their analysis. And the existing materials can only continue to grow.
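The kind of manual inspection an expert user might do in one of those Galaxy notebooks can be sketched as follows, assuming a toy count matrix (the values and thresholds are hypothetical): compute standard per-cell QC metrics such as library size and the number of detected genes, then filter out low-quality cells.

```python
import numpy as np

# Toy count matrix (cells x genes); values are illustrative only.
counts = np.array([
    [120,  0, 35,  4],
    [  2,  1,  0,  0],   # a low-quality cell with a tiny library
    [ 80, 15, 60, 22],
])

# Standard per-cell QC metrics: total counts (library size) and the
# number of genes with at least one count (genes detected).
library_size = counts.sum(axis=1)
genes_detected = (counts > 0).sum(axis=1)

# Filter cells below simple, hypothetical thresholds, mirroring the
# filtering/QC step of the downstream analysis.
keep = (library_size >= 50) & (genes_detected >= 3)
filtered = counts[keep]
```

On a real dataset with many thousands of cells, this is exactly the sort of quick interactive check that benefits from the compute behind the notebook rather than a personal laptop.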
Building off the core materials, we've expanded into case-study training materials that replicate the findings of datasets from the literature, while providing an alternative means to pre-process the data using alignment-free tools such as Alevin. This is quite powerful for the learner, because these datasets travel from tutorial to tutorial and give you the messy, real data that scientists have had to plod their way through, so it's quite a valuable experience.

We've also made good use of the live environments, such as the Jupyter and RStudio notebooks within Galaxy. They enable us to explore single-cell datasets with a greater degree of freedom and precision, and this empowers users to really make the most out of their analysis by harnessing the processing power of these environments, which would not be possible on their personal laptops or desktops.

On top of the existing materials we have, quite frankly, loads more plans, but for right now we're focusing on two new suites of trainings. The first deals with RNA velocity analysis to predict the future cell states of neighboring clusters, involving the scVelo tools from the Scanpy suite. This is nice because normally we infer relationships between the different cells, but this actually uses RNA maturation to prove them, so it's a much more powerful analysis. This suite of trainings will work independently but still be complementary to the Scanpy trainings we already have in Galaxy, so that users can generate their velocity analysis from raw FASTQ data, with or without prior pre-processing or downstream analysis from the other workflows. The second suite of trainings will introduce RNA deconvolution tools into Galaxy, which aim to take bulk RNA-seq data and infer cell states and clusters from it as if it were native single-cell data. Anyone who has a bulk dataset and is balking at the price of doing single-cell analysis on their cells is going to be quite excited about that.
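The RNA-maturation idea behind velocity analysis can be sketched in a few lines, under the steady-state model used by the velocyto/scVelo family of tools (the counts below are hypothetical): for one gene, fit the ratio between unspliced (nascent) and spliced (mature) counts across cells, and read cells above the fitted line as induced and cells below it as repressed.

```python
import numpy as np

# Spliced (mature) and unspliced (nascent) counts for one gene across
# four cells; values are illustrative only.
spliced   = np.array([1.0, 2.0, 3.0, 4.0])
unspliced = np.array([0.5, 1.1, 1.4, 2.0])

# Steady-state model: fit the ratio gamma by least squares through the
# origin, so that u ~= gamma * s at equilibrium.
gamma = (spliced @ unspliced) / (spliced @ spliced)

# Velocity: the residual of unspliced counts around the steady-state
# line. Positive => the gene is being induced (future upregulation);
# negative => it is being repressed.
velocity = unspliced - gamma * spliced
```

This is of course a toy version of the single-gene fit; the real tools do this genome-wide and project the resulting vectors onto the embedding to draw arrows between neighboring clusters.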
This data can then be used with the other single-cell materials in Galaxy, and ultimately it will serve as a nice bridge between the bulk RNA-seq and the single-cell RNA-seq trainings we have in the Galaxy Training Network. Thank you very much for listening.