Good afternoon everyone, and thanks for still being here after three intensive days of talks, discussions and workshops. I just need your attention for 45 more minutes, and we can try to make it shorter. The good thing is that this is a no-code workshop, or short demo, so I hope you'll stay awake. On the schedule it says that Wes Wilson was supposed to give this talk; unfortunately he had to cancel, so I'm replacing him. I'm Alexander Nguy, head of bioinformatics at Tercen, a small company that develops the product you're going to see for the next 45 minutes, so I won't say too much about it now. The talk is about how to use Tercen to run Bioconductor pipelines. I'll first give a general introduction, five to ten minutes, about what Tercen is. Then it gets a bit more interactive: I'll show you how to start working with Tercen and we'll get familiar with the interface; it's a data analytics platform. Finally, the last part of the talk is about the pipeline we've built together with Wes, a flow cytometry pipeline, because we mainly work with flow cytometry data on Tercen, though we're also starting to support more and more pipelines around single-cell RNA-seq.

So let's get started. Tercen is a platform that enables scientists to perform big data analytics without having to learn programming. We want to change the way data analysis is performed in the life sciences by being faster and more flexible, and by encouraging collaboration between technical and non-technical users within a single platform. Let me first mention the challenges we try to overcome with Tercen. What do biologists, and scientists in general, want? They want more control over their datasets, and the datasets are becoming bigger and bigger and more difficult to deal with. They want to add metadata and annotate the datasets, for example with sample information, patient information and so on. They want interactivity with the data, and everyone wants standardized workflows: you have your datasets, you have a standard workflow that you run, and you get your results. And no one wants to use many, many tools; the best thing is a single place where you can do all your analysis.

Now about us, bioinformaticians. I will assume that most of us here identify as bioinformaticians, even if there's really a full spectrum between pure biologist and pure bioinformatician. What do bioinformaticians want? They want to support scientists. They also want standardized workflows and processes, version control systems, and the ability to offer innovative algorithms to their teammates. And most importantly, I think we would all agree, we always want to increase the efficiency of our processes. Some of us are almost obsessed with automation: we like to automate pipelines and deployment, and to reuse existing code. So this is our big goal, and it might be a bit ambitious, but what we want to do at Tercen is to empower the biologists and liberate the bioinformaticians. Let's see it that way.

Now, more specifically about single-cell data. As you can see from the talks over the last three days, Bioconductor used to be about much more than single-cell data, but I think that now 90% of the research effort is around single-cell data.
There are challenges associated with that, especially when you want to bring together different data sources: they come from different origins, different results, different workflows, sometimes different ontologies. We talked about ontologies just before; sometimes you have different names for the same thing. Also, some coding frameworks are preferred for some data types, everything is all over the place, and it can be difficult to integrate these different data sources and frameworks. These are the challenges associated more specifically with single-cell data.

So what does Tercen take care of? We try to provide an abstraction of the data, so there is structure around how data is handled in the platform. We have an abstraction around operators: what we call an operator, and sometimes an app, is a piece of code, a brick in a workflow, that is reusable and that works regardless of the input data. So we abstract the operator and the computation tasks. We handle relationships between computed data, and I will get back to that later; it's one of the core components of Tercen. There is a high-performance visual interaction system for exploring the data interactively, which works up to two million data points. And finally, we have algorithms that are standardized as apps, and standardized workflows.

When I talk about algorithms: we did not reimplement everything, right? That's also why we're here, because we benefit from the developments of the open-source community. There's a statistical layer with the algorithms. These are Bioconductor algorithms, but they could be in any programming language: Python, R, MATLAB, Java, anything. You have this piece of code, and Tercen is a layer, or even two layers, on top of these computations. There's a relational data layer, that is, we build relations between inputs and outputs; you will see that later on. And there's a visualization layer on top of everything, where you can interactively play with your data and visualize even large amounts of data.

So, about apps and the operator development process. As I mentioned, there's this very rich statistical ecosystem built around R, Python, Java and so on. There's Bioconductor, of course, and other frameworks for bioinformatics pipelines such as Nextflow, but there are many, many others. The idea of developing an operator, a brick for a workflow in Tercen, is what we call an appification process: we convert these algorithms, in different languages and different frameworks, into a common language within Tercen. So we wrap these functions and packages, we build what we call operators, we have a quality process around that, and then we release an app that can typically be installed in one click.

Now, before switching to the demo, let me mention the use case I'm going to present. As I mentioned, this work has been done in collaboration with Wes Wilson from the University of Pennsylvania. We developed a flow cytometry pipeline on Tercen for NovoCyte data. We start from FCS files, and here you have a summary of the pipeline; we'll get back to it later. We have these FCS files, and then some data processing and QC steps here. Each of these squares is an app, an operator, in Tercen.
Then we use the FlowSOM algorithm for clustering, and then we have a few other apps for interpretation and insight generation, and we end up with graphs, metrics, tables, or a PDF report for the scientists or anyone in the team. And again, we're talking about Bioconductor pipelines here, because this is based mostly on, and is a minimal version of, a Bioconductor workflow by Nowicka and collaborators. I think Lukas is here, so thank you, Lukas; I'll take this opportunity. That workflow is our main source of inspiration, and other Bioconductor pipelines are a big source of inspiration for us too. Once we bring a pipeline into Tercen, it is version controlled, it is standardized, and it is also interactive, extensible and flexible. As I mentioned, these are bricks: if you don't like FlowSOM, or if it does not work well on your data, you just replace this block with another one and run the pipeline again.

So now it's demo time, enough slides for now. This is a relatively short demo, so it's not supposed to be too interactive, but you could try to follow along if you'd like. If you want to follow along, you can create an account and connect to the cloud version at tercen.com, and you can try it out at home as well. There will be four small parts. First, I will give a general intro to the Tercen user interface. We'll see how to run a computation using an operator. Then I will switch to a bigger workflow and show you a flow cytometry pipeline. And finally, if we have time, we'll talk a bit about operator development: I can show you the code and what it looks like from the developer's perspective.

All right, so I go to tercen.com. I'm already logged in, but you can create an account; I don't see many laptops in the room, but just in case, it takes just one minute. The structure is built around projects and teams. A project contains some data, some workflows, and maybe some reports as well. Teams are a way to have a common workspace between collaborators and to share data and workflows easily. So I'll just create a small project in my own workspace and call it "my project", just to be original. Now I have a new window with an empty project, basically, and the first thing I want to do is add a dataset to analyze. If you want to download one, we have the crabs dataset on GitHub: go to Tercen's GitHub, github.com/tercen, and look for the crabs dataset repository if you want a dataset to play around with. We have it in the wide format and in the long format. If you don't know crabs: it's like iris, but with crabs, basically, one of the classic base R datasets. You can click on "crabs long", that's one way of doing it, then "Raw", then right click, "Save as" (mine shows it in French) and save it on your computer. That's just if you're following along; I already have it on my computer. (A sketch of building this table yourself appears just below.)

So I create a new dataset, and you see that when I click on "new dataset" I have multiple options. Each of these options is an operator, a special kind of operator designed to import data. It could be a simple text file, like here, but as you can see, we also have an operator to import Cell Ranger output for 10x data.
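(If you'd rather build the long-format crabs table yourself instead of downloading it, here is a minimal sketch in R. It assumes the standard crabs dataset from the MASS package; the column names are chosen for illustration and may differ from those in Tercen's repository.)

```r
library(MASS)    # ships the crabs dataset: 200 crabs, 5 morphological measurements
library(dplyr)
library(tidyr)

# Wide format: one row per crab, one column per measurement (FL, RW, CL, CW, BD).
# Long format: one row per (crab, measurement) pair, which is what gets projected.
crabs_long <- crabs %>%
  mutate(observation_id = row_number(),
         color = ifelse(sp == "B", "blue", "orange")) %>%  # sp encodes the colour form
  pivot_longer(cols = FL:BD,
               names_to  = "variable",
               values_to = "measurement")

head(crabs_long)
```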
For flow cytometry, we have an operator to import FCS files as well, or a zip of multiple FCS files. So this is, again, very flexible, because this is code that we've written; it's just a wrapper around existing functions. Here I'm just going to load the text file, the crabs dataset I downloaded. You see the columns: I have different variables and different measurements for these variables, then I have metadata for all these crabs, and an observation ID. It's a very simple dataset; I think the original aim was to compare different species of crabs. I upload it, and here it is.

Now I want to analyze it, so I create a new workflow, and I call it "my workflow", again to be original. Here I have a new page with a canvas, and this will contain my workflow. By workflow I mean that I will have different nodes, different bricks or blocks, that I connect in this interface. I can add data, I can add multiple data sources, I can join them, and I can run computation tasks. I can right click here and click "add", or there's a button here to add a step. We call these blocks steps, because they are the different steps of the workflow. First I add a table step: I find the tables I have loaded in my project, and I load the data.

Now I want to visualize, and maybe run a computation on, this data, so I add another step, called a data step. This is the core view of Tercen: the place where you interact with your data, where you query it, visualize it and run computations. This is what we call the cross-tab view, and we sometimes talk about projections, because we're going to project data onto this canvas. What you see on the left is the list of factors contained in my dataset, and I can see whether each variable is numeric or a factor. I can, for example, drag and drop the measurement factor, and you see that as I drag it, different drop targets are displayed: I can use it as a color, as a y-axis, as an x-axis, as a column or as a row. I'll put it on the y-axis, and now it's being displayed; it ran a small visualization task and displays the data. By default it sorts the values, because that's an efficient way of dealing with the data. It does not matter for crabs, but it matters for bigger datasets.

Now I want to do a bit more with my data and maybe combine factors. If I take the variable factor and put it as a row, for example: rows and columns stratify the projection and create what we call cells. You see that my observations are now stratified by my variable column. I can adjust the size. Let's say I now want to look at the color: I can put it in the columns, and it will stratify my data using the color factor from my input data, so it should create one column per color. Yes, you see blue and orange, and I have stratified my data. I also have graphical parameters here, and I can have multiple layers, which is useful if you want to add other measurements and compare them. I have points, lines, bars and so on, and we can add another axis. I can also put color in the color slot, and it colors according to the color factor again. And it's not just about visualizing data here: it's also a way to query the data.
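(As a rough mental model only: the cross-tab projection behaves a lot like a faceted plot in ggplot2, with the projection's rows and columns playing the role of facets. This sketch reuses the hypothetical crabs_long table from above; Tercen's visualization layer is not ggplot2, so this is an analogy, not its implementation.)

```r
library(ggplot2)

# Measurement on the y-axis, one facet row per variable, one facet column per
# colour form, points coloured by the same factor: roughly what the cross-tab
# view shows when variable is a row and color is a column.
ggplot(crabs_long,
       aes(x = observation_id, y = measurement, colour = color)) +
  geom_point(size = 0.5) +
  facet_grid(variable ~ color, scales = "free_y")
```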
So you have your data, and this cross-tab view, this input projection, is a way to query your data and prepare it in the form that will be read by the operator running the computation. It will be more explicit once I load an operator. Now I will run a computation on this data. Let's say I want to compute the mean per cell, because for each variable I'd like to compare the mean of the blue crabs and the orange crabs. So I add an operator. I have a few installed, and I have an app library, which is a set of curated operators that we've developed. If I search for "mean", I find a mean operator; I also have other ones, and we support Shiny apps as well. I'll just take the mean operator and install it. It's displayed here, and operators can have settings, though not the mean, unfortunately, because it's a very simple computation. If I click "run", it starts a computation task, and it's going to compute the mean per cell, because we decided in the operator code that it would compute the mean per cell. It could have been the mean per row or per column; we decide that when we develop the operator.

You could imagine doing a PCA on your data the same way: you have your variables, you have all your observations, and you have a PCA operator. You project the data the right way, it does the PCA and it outputs the results. So you really have to think of this view as a way to prepare and query your data, not just visualize it, and then run a computation that creates an output. Now it has finished, and if I go to the computed tables, I can see a column index and a row index, but the most important column is the mean we've computed per cell.

Here is where it gets interesting. If I go back to my workflow, I can rename this step "compute mean" and add a new step after it. You see my workflow being built, with links between steps, because we're making actual relations between all the data we compute. By default the new step displays what I just computed, so it displays the mean, and you can find all the previous factors plus the new one. The mean we've computed is already here; I can display it as a bar plot, for example, and I have the mean per cell, so per color and per variable. It made a relation to each cell: the mean has been computed per cell, so it knows, for example, that this value is the mean of variable BD for orange crabs. And if I now use another factor here, Tercen knows how to make the relation with all the factors that were not explicitly used in the computation. This is one aspect of Tercen that makes it powerful: imagine you have a big workflow, you cluster your cells using FlowSOM, and then you add a sample annotation. You would not have to redo FlowSOM; you just make a relation to this sample annotation, and you can cluster and play around with the data the way you want.

So this was a very brief introduction to Tercen's user interface. I'm just going to pause here and ask if there are any questions, in the room or online, about Tercen in general before we move on to the flow cytometry part. Yes? "This looks really cool. Can you build your own visualization functions and add them in here?
So if you had a custom visualization for a specific data type, could you add that to this workflow?" Yeah, that's a good question. I think I'll skip ahead to the last part, about operator development, and just show you what the code looks like. Here we have the mean operator, which is an R operator; we also have Python, and anything that runs in Docker. If I click here, I can see the source, and you see everything is version controlled, which is convenient if you want to understand why something looks wrong. If I look at the code, a lot of it is about continuous integration and code quality: managing package dependencies, unit tests, a lot of heavy machinery just to compute a mean, but it's good programming practice. The most important part is the part that is actually run. You see that we have a Tercen API, with two packages to interact with Tercen, and what runs every time is this: we get a Tercen context, and this context allows us to interact with the input projection. Then, and I can walk you through the syntax, from this context we select the y-axis, the column index and the row index, and then, for performance, we do a group-by and a mean per cell. (There's a sketch of this pattern a bit further down.)

So this is transparent, right? It's not a black box: we have the code, we can modify it, we can do anything. Basically anyone could create an operator. We have templates, and we have the app builder's guide that explains how to build your own apps; maybe I'll show it later. You can build anything: a more complex algorithm; an image output, like a biplot as a PNG file; a PDF; a table; or multiple tables with different relations to the input data. It's extremely flexible, which is one of its strengths, and maybe the learning curve is a bit steep for developers, but it's extremely powerful. And I mean, it's not that steep, right? You saw it's a few lines of code to compute the mean. Any other questions?

Okay, then let's see the other cool stuff now. I didn't want to start with this workflow, because I first wanted you to get familiar with the interface. This is a flow cytometry workflow, the CyTOF workflow. You can find it if you go to tercen.com/explore: there are public projects with examples, including, if you're familiar with flow cytometry algorithms, recently developed methods. So we have public examples, and there's a BioC2022 test-and-demo project here that you can browse. There are a few useful links: the reference for the dataset, the CyTOF workflow paper, and the workflow itself, the end-to-end CyTOF workflow. What we load here is a public dataset from Bodenmiller and collaborators: a set of FCS files uploaded as a zip.

Just so we're on the same page about flow cytometry: we measure the intensity of different markers in cells, so you basically have a big matrix of 10 to 20 markers by millions of observations, where each observation is a cell. What we want to do is cluster the cells, maybe annotate them with the cell type they correspond to, count them, look at the proportions, and see whether they are differentially abundant between conditions. That's the typical workflow in flow cytometry.
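(To make the operator pattern concrete: the core of the mean operator described a moment ago follows roughly this shape. This is a sketch based on the description above and on Tercen's public operator templates; .y, .ci and .ri stand for the projected y-axis values and the cell's column and row indices, and details may differ in the released operator.)

```r
library(tercen)   # Tercen's R API
library(dplyr)

# The context gives access to whatever the user projected in the cross-tab view.
ctx <- tercenCtx()

ctx %>%
  select(.y, .ci, .ri) %>%         # y-axis values plus column/row (cell) indices
  group_by(.ci, .ri) %>%           # one group per cell of the projection
  summarise(mean = mean(.y)) %>%   # the actual computation
  ctx$addNamespace() %>%           # prefix output columns with the operator namespace
  ctx$save()                       # send the result table back to Tercen
```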
So you see that we start with the data, and then we have a gather step to go from the wide format to the long format; this is just more convenient in Tercen. Then there are simple steps like the arcsinh, a common data transformation in flow cytometry. Here again we project the data: we have all the variables, which are the channels, the observations as columns, and the values, and we just compute the arcsinh of these values. Then we have a step to run FlowSOM, a popular clustering algorithm in the field. FlowSOM outputs cluster IDs, as you can see: we have cluster IDs, metacluster IDs and so on. I mentioned that we can also display graphs: if I look at the FlowSOM MST operator and go to its computed table, it actually outputs not another table but a graph. So you can display relevant plots if the Tercen interface is not enough; bar plots or anything, right? Some ggplot code that works with the input projection you defined, and that's it.

The next steps are more about interpreting the results. We have an enrichment score, and we order the clusters by it. By enrichment score I mean that, for each cluster estimated by FlowSOM, you can see which markers tend to have higher or lower values than expected. Here you see, for example, that we have basically two groups of clusters separated by the values of these markers: BC7, for example, tends to have high values in these clusters and low values in the others. So this helps interpretability, and then you can annotate your populations as well. These are operators that we've developed, but you could do anything, basically, as long as you have someone who can code.

And maybe, to show you a bit more about the interactivity and why it matters to create these relationships between data: here we run a UMAP, a dimension reduction method. Once the UMAP has run and we want to see the results, I can add a data step; let's say I remove everything here. You see that UMAP outputs two things, UMAP1 and UMAP2, the first two dimensions, and I have access to everything that has been computed before. So I can see the UMAP here; it looks a bit funky. But now I can color it with, let's say, CD3: I want to see the CD3 values on my UMAP. This is something that was not used for the UMAP computation itself, right? But the relations are there, created implicitly. When I computed the UMAP, I created a relation with each observation: the UMAP coordinates are associated to observations, and these observations are associated to marker values. So I can reuse them and explore them interactively.

An immunologist would love to do that. For me, it's CD-something and so on; they don't mean a lot, right? But I know that if I show this to immunologists, they would directly look for a relevant marker and say "oh, this cluster is this cell type", instantly. It looks like magic, but it isn't. And I can even do a bit more: for example, if I put the channel variable here as a row, I stratify my UMAP results per marker, and if I put the value of the marker as the color, I get a very relevant visualization for an immunologist.
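(For orientation, the numerical steps just described, namely the arcsinh transform, FlowSOM clustering, and a UMAP that you then color by a marker it never saw, look roughly like this in plain R/Bioconductor. File and channel names are hypothetical; this is a sketch of the underlying methods, not Tercen's operator code.)

```r
library(flowCore)   # Bioconductor: FCS file handling
library(FlowSOM)    # Bioconductor: SOM-based clustering
library(uwot)       # CRAN: UMAP

# Read one FCS file; the channel names below are hypothetical.
ff <- read.FCS("sample1.fcs", transformation = FALSE)
markers <- c("CD3", "CD4", "CD8", "CD20")

# arcsinh transform; cofactor 5 is the usual choice for CyTOF data.
exprs(ff)[, markers] <- asinh(exprs(ff)[, markers] / 5)

# FlowSOM: a self-organising map followed by metaclustering into nClus groups.
fsom <- FlowSOM(ff, colsToUse = markers, nClus = 10, seed = 42)
meta <- GetMetaclusters(fsom)   # one metacluster label per cell

# UMAP on the transformed markers; colouring by CD3 afterwards uses a column
# that took no part in the embedding, mirroring the implicit relations in Tercen.
emb <- umap(exprs(ff)[, markers], n_neighbors = 15)
plot(emb, col = as.integer(cut(exprs(ff)[, "CD3"], 8)),
     pch = 16, cex = 0.3, xlab = "UMAP1", ylab = "UMAP2")
```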
Here it takes a few seconds, and we're talking about millions of data points being displayed and handled, so we're quite happy with the performance of the visualization layer. And you see here, it's the same one as earlier: there are strong differences, with these clusters on the right having a higher expression of BC1 than the clusters on the left, and so on. So you can interact with your results and do more than just run the pipeline and get an output table and an output report. One last thing with this workflow: you can display the data differently. We have a bar plot operator, for example, and if you're not satisfied with the looks of the Tercen interface, you can use ggplot.

Before moving to the last part, are there any questions about the flow cytometry workflow? Yes? "I see that you have this all in one giant workflow. Is it also possible to split this up and have workflows that are connected to each other, so that maybe the output of one workflow feeds into the input of the next one?" Yeah, that's a very good question, because that was the next part. There are different types of steps I can add: there's an export functionality that creates a CSV file in my project, and I can reuse that in another workflow. But the other thing I wanted to show you: this is actually not Wes's data that I've used, because this is a workflow shared with all of you. If I go to the workflow actually done with Wes Wilson, you see that I can also have sub-workflows. If I open this one, it's my data preprocessing workflow, with an input and an output. I do some automated gating here, I add some views, and I do some QC steps with flowCut. If I double click, I can decide to display only a few relevant visualizations, for example the singlet gating here. Let's hope this one looks good. Yes, I can see the results of my automated gating. So there's a way to create sub-workflows like this, and a way to create independent workflows and export the results.

To share and reuse workflows, you can also create templates. For example, if I create a workflow, I have some existing templates here, which are basically version-controlled workflows. This one is designed to work with our FlowJo plugin, which may be of interest to some of you. I can load a workflow that I have templatized, and you can see my workflow: we've done sampling, FlowSOM and the enrichment score, t-SNE, UMAP and so on. I just need to plug in new data. That's one way of sharing workflows as well. Any other questions? Yep. "Is it possible to export the code, like getting a script of the analysis?" Ah. So, not at the moment, not directly. Each step is transparent, so you can easily go and check the code, and I imagine it would be an easy feature to implement, but it's not implemented yet. "Sorry, this thing takes a minute to warm up. I just wondered about it because you could build a really quite complicated flow, and it would be very convenient to structure it using the GUI, but the downfall of GUIs is that it's very hard to reproduce what you did without being in that exact GUI again.
And as time moves on and software changes, you can lose track of the work that you did in the past. But if you could export a script, a record of what you had done, then you have something more tangible that's probably more durable over time." Yeah, that would make sense. I imagine it would be like taking a snapshot of the workflow at a given time, because after you interact with it, or you update FlowSOM to the latest version, the results might differ for some reason. So yes, it would be a useful thing. "Can I add to that question?" Sure, we can hear you. "Sweet. So, the other Alex: I actually had that same issue, where I needed reproducibility in academia, and the other Alex has actually built me a feature that allows me to export my code directly from the run. So if you have a result at the end, you can now export every bit that was run before it." So you mean, sorry, that he's done it? "Yeah, the other Alex has implemented a feature that allows me to export the code from a run." Okay, so I guess Wes is with us. Hi, Wes. The other Alex is our CTO; he knows the product better than most of us, actually. So yeah, it's on its way to be shipped, then. All right.

Three minutes left, I think; I guess it's 2:12 my time. First time I run late giving a talk. I think I'm just going to wrap everything up. Just checking if there's anything else I wanted to show you. Yes: you can also do more statistical things, with actual p-values and so on. We've got operators for some of these and others, and if some are missing, we're happy to develop more, and happy to train people to develop more. As long as it can be written in code, we can wrap it.

"How do the immunologists get by, by themselves, here? From a bioinformatics perspective it is very easy to follow, and this pipeline is very cool, but how do you get biologists or immunologists to follow this path and operate it?" That makes sense. These workflows are intimidating, but we can hide them inside sub-workflows, like this one, and just surface the relevant metrics, and then train the immunologists: okay, look at the gating here; does it look good to you or not? Is it too stringent? If not, you can tweak this parameter. So that's one way of doing it: we can hide some of these things and templatize them, so that they have a given input dataset and a template that runs everything. We also have the concept of apps, which I don't have time to show, which is another layer on top of that: a workflow that is getting smarter, that asks the user, with a better interface and without having to project the data: what is your measurement? What is your channel factor? What is your grouping factor? What is your treatment variable? And then it's smart enough to pick up the answers and make the projection itself. So that's one way we can bring workflows to non-technical users; it depends on the team a lot. "Well, thank you."

Maybe I can just finish; it's 2:15. All right, I'll finish with the slides, and then I'll be around for a bit if you want to continue discussing or want to hear more. Back to the slideshow. So, operator development: we have something called Tercen Studio, which basically includes a local version of Tercen that you can run locally, together with an RStudio Server, so that you can develop locally and interact with the API.
So you have RStudio, you have your local Tercen, and you develop interactively: you load the data from your local Tercen, you develop your operator, then you push it to the hub and you can install it on Tercen. In terms of deployment, I won't go into the details, but in case some of you are interested: we use Docker and Kubernetes. It's again very flexible and a common framework, so it scales. It can be installed inside your organization if you don't want to upload your data to the cloud, it can be run locally as well, and we can connect to external storage, so you don't necessarily have to upload data; you can connect to an S3 bucket or anything.

Final slide: what we're trying to do with Tercen is to hit the sweet spot where everyone's happy and bring harmony to the lab. Data scientists can develop reusable R, and prototype and deploy new algorithms, on a flexible platform. The scientists get standardized workflows that are more or less easy to use, depending on how far you want to go in customization; everything is possible within a single platform, along with the ability to interact with the data. It's also a good thing for multi-omics: everyone's talking about it, though I'm not sure who's actually doing it. We had a good example in the keynote this morning, multi-omics with good results and that beautiful UMAP clustered by genotype. I dream of seeing that on Tercen, and it would be a good platform for it, because you can join annotations, join new datasets and easily interact with them again. And we try to make IT happy as well, by being easy to integrate into existing systems.

With that, I'd like to thank our collaborators from the University of Pennsylvania, Michael Milone, with Wes on the call, and the people at Tercen: Martin Mito, Lucas, Alexandre, Marro, and Faris. And of course the Bioconductor organizers and Bioconductor contributors, because as you can see, we rely heavily on open-source software. If you want to hear more about Tercen, you can contact us; you have all the information here. If you create an account, you can book a free session with us, so we can discuss how you could benefit from it, or we can learn from your workflows; we'd be happy to talk with you any time. So thank you all, and I'm happy to take any more questions. Wes actually had a quick comment to add about stakeholders?

"Yeah, so hopefully I'm not too loud in the room; I'll try to use my indoor voice. Basically, one of the things that we do with Tercen that's been really effective is meeting with stakeholders who don't have experience: clinicians, wet-lab scientists, PhD students, postdocs. Anybody can learn Bioconductor; we've all done it, we've all started somewhere. But as you guys know with data, if you do something kind of wrong at the beginning, in the pre-processing, cleaning the data, deciding what is real and what is not, getting rid of the noise, then your downstream analysis can be jacked up, right? So what Tercen allows me to do is take my existing pipelines, which are all Bioconductor packages, and a lot of them were Docker and Nextflow, but Tercen supports Docker and Nextflow, so you can take your existing pipelines as they are. And it allows two things to happen really, really well. One, I'm no longer CC'd in as the data custodian.
So if a postdoc gets data off the NovaSeq, or a clinical trial is producing a bunch of flow data, I don't have to worry about their data anymore, because now they can upload it themselves and start a pipeline I've already built, whose pre-processing we already trust. Then I can go in after those pre-processing steps are done, after the alignment has run and the count matrix has been generated from the FASTQs, and tweak and look at things. But what's great is all those mid-layer graphs along the way, because now I'm not getting an email every other morning saying: hey Wes, can you graph this gene against this gene? Oh, that gave me a new idea, now I want to look at this gene against that one. You're a bioinformatician: you want to do statistical analysis, you want to do cool algorithms, you want to find cool stuff. You don't want to be making graphs every day for somebody, right? This lets the stakeholders get in there, play with things and look at things, without bugging you for another version, or for extra cofactors on the heatmap. And so that has really allowed me to help a lot more people with their data without taking on tons and tons of extra housekeeping work. I'm a bioinformatician at UPenn, for those who don't know, at the CCI, the Center for Cellular Immunotherapies. All we do is clinical trials, and there are lots of different people doing lots of different things, and I kind of have to help them all, so it's really, really helpful for that. If you're in a bioinformatics core, that might be a different environment; maybe you don't need that. But if you are someone supporting a bunch of non-bioinformaticians, it's so useful."

So, thanks for the nice comment. Any other questions before we close? No? All right, then we're all ready for the closing ceremony. Thank you all.