I really want you to look back on this one. So I'm going to wrap up with: what next? We've talked about this a few times, sort of danced around it a bit. Unlike other technologies, flow cytometry is lagging behind in some respects. If you went out on the web today and tried to find some microarray data, you would find gobs. If you went to find sequencing data, you would find gobs. But if you went to find flow cytometry data, you wouldn't find so much — maybe 58 or so data sets that are publicly available. Why is that? Lots of different reasons. Why could it be important? There was a paper published late last year where somebody went out and mined all this microarray data — just pulled results out of the different data sets that were out there on the web — and actually found a perfectly selected drug target. There was a signal in all these different microarray experiments, but it was low on the list. It wasn't the top hit; it was five or six or twelve down on every list he was looking at, for one particular disease. But it was there in this experiment, and this experiment, and this experiment, and this experiment — all these publicly available data sets — and it was the only one common across all of them. People were missing it because they were looking at one study at a time. So he put all this data together, found something, and did some other cool stuff too: he didn't actually run any experiments himself — it was all computational. He didn't have a wet lab. He shipped the mouse-model work out to be done by somebody else — just said, make me a mouse model, do this, do that — and ended up with a really good drug target, a great paper, and hopefully a drug for a rare disease. But only because all this data was out there for microarrays. And you can't do that right now in flow cytometry.
Even though we've been around for thirty-some years. So why is it important to share? Because it lets people re-explore data sets and try new hypotheses. Now that we know how to do automated analysis, we can go out and mine that data and do extra experiments on it. You also have to share — it's required by many journals, and by funders. And because we're Canadian, and we think it's important to share. So if you're going to share data with somebody else, what do you want to share? Well, you can't just hand somebody a bunch of FCS files — we've talked about this before. This gets back to the point that annotating data is so important when you're doing high-throughput experiments. If you talk to Nikesh, who came from Garry Nolan's lab and wrote Cytobank, he will tell you the exact same thing: the number-one problem he has with people working in Cytobank and trying to deposit data from large studies is getting them to annotate their data so they can go back and analyze it later. This is going to be fundamental to anything you try to do with high-throughput studies, especially computational analysis. So if you want to annotate your data, what kinds of things do you have to record so you can go back and understand it later? A bunch of people got together — I was one of them — and came up with something called MIFlowCyt, the Minimum Information about a Flow Cytometry Experiment, following along with what was done for microarrays years before with MIAME. If you submit a paper to many journals, they will ask that it be annotated according to the MIAME standard; MIFlowCyt is the same kind of idea for flow cytometry. It's basically a consensus on what you have to describe if you're going to describe a flow cytometry experiment. We're not trying to tell you where to put that information.
We're not telling you how to encode it or what file format to put it in. It could be in the supplemental material, it could be in the methods, it could be in the results — just somewhere, you have to say certain things about that experiment. This is what people think you should write down — learned experts in the field. There are lots of people on that paper who got together and finally agreed on this stuff, and most of it just makes sense. If you're writing a paper, you want to describe what your variables are, when you did the experiment, who did the experiment, how you treated your samples, and where they came from if you were doing, say, a water-sample kind of experiment. You probably want to share the FCS data files, or at least a representative sample, so somebody can actually understand it. There can be problems when you look at a paper, see a result, and wonder: well, how did they get there? It's difficult to know unless people share that data. You probably also want to give at least the make and model of the instrument it was run on — you don't have to give a full description of the whole instrument, but at least that would be useful, somewhere in the description of your study. So how can you share your data? This is a little soapbox of mine. FlowRepository is the one way I know about; there's no other real public, easy way to do it. Cytobank is more for sharing data within groups — as far as I know, FlowRepository is the only existing way to easily share your data publicly. As a database, it was funded by the tri-society: ISAC, the International Society for Advancement of Cytometry; ESCCA, the European Society for Clinical Cell Analysis; and ICCS, the International Clinical Cytometry Society. They all support it and say this is a good idea. So what is it? It's a way to publicly share your flow cytometry data sets.
It's primarily for when you're doing a publication, but it doesn't have to be — maybe you just have a bunch of data you want to share. We created it by extending Cytobank, which is a fantastic platform for doing data analysis. Cytobank is free to use, and it's really great for sharing data within a group of people — not so great for sharing data with the outside world, and it doesn't do as good a job of annotation as we've done. So we took the Cytobank platform, which they gave us as open source, and we added the MIFlowCyt components. It's free in two senses. One is free as in beer: it doesn't cost anything to go grab all the source code, download it, and use it within your own lab. It's also free in the sense of freedom: you can do whatever you want with the code. It's hosted at Carnegie Mellon University. If you want to use it, you can go to flowrepository.org right now on your computer and browse all the data sets out there. But if you want to upload, you have to have an account — you can use a Google account or a Yahoo ID. There's a really good description in Current Protocols in Cytometry — a free publication on how to use it that goes through, in excruciating detail, the step-by-step process for doing different kinds of things. This is what it looks like when you get in there: you can browse public data sets, there's a start guide, and there's information down here on how to use it. There are really only a few simple steps: you create a new experiment, click to upload your data, and create some annotation templates. And then it gets kind of painful, because the first time sucks — there is some stuff you have to write down. This is why, when you're doing high-throughput experiments, this is the most difficult part: annotation is hard. It takes time, and it's not what you want to do.
You just want to get your sample in there, get it analyzed, and get your paper published. The problem comes when you try to trace back, six months later, what actually happened. This is what we see again and again when doing computational analysis of large data sets: the people doing the analysis on the flow machine are thinking about getting out for coffee and going home today — they're not thinking about what happens two months down the road, when somebody else is trying to analyze the data in an automated fashion. So taking the time to describe your samples and sample sources, your experimental variables, your instrument settings — it takes time, and it's a pain. We try to make it as simple as possible with Excel spreadsheets — once again, yay for Excel — because they're a really easy way to do lots of annotation. Copy, paste, and drop-downs in these annotation templates make it really easy to do a lot of annotation at once. You can also provide your analysis details; you can upload, for example, FlowJo workspaces or stuff from Diva. And then you get a score based on how well you've done. We try to be intelligent: if you upload a bunch of FCS files, we extract information out of the files to help populate the annotation. You get your MIFlowCyt score up here. The reason we do this is that people then want to get that score higher — although for some reason people stop once it gets over 50%; they feel like they've passed. We just made up how the score works, and people say, oh, it's fine, I got over 50. But if we didn't have it, people would just do minimal annotation — it's like having something there saying you kind of suck because you didn't get above 50. You can also de-identify data.
So if you're working with clinical data, before uploading, it strips away all the patient identifiers so they don't get uploaded — that's important both for HIPAA in the States and generally here in terms of IRB approval. We know how to do that because we wrote the FCS standard, and we know all the keywords that usually get used, based on dozens of instruments from lots of different vendors. So we know what's safe to remove versus what's not, and we strip out everything that doesn't belong there. Once stuff gets uploaded, you can see what's there: if it's checked off, it's there; if it's X'd, it's not, and you can click on it and hit "improve" to see what's actually missing. You can also do your data analysis there. This is based on stuff that comes out of Garry Nolan's lab — they use this a lot for CyTOF analysis. The SPADE part isn't in here, but everything else is. It's not FlowJo, but if you're not an expert in R — which you all are now — you can do things like manually gate your data. Then, once you want to share your data with everybody else, you can make it public, or you can share it for review: the reviewers get a secret code so that only they can view it until your paper gets published. If you work with the right publishers, the publisher will let us know; if not, you let us know, and we unlock that data for the rest of the world. So this is what it looks like today. We have 68 different data sets, I guess, from different publications, from people around the world. This is ordered by time, so you see most of the early stuff is from my lab when we were just starting out, but now it goes on and on with data from different labs. I think it's a really cool thing — and since it's my workshop, I can talk about it. Sharing is important and annotation is important, and this gives you one way to do both. The next thing that's important is visualization.
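Just to make that de-identification step concrete — this is only a toy sketch in R with flowCore, not FlowRepository's actual implementation, and the keyword names are invented examples rather than the curated list it really uses:

```r
library(flowCore)

# Toy sketch: blank out a few FCS keywords that might carry patient
# identifiers before a file leaves the lab. The 'risky' list below is
# made up for illustration; the real safe-list covers far more keywords.
deidentify <- function(in_file, out_file) {
  ff <- read.FCS(in_file)
  risky <- c("PATIENT ID", "SAMPLE ID", "EXPERIMENT NAME", "$SRC")
  kw <- keyword(ff)
  for (k in intersect(risky, names(kw))) {
    kw[[k]] <- "REDACTED"
  }
  keyword(ff) <- kw          # write the scrubbed keywords back
  write.FCS(ff, out_file)
}
```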
So this is what it looks like when you're analyzing lots of data with lots of markers. It's really hard — you can't do this by eye. This is from Bendall's paper doing CyTOF. One of the problems we have when doing computational analysis is: how do we see what we've just done on the computer? We get a lot of p-values; we're trying to visualize all this data. How do we do that? It's a bit of an unsolved problem right now that some people have worked on. Among the things people have done for looking at large data sets, SPADE is one approach; RchyOptimyx is another. I'm not really aware of any other ways to visualize large amounts of flow cytometry data — there are really only those two that I know of. Otherwise we're stuck looking at stuff like this, and that doesn't really scale well to the kinds of analysis you're going to be doing, now that you're experts in high-throughput data analysis. It's a bit of an unsolved problem, but you know at least one tool, and you can go look at SPADE in R. Now that you're R experts, you can go get the SPADE library and play with that as well. They also have a nice Cytoscape plugin that makes it a bit easier to use — but like I said, you're experts now, you're not afraid, and you can use it in R as well. I'm going to walk you through an example here that illustrates, now that you're experts, the kinds of things that can go wrong when you're doing data analysis. This is an example from my own lab and the problems we have with visualization — one example where we tried lots of different things.
This is to give you an idea that it's not always easy, and to show some of the solutions we played with along the way — without showing all the code behind it, just to get across that there are other options out there for the kinds of things we spent a lot of time doing in excruciating detail today. For this example, we're looking at MRD — minimal residual disease — in a type of leukemia. The hypothesis is that there's some difference between patients who are MRD-negative and patients who are MRD-positive: some kind of flow cytometry signature that can tease that out. So we did the typical things you usually do with flow cytometry data. We transformed the data using the Logicle transform. Then we normalized the data and did static gating after normalization, to see if there were gross differences between all the MRD-positive samples and all the MRD-negative samples. Then we used flowType and RchyOptimyx to try to find that difference: here's a bunch of samples in group one, here's a bunch of samples in group two, like we just talked about — is there some cell population in there that best explains the difference? We used the area under the curve to rank them, and we found something that was great: a cell population that really explained the difference between those two groups well. So we thought we were done. But the first question the clinician had was: well, what is that? They always want to see the dot plots in the end, right? So before we showed them the dot plots, we said, okay, we can do heat maps in R, like you've seen for microarray analysis. And here are these populations whose proportions differ strongly between MRD-negative and MRD-positive. So we were feeling pretty good at that point.
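That pipeline can be sketched in R roughly like this — a hedged outline only: the file names and marker indices are made up, and the flowType argument names should be checked against the vignette for your installed version:

```r
library(flowCore)
library(flowType)

# Step 1: Logicle-transform each sample (channels 3:8 are an arbitrary,
# made-up choice standing in for the stained markers).
lgcl  <- logicleTransform()
files <- c("mrd_pos_01.fcs", "mrd_neg_01.fcs")   # hypothetical file names
frames <- lapply(files, function(f) {
  ff <- read.FCS(f)
  transform(ff, transformList(colnames(ff)[3:8], lgcl))
})

# Step 2: enumerate every +/- combination of markers with flowType.
res <- lapply(frames, flowType, PropMarkers = 3:8, Methods = "kmeans")

# Step 3: per-sample proportion of each phenotype; ranking phenotypes by
# AUC (or a test) between the MRD+ and MRD- groups comes next.
props <- sapply(res, function(r) r@CellFreqs / max(r@CellFreqs))
```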
But you want to see those populations. You've seen today how you can make these kinds of plots in R — you can build the gating hierarchy, as you saw with flowDensity, and walk through it: in a typical sample, lots of cells here, not so many there; in other cases, more cells. We can show all the patients in one group and all the patients in the other group, and again and again you see this difference. So we're feeling good that the RchyOptimyx tree, where that big p-value showed up, produces gating that actually makes sense. This is always a good thing to check, because sometimes stuff doesn't work. You really want to do a lot of visualization and exploratory data analysis to make sure that when R and Bioconductor spit out a p-value, or a marker, or whatever, you check it with a completely separate kind of analysis — validate one method against another method — to make sure you haven't done something wrong. It's a really good idea to see if you can get to the same endpoint using multiple different ways. The first way we got to that endpoint was using flowType and RchyOptimyx to find a cell population that was highly significant. Okay, we found it using that tool. Now, can we use some other tool — automated "manual" gating — to get to that same population, and visualize by eye that there's this big difference in proportions between the two groups? It seemed to be the case. Then we looked at p-values, comparing the proportions in one group versus the other — they looked very different, and we were feeling really good. That was again using the flowDensity-style automated manual-gating-hierarchy technique.
But the problem was, we were ready to go show our clinician, Andrew Weng. And — we've had this problem several different times — he looks at that cell population and says: I don't know what that is. I can't make a story; there's no biology for that cell population. CD45-negative, CD3-negative, CD1-positive — what the fuck is that? I don't know. But it's highly significant, and it's reproducible. So at this point it's either good news or bad news. It's like, wow, nobody's ever found this before — why is that? [inaudible audience exchange] But it's real — we went through all the patients and we can see it in every one. There's actually a population there; there are dots. Okay, so we come back to the problem: what does it mean to be a population? That's one fundamental question we've danced around a lot. We can see a bunch of dots that are separated from another bunch of dots — and that's all the computer can do. Then you, as a biologist, have to decide: is that a population? And then the problem is: can I put a label on it that people can understand in terms of biology? Because we have this very strict hierarchy of cells — T cells, B cells, NK cells. Mario Roederer published a very interesting paper, maybe four or five years ago, showing that the more markers you put on a sample, the more cell populations you're going to find. It's kind of intuitive, but there's a lot of stuff in your blood, it seems, and we may not know all of it. That doesn't help us in this case, because we're not going to get a paper into Blood if we don't know what that population is. So maybe it's the gorilla walking across the hospital. Yeah, maybe it is. But you're kind of stuck at this point, right?
And this is one of the problems with automated analysis: how do you publish something that isn't traditional, that doesn't follow the strict hierarchy of manual gating that everybody understands? We've run into this more times than I would like, and it's a bit of a pain in the ass. So we had to find something else. We used another tool from another lab — nothing wrong with that — and decided maybe we could use SPADE. One of the problems we ran into with SPADE is that you get a different minimum spanning tree every time you run it, which can make it difficult to compare lots of different samples. One way around that is to build a flow cytometry file that is representative of a whole group, based on sampling — you take a pooled frame. You now know how to write a pooled frame, so you can make a pooled frame of all your MRD-positive samples. We made a pooled frame of all our MRD-negative samples, and you use the spanning tree from the pool as the backbone to build the trees for each individual MRD-positive and MRD-negative sample. And the cool thing was — so this is our pooled sample, and we looked at the MRD-negative samples in the normal tissue. All the MRD-negatives look kind of the same. If you really want to, you then have to go down and actually figure out what kinds of cell populations these are. But we looked through all the samples: all our MRD-negative and all our MRD-positive samples looked kind of the same in the normal tissue, which is what you'd expect — normal tissue looks like normal tissue, whether the patient is MRD-positive or MRD-negative. What is normal tissue? Basically, we pull out the non-tumor part of the sample from the flow cytometry data file. Then we looked at the tumor part of the flow cytometry data file, and what we found is that all the samples look different in the tumor tissue.
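That pooled-frame trick is a few lines of flowCore; this is a minimal sketch with hypothetical file names, and a real version would first check that the channels match across files:

```r
library(flowCore)

# Subsample the same number of events from every file in a group and
# stack them into one flowFrame, so SPADE sees a single representative
# "sample" per group and builds one spanning tree from it.
pool_frames <- function(files, n_per_file = 5000) {
  mats <- lapply(files, function(f) {
    m <- exprs(read.FCS(f))
    m[sample(nrow(m), min(n_per_file, nrow(m))), , drop = FALSE]
  })
  flowFrame(do.call(rbind, mats))
}

pooled_neg <- pool_frames(c("mrd_neg_01.fcs", "mrd_neg_02.fcs"))
pooled_pos <- pool_frames(c("mrd_pos_01.fcs", "mrd_pos_02.fcs"))
```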
So what that told us was: even though we found something that seemed to be a signature when we did the RchyOptimyx analysis — it said this one population is statistically different between the two groups, and it is in there — grossly, there is very high heterogeneity in the tumor tissue across MRD patients. Even though we can find one little thing, there are lots and lots of differences, but nothing really significant that's shared among all the MRD-positive versus MRD-negative samples. So it's really hard to find that one signal. But SPADE, which we haven't talked about much today, is a really great way to get this big overview of all your samples at once, and we can see that all the samples look different. So this was kind of a win for us. This was between this layout, the arch layout, and the Kamada-Kawai one. Sorry, you said? It's the same spanning tree in both — this one is the arch; the other is a different layout called Kamada-Kawai, K-A-W-A-I. Is that whether it's rooted or not rooted? No, I guess it's just a different layout: this one is the necklace shape, a straight line across; the other one is the tree shape. The advice: annotation is important, and visualization is hard and tricky. Sometimes you have to think of different ways to visualize data, and that can be really hard — and it's something for you guys, now that you're all experts in R, to code up new methods that will help us all out. The third problem is ease of use. It's easier for you to use R now than it was, but we're not at the stage we're at with FlowJo, where you can just point and click. And that's a real problem — that's why we have to have this workshop.
But now there are, I think, maybe 200% more people in the world using R for flow cytometry than there were two days ago. It's a really rare talent you guys have; there are not that many people doing this. You're actually the first certified people ever to do this. I can count on my fingers and toes the people in the world doing computational analysis of flow cytometry data. You saw the people doing FlowCAP — most of those are developing the tools; they're not actually using them for any real purpose beyond one-off kinds of studies. But you're obviously here, so you see the need — it's just not easy. R is a really good way in, and one of the reasons you're seeing all these tools in R is that it's a really good way to get something up and running quickly, to prototype that something works, because it has all this statistics and math infrastructure in there. You can test things out and do a lot of statistical development without hammering away at all the basic stuff — which is why you're seeing all these tools developed there. It's not really user-friendly for normal biologists, though. But one thing you can do, if you develop a tool, is put it in front of new users in a point-and-click fashion through something called GenePattern, developed at the Broad Institute of MIT and Harvard. It's widely used for microarray analysis, proteomics, and other kinds of tools, and has about 10,000 users worldwide. You can make these big pipelines, like I've shown you a couple of times, that step through a data analysis. And basically, you can take an R package and put it into GenePattern in a couple of hours. Once you have your code written, they have all the infrastructure built: you take your R code, define a couple of ins and outs, and put it up on the web for everybody to use.
Really fantastic. It depends who's doing it, but I've had no problem with all the ones I've done. And it makes things easy, because now you have a point-and-click interface. The problem is that it's canned now: you don't have as much opportunity to do all the flexible things that you've learned sometimes need to be done in an analysis. But we've done some work to make it as easy as possible. We've put a lot of the tools you used today in there — for quality assessment, for gating and clustering. There are some options in drop-down lists, and some ways to put things together. It's not great, but it's a good way for some people to get started. One thing I'm really excited about is a project we're working on that's not widely available yet — we're just finishing it up. It's called OpenCyto. It's basically an R infrastructure that lets you plug in gating without having to remember all the parameterization and where to put the brackets — double brackets or single brackets, square or two squares. It takes away a lot of the R coding while still keeping everything available to you within R. You can do things like make a gating template in Excel — once again, Excel. You give it, for example, parameters for the thing you're looking for: what the parent population is, what dimensions you want to look at, what method you want to use to cluster, and then maybe an option. Put that in an Excel spreadsheet. And this will look very easy to you, because now you know R: you read in your template, you point at where your data set is, and you just run it — it uses the specified method to gate the data, and you don't have to remember all the parameters. So I'm excited about that; hopefully it'll make things easier. flowWorkspace — we didn't talk about that today.
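Before moving on — the OpenCyto template idea just described might look roughly like this. This is a sketch following the openCyto gatingTemplate conventions; the CSV rows, file paths, and channel names are all invented for illustration:

```r
library(flowCore)
library(flowWorkspace)
library(openCyto)

# template.csv (made-up rows) describes each gate: the population to find,
# its parent, the dimensions to gate on, and the gating method, e.g.:
#   alias,pop,parent,dims,gating_method,gating_args
#   lymph,+,root,FSC-A/SSC-A,flowClust,K=2
#   cd3pos,+,lymph,CD3,mindensity,
gt <- gatingTemplate("template.csv")            # read in the template
gs <- GatingSet(read.flowSet(path = "data/"))   # point at your data set
gating(gt, gs)                                  # run every gate in it
```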
But if you're using FlowJo, you're going to want to use this tool to look at what you already have in FlowJo, from R. It's really fantastic, and it's probably going to make your job a lot easier down the road to have that interface and go back and forth between FlowJo and R. Also, from FlowJo today, you can call out to R and send stuff back and forth. I think it's only available in the Enterprise edition of version 10, but the company is working very hard to work with R. I don't know why they're doing it — it's kind of crazy, isn't it? No — because the stuff in R is awesome, right? But R is not an easy way to visualize, and I think they see the value in having a tool for visualization along with a powerful R compute engine. So they have a way to talk back and forth, and I think that's going to be a very exciting development — and now you guys can take advantage of it. One of the problems we have as well is that algorithms don't know what a T cell is. You can do all this fancy gating, and it just spits out a bunch of immunophenotypes. But we know those immunophenotypes live in some hierarchy: starting, for example, from all lymphocytes, you have T cells; within the T-cell subsets, you have gamma-delta T cells, alpha-beta T cells, mature T cells; below mature T cells, you can have things like regulatory T cells. One of the things I'm working on right now is trying to get these computer programs that spit out clusters to have a better "understanding" — I'm making little bunny-ear fingers here — or at least to put better labels on the cell populations they spit out. We should be able to interface with this ontology, hopefully within the next couple of months, and get more meaningful labels on the populations you get from clustering. So what's next for you? You can take more courses. Courses are fun — go off for a couple of days.
There are courses that the Bioconductor people run on how to do Bioconductor programming, in great places around the world — you can go to Spain; they have them every year. They're advertised on the Bioconductor website. Now that you have some understanding, it's a really fantastic way to learn more about programming in R. You know the basics now — for loops and data structures and things like that — so get more familiar with them. And now that you know at least where the flow packages live and what they are, you want to be able to put those pieces together. There's lots of material on the Bioconductor website that complements what you've seen today: introductions to flow cytometry, talks and slides about how to do gating, and so on. I suggest you go look at those on your own time; they're all freely available, along with the code and the data they run on. There are mailing lists, too. You can go to the Bioconductor mailing list — people actually do use R in real life to do flow cytometry analysis. This count is a couple of months old, from when I did the last search, but there were at least 184 posts on the list. Do check the old posts first to see if somebody has already answered your question. If not — I think every post has been answered, usually by somebody who wrote the package. So if you get stuck, let people know, and people will be happy to answer. There are even people answering who haven't written the packages, so I know there are a lot of people watching the list who are using R. They're not publishing much with it yet, but it's becoming more and more common. Let me make one comment about the mailing list: it's advisable for two reasons. If you email just one person directly, you will get lost in their inbox. But if you email the group of people who are all using that common resource, someone will answer your question.
And the person answering your question gets credit for answering it in the larger worldwide community. So it's a win for you and a win for the person answering. Everyone's happy. I saw somebody post a question about flowQ, and somebody else who was using flowQ answered it. I was like, oh my god — someone's actually using this package, and answering questions about it for other people. It was kind of neat to see. People want to help other people; I guess that's why we're here. At least I am. So, the take-home message. Hopefully you've learned — and I really fundamentally believe we're at this stage now with flow cytometry informatics — that you can pour data in and get discovery and diagnosis out. The tools we have are complete. We can mine high-throughput data. We can find correlations, find things that are being missed by manual analysis, and get informative, descriptive results. It's interoperable between different platforms. Things will still take time, but it's CPU time, not your time — which means you can do fun stuff. I like snowboarding and biking; you can do some more science. The stuff we have is based mostly on statistics. There's math behind everything we do in R — it's just a formal language — although where exactly to draw the line sometimes, as in flowDensity, gets a bit iffy. But once you tell us how to do it, we use statistics to make sure the line ends up in the same spot every time — two standard deviations, say, or the 85th percentile; that's the math. It's reproducible — or most of the tools are. You don't necessarily always get the same answer twice, depending on the tool: k-means, for example, won't give you exactly the same answer the second time unless you do some forcing, like fixing the random seed. But at least you know what happened, and because you have all the code, you can trace through what went wrong. Garbage in, garbage out.
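The point above about statistics picking where the line goes — two standard deviations, or the 85th percentile — fits in a few lines of base R. This is a toy sketch on simulated data, not a real gating function:

```r
# Toy one-dimensional "gate": place the threshold at mean + 2 SD, or at
# the 85th percentile, of a channel's (transformed) intensities.
set.seed(42)   # fixing the seed is the "forcing" that makes stochastic steps repeat
x <- c(rnorm(900, 1, 0.5), rnorm(100, 4, 0.5))  # simulated negative + positive events

cut_sd <- mean(x) + 2 * sd(x)     # two-standard-deviation threshold
cut_q  <- quantile(x, 0.85)       # 85th-percentile threshold
positive <- x > cut_sd            # events falling above the gate
```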
You know, if your compensation's not right, everything else is going to fail down the road. If your gates are off, everything else is going to fail down the road. It's not easy for biologists to use — but you guys are no longer biologists; you're computer scientists now. And collaborate: you're not alone. You now have a buddy beside you, and five or six other people beside you, who will now be able to answer your questions. I guess after this there's going to be an open mailing list — the meeting's list will live on. And we'd be happy — Radina would be happy, and I would be happy too — to answer questions you have if you get stuck. All the material is available, and I think the data sets are up there too — right, Radina? Yes. So after you go home, you're not alone anymore; lots of people are involved. And anybody in the flow cytometry community — this is one of the things I've found really exciting — they're all nice people. Anybody who's developed a package: Raphael and his group are really wonderful. Greg Finak is a great resource — he's a research associate in Raphael's lab, a really great guy. Josef from my group, if you have any questions about flow cytometry, is really good to know. Nikesh's group are fantastic people. Everyone I've run into has been wonderful in their approachability and their willingness to help others use their tools. There's not a bad one in the lot. So don't be afraid to reach out for help. And with that: thank you to Ryan and Radina for pulling together what I think has been a really great offering of this workshop. Thank you all for coming. I need surveys — one's been submitted, so I need some more. And everything's going to stay posted on the wiki; you have access to the wiki until this time next year, when I update it.
Everything has been recorded, and I will render those files in July and post them. By September they should be up on the web so you can review the material. But if you need something sooner, let me know — it takes about half a day to render one file, so it's not a short process, but I will get it up there for you. If there are any questions, let me know. Yes?