All right, I hit record. So I guess today we're going to hear about image analysis, some pain points and lessons learned, from Yi Sun and Jean-Karim Hériché. Apologies if I messed up your names. Screen sharing is enabled, so do you want to just take it away?

Yeah, so Yi has a few points on slides, but just to give some context: we started looking for solutions to provide access to image analysis workflows on HPC and also in cloud environments, as part of this European project called EOSC-Life, the European Open Science Cloud for the Life Sciences. As part of the project we want to unify some of the ways we run computational biology tools and workflows, and since many people were already using Galaxy, that meant we wanted to bring image analysis to Galaxy. So Yi, if you have some slides.

Yes, I'm trying to share my screen. Can everybody see it? Okay. I'll keep my talk short so we have more time for discussion. As JK just mentioned, here's some background on the EOSC-Life project and what we're trying to do. Basically, we're trying to move software tools and workflows into the cloud. We represent one of the research infrastructures within the EOSC-Life project, Euro-BioImaging, so for us that means moving all the imaging tools and imaging workflows into Galaxy, since Galaxy is the technology platform selected for us. A little bit about what we're currently doing and what we've already done: we've had some good and not-so-good experiences with Galaxy, especially from a tool developer's point of view, while trying to wrap CellProfiler and Cellpose. We made CellProfiler version 3 available on the Galaxy EU public instance, and we're currently working on making version 4 available. We have some problems, which I'll explain in a little more detail in the next slides.
We had a similar experience with Cellpose. We spent some time trying to create a wrapper for Cellpose and make it available, and we didn't manage it, so instead we created a container and put it in BioContainers. The BioContainer is already in the repository, but the Cellpose Galaxy PR remains unresolved. I'll use these two pieces of software to explain our experiences. First, CellProfiler version 4, because versions 3 and 4 are quite different. CellProfiler itself has more than 19 modules, and for version 3 we made about 19 modules available in Galaxy, so some workflows may not be compatible between versions. The first issue with the CellProfiler version 4 tools is that, based on feedback on the PR, there are some Galaxy security issues preventing us from creating the tool. We tested everything locally and got green lights for everything, but when we planned to deploy it we got errors from the CI tests, and currently we're not sure how to resolve them. The next pain point is about channel restrictions. On the technical side, it's up to the admin to enable the different conda channels. When we were creating the Cellpose conda recipe, the recipe depended on another package, so we had to create the dependency first. Because that dependency is not in a Galaxy-approved channel, but is already available somewhere else, what we basically did was copy it over into a Galaxy-allowed channel, add some metadata, and claim ourselves as maintainers. And the third issue: we work a lot with microscope images, and some of the formats just cannot be viewed within Galaxy.
Our first two questions are quite similar: is it possible to specify conda channels in the tool's XML wrapper instead of the admin doing it? Because if the admin doesn't allow a channel, we basically end up doing a lot of copying and pasting, but if this could be specified by the developers it would be a lot more useful. For example, for PyTorch and NVIDIA, the recipes in those channels are quite reliable, but because Galaxy EU, for instance, doesn't allow them, we need to move the packages. The third and fourth questions are related: in cases like CellProfiler version 4, where we cannot proceed any more, should we just switch to containers? Not every HPC supports Docker containers, but if we go that way, how do we build a tool around a Docker container, given that it's not recommended best practice for Galaxy? And if we create tools around containers, are we allowed to install anything in the container? Another question: because Galaxy admins restrict the channels, if something is not in an allowed channel, we can work around that by creating a container, installing conda inside the container, and using whatever channel we want. Is this the right way to go? And then: if containers are allowed in Galaxy, are there container registries we must use? Ours is on Docker Hub or Quay.io, but what about Singularity? Where do we deposit our container recipes if we have a Singularity one? Then there's the question of whether to keep debugging or go with a container, as I already explained: when we hit a problem like that, which way should we go? And the last one is about dependencies, mainly conda recipes.
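To make the "tool around a container" question concrete: Galaxy tool wrappers can declare an explicit container in their requirements block. The following is only a sketch of that mechanism; the tool id, version, image tag, and command line are illustrative assumptions, not an existing wrapper.

```xml
<!-- Sketch only: ids, versions, and the image tag are illustrative. -->
<tool id="cellpose" name="Cellpose" version="0.1.0">
    <requirements>
        <!-- used when the admin enables Docker or Singularity resolution -->
        <container type="docker">quay.io/biocontainers/cellpose:0.6.5--pyhdfd78af_0</container>
    </requirements>
    <command><![CDATA[
        cellpose --dir '.' --pretrained_model nuclei
    ]]></command>
</tool>
```

With a container declared this way, the image can be hosted on any registry the instance can reach; channel policy stops being the wrapper's problem.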
As I understand it, we are supposed to pin software versions in conda recipes as little as possible, but we found it more useful to pin every piece of software to a specific version so we can always recreate the same environment. One example is again CellProfiler: when we created the CellProfiler recipe it was still version 3, and later the Bioconda bot automatically updated it to version 4. Now, if you create something based on the version 4 recipe, it won't work, because one of the dependencies, NumPy, has moved on: it picks up the latest version, but I think CellProfiler 4 only works with the older one. So this pinning business seems quite complicated to get right. This is the last slide: based on these questions, here's what we think would be nice to have, or maybe something the Galaxy developers could implement by default, so that as tool developers we don't have to worry so much. The first one is the channel issue. The second: I understand Docker is not the recommended way, but there will still be cases where we need to create a tool around Docker, so could we have more examples? Right now, when we want an example, we go to the development documentation or to GitHub, search for a keyword, find a recipe, and try to replicate it. The third one is S3 buckets: right now, if you want to run something, you basically need to upload the data to the server. Is there a way to access an S3 bucket directly from Galaxy without writing the files to Galaxy's disk? Then the default container registries, as I explained on the previous slide: which of the Docker Hub or Singularity repositories should we use? And the last thing is about viewing images.
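The fully pinned approach described here can be captured in a conda environment file. The versions below are illustrative, not a tested combination; the NumPy pin stands in for the kind of breakage just described.

```yaml
# Illustrative pins only; every package gets an exact version so the
# environment can be recreated identically later.
name: cellprofiler3
channels:
  - conda-forge
  - bioconda
dependencies:
  - python=3.7
  - cellprofiler=3.1.9
  - numpy=1.16.4   # newer releases break this CellProfiler version
```

The trade-off the discussion keeps circling is exactly this: exact pins give reproducibility, while unpinned recipes let the solver stay flexible across channels.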
As we understand it, each image viewer is associated with certain file formats. So what about a viewer that does not depend on the file format, for example a viewer for all the images supported by Bio-Formats? Yeah, that's all from me. I guess we could start with this question.

Yeah, let me just add a few more details. Many of these tools, like Cellpose, for those who don't know, are actually deep-learning based, and a number of the ones we're working with are like this, or even a bit more complicated, so the dependency and version problems are more pronounced with that type of tool. That's one thing. And we're having a discussion with Jeremy in the chat about image viewing. What our users want is to click on the small eye icon in the history and view the image directly, not have to open an application separately for it, like going to an interactive environment or plugging in some image viewer. As we understood it, one of the issues is that for images we need to associate each image format with a datatype, which would mean creating 250 wrappers for the 250 image file formats we're using. We were thinking there might be a way to have something that reads at least the most common image file formats without having to specify each one as a separate datatype, maybe by having Bio-Formats as an intermediate, or something else. I don't know about Vitessce, but we're definitely going to have a look. Some of the images we're looking at are highly multidimensional, and I don't know much about Vitessce, so I don't know if it will work on, let's say, 5D images. But basically, we have all these questions, and we're kind of new to this.
So maybe we've framed things wrong or misunderstood something; feel free to enlighten us.

Yeah, thanks for the presentation, I think there were some good points in there. I wanted to point out, and sorry if I missed this, that there are channels where you can ask development-related questions. I had a look at the pull request, and the problem it has is trivial to fix, but if we don't see it, we can't do anything about it.

I mean, we've been talking to Björn Grüning quite a bit.

But that's just one person, and there's a much larger community.

Yeah, and occasionally on the channel too, but the answers to some of these issues were basically not very satisfactory to us. Yi has been mostly dealing with this, but I think, for example, the main issue of all is building these tools with this dependency hell, where we would need to rebuild some conda packages for dependencies instead of reusing another channel that already has them.

That is a good point. Personally, I completely agree. And, you know, it was a good idea not to pin conda packages and to restrict the set of channels, because if you go with unpinned versions of top-level dependencies over five or six channels, you'll never be able to install anything. And this is where containers shine: Singularity support is more common on HPCs these days, and Singularity can just take Docker images, right? So that's a good point, and maybe we need to elevate the status of containers a bit more. In the chat, I linked how you specify a container dependency in a tool.
I think, you know, we will not take this up in the IUC channel, for instance, because we aim for anyone to be able to install those tools, even if they don't have Singularity or Docker available. But it should be entirely feasible for you to host your own tools and decide that installing them requires Docker. I think that's a perfectly reasonable way forward.

Yeah, so that's basically the question: we want to release things, but not necessarily always as a conda package, especially when that takes a lot of time. On our side, these tools are wrapped, or going to be mostly wrapped, by people who do image analysis as a service. Their primary role is not to wrap those things but to help users run them, so if it takes too long, they give up. For example, Cellpose is running perfectly fine on our own Galaxy instance, but it's not in a state where we can release it on the Tool Shed. That's basically the issue in a nutshell. So if we could release our container, the one that works for us, somewhere in one of the official Galaxy places, like the Tool Shed, I think that would also help.

I really don't think there's a problem if you just want to publish tools that depend on containers. As I said, in the main channels we're not going to do this, but we also want to have a thriving ecosystem, and I agree it's silly to work on dependencies for tools that maybe you didn't even develop.

Yeah, completely agree.

So there's no technical limitation, right? You can just annotate the tool with the container, and you don't need to worry about the repository: the container can come from basically anywhere.
Then publish it to the Tool Shed yourself, and it should just work for people who have Docker or Singularity enabled.

Okay, so maybe it's a bit of a misunderstanding on our part, because we were told it should be conda packages.

Yeah, because you attempted to get it into Björn's repository, right? And he wants his tools to have conda dependencies. But that's not a framework limitation; it's just the kind of tools he wants to have in that repository. It's a social thing, a best-practices thing.

So for the Singularity images, or also the Docker images, is there a recommended registry to put them in at the moment?

If you are going to reference explicit containers, no, you can put them anywhere you want. If you put one on Quay.io, you need to include the quay.io prefix, just like you would with docker run.

I'm asking because with conda, I guess it's quite safe to put dependencies there and anyone can install them, even, say, five years from now. For the images, if they go away because we hosted them on our own registry, this might become a real issue.

Well, we do run backups of all the BioContainers, so that's one advantage: if you submit your container to BioContainers, we have backups.

And what about CVMFS? Because you have a lot of those BioContainers there, right?

Yeah, but you cannot submit something directly there, I guess, because it's not a proper registry, or is it?

That all happens via the BioContainers project, and they have multiple repositories. One starts out with conda dependencies, as they are in the best-practice channels.
That won't really work for the case where you want to use packages from a different channel, but BioContainers has other initiatives where you can just submit Dockerfiles, for instance. I'll link it here in the chat as well. This is also the big advantage for us: if we say, please use conda packages from conda-forge, the defaults channel, or Bioconda, then we generate the multi-package container, and we can say, as long as the tool is there, you can install it via conda, or you can use it as a Docker or Singularity container. But if you only create the Docker image, that leaves out the portion of administrators who can't run containers. That's the one limitation that's inherent if you go with containers.

I also forgot, there was one point, I think it was on the slide, about Anaconda changing the terms of service for the defaults channel. It's probably not a problem for many of you because you work at universities, but we've had issues in the past where, for example, EMBL wasn't recognized as an educational institution. And some partners of the EOSC-Life project are so-called research infrastructures, so they are definitely not educational institutions, but they provide value-added services, and some of them would be using these things. So it's unclear to us now; it's been pointed out to me that maybe we shouldn't be using the Anaconda defaults channel.

We're basically not, right? If we fetch a package from the defaults channel, it's usually an indication something went wrong. Most things are in Bioconda or conda-forge. Yeah.
But I think defaults has been used by default in a number of places, probably including some of the ways we've put together our conda recipes. So do you think we should remove it?

I'm not familiar with the license-restriction or terms-of-service changes, so I can't tell you. What I can tell you is that this shouldn't be a problem for the other conda channels. It's just an HTTP server; anyone can run a conda channel if they like.

Yeah, the other channels are fine. As I understand it, the problem is the defaults one. And I think there are a number of packages there that we use as dependencies because we didn't want to port them to other channels.

Yeah, that's why we recommend installing things from conda-forge. Also, at least for Bioconda, we should only be accepting free and open-source software licenses. There are some exceptions, but that's another thing: if an administrator activates just these channels on their server, there's a pretty good chance they won't be committing accidental license infringements.

Another thing I want to ask, about what we did for the Cellpose BioContainer: we built a container where we installed conda inside the container, so that we could use any channel in the container. How do we avoid accidentally using channels we're not allowed to use? The container is already in the BioContainers repository, but we used third-party channels inside it, which is not allowed on some Galaxy instances.

I didn't understand the question.

For Cellpose, we first tried to create a conda recipe, but it depends on another package which is not in conda-forge or Bioconda, so we created a container, and inside the container we installed conda using the third-party channel.
The container has the dependencies in it, and we deposited it in BioContainers. So we still use a channel the admin hasn't allowed, but inside the container, and we can build a tool based on this container. Basically, we work around the channel restrictions.

That should be fine, as long as you don't put anything in there that isn't free and open source. And that's BioContainers' problem. But it could limit the willingness of certain servers to make that tool available if it's not available via Bioconda.

Yeah, that's basically why we're discussing these container things: because we have trouble releasing things in conda.

Yeah, presumably your end goal might be to get it on usegalaxy.eu or usegalaxy.org, for example, I would think.

Yeah, in particular for teaching purposes. But we also know of a few other instances, including the one at EMBL, which currently doesn't actually take containers. We have a workaround there, because the person managing it is here. But otherwise, yes.

I'm a little worried about what change has happened to the defaults channel as far as licensing is concerned. I haven't heard anything about this yet, so I don't know.

It happened about a year ago, actually. Let me see if I can find the link.

Yeah, if you have a link, that would be great. If Marius didn't know and I didn't know, I'm sure there are a lot of people in the same boat.

Yeah, I only heard about it recently myself. The reason we're so strict is that we want everything to be free for everyone: commercial use just the same as academic and nonprofit. If there's some change to conda that makes it incompatible with that, we'll need to really investigate, I think.

I'll keep looking for the discussion. I'll see if I find it quickly; otherwise I'll email it.
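The workaround just described, installing conda inside the image so channel policy is decided at build time rather than by the Galaxy admin, might look roughly like this. The base image, channels, and packages are assumptions for illustration, not the actual Cellpose BioContainer recipe.

```dockerfile
# Sketch only: the point is that the third-party channel lives inside
# the image, so the Galaxy admin's channel allow-list never sees it.
FROM continuumio/miniconda3:latest
RUN conda install -y -c pytorch -c conda-forge pytorch && \
    pip install cellpose && \
    conda clean --all --yes
```

A tool wrapper can then point at the published image, and the channel question disappears for anyone whose instance allows containers.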
What about the Anaconda distribution itself? We're not using that at all.

Okay, no, but there was something that involved the defaults channel as part of it.

Yes, the Anaconda distribution packages a lot of things from the defaults channel.

Okay. I mean, I'm not really deep into these things; it's something I was pointed to recently, and someone said it could be a problem. So I thought maybe you already knew about it and had an opinion or a way of dealing with it, but maybe it's not actually a problem.

I think these are the terms of service, and I think it comes down to interpretation: basically their definition of commercial activities, and whether you count as a nonprofit research institution.

It's just that in other, very similar cases, we've had issues where we were not considered educational. And for the European research infrastructures, it's also unclear where they fall on that spectrum.

Yeah, anyway, this is definitely something to look into more deeply. But there is a difference between Anaconda, conda, and then, for example, Miniconda, which is a distribution that contains conda and allows you to use the packages, right?

Yes, but someone said that the defaults channel is somehow Anaconda's, and so is covered by this. I thought maybe someone here could enlighten me, because you might have been around this more than me, but it's fine if nobody knows.

I mean, no one here is a lawyer, but if you look for this, you'll also find some threads where the CEO of Anaconda responds and says, hey, this only applies to the Anaconda distribution, which is not what we're using. Anyway, this is a different topic; I don't think it's really on-mission here. Yeah.
So, I have to go at seven local time anyway, but before we run out of time I want to discuss the image viewer a little. To clarify, what we're after, or what our users are asking for, is that when you click the eye icon in the history, you should be able to see the corresponding image: it should give you access to some image viewer. The way we've looked at it, as I understand it, is that you need to define a datatype based on the file format. So if you want to open a TIFF image, you need to associate an image viewer with TIFF; if you want to open, I don't know, an HDF5-type image, you need to define that as a datatype too. Is that correct? Or can we have a generic image datatype, and then a viewer that opens those without having to define many types?

Does anyone want to take that question?

What Galaxy does with the eye icon is mostly display what a browser can display. So TIFF files should be fine. SVGs are not displayed because they could be malicious, but you can turn that on and off.

TIFFs are not okay. Unless there has been a recent change, browsers don't know how to open TIFF files by default. PNGs and JPEGs are okay, but at least when we tried, we couldn't open TIFF files, and then there are many other image file formats. So the idea was to have a generic image datatype that would be read by Bio-Formats, which is basically a universal converter, and then displayed in a generic image viewer that we could pull in from various places. That would be based on JavaScript, basically.

Yes, I think that's a good idea, and probably the direction we're heading in, but currently we're sort of wrapping up the work on the new history panel.
And I think then it should be relatively straightforward to take the visualizations out of the special menu and also provide them via the eye icon, so you can access visualizations that way. I think that's maybe what would help you.

Well, definitely. But what's not entirely clear to me is how it works, basically, because as I understand it, you have to define a datatype to associate with a viewer.

So is this like an external website that takes a URL and then displays the images?

No, we were thinking of something integrated in Galaxy: you click on the icon and, if the data is an image, it gets viewed in a JavaScript viewer, and the format shouldn't matter because it would be dealt with transparently underneath.

Yeah, so you could pretty much do this now if you wanted, by extending an image datatype, like you said, and having the display method for that abstract datatype, the one the actual extensions implement, show a visualization instead of trying to show the image in the browser. Examples exist; I was trying to find one, but I'm on my phone here, unfortunately. We do this sort of thing with BAM files, right? Yeah, with BAM files, you click on the eye...

Well, that's on the back end, right? We wouldn't want to do that. It's because Galaxy comes from the sequencing world that we were okay with it. But yeah, something similar, yes.

So basically, at least if we run Bio-Formats, the converter, it would have to run on the back end.

Yeah. I'm not familiar with Bio-Formats.

It's basically a universal converter. The Bio-Formats people have reverse-engineered many image file formats, so it can read, and people are putting links in the chat,
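The "abstract image datatype with a display method" idea can be sketched in pseudocode. This is not a tested implementation: the base class and hook names are assumptions loosely modeled on Galaxy's datatype machinery, and the template path is made up.

```python
# Pseudocode only: galaxy.datatypes.images.Image and display_data are
# assumptions about Galaxy's internal API, and the template is fictional.
from galaxy.datatypes.images import Image

class GenericImage(Image):
    """Catch-all datatype; concrete formats (tiff, nd2, ...) would subclass it."""
    file_ext = "bioimage"

    def display_data(self, trans, dataset, **kwd):
        # Instead of streaming raw bytes to the browser, hand the dataset
        # to a JavaScript viewer that understands the converted planes.
        return trans.fill_template("/viewers/generic_image.mako", dataset=dataset)
```

The point of the sketch is only the shape: one abstract datatype, one display hook, and the per-format work pushed into a converter rather than into 250 separate datatypes.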
most of the image file formats that are out there. I'm not entirely sure we actually want all of those, but at least the most common ones people might have around. And for the viewer, we basically want something that can deal with at least 5D, five-dimensional, images: X, Y, Z, time, and channels.

So Bio-Formats is Java software?

Yeah, the primary library is Java, and it's been wrapped in many other languages.

Okay, so it has bindings, for example, for Python and so forth.

Yes, there's Python, there's R, MATLAB possibly, I think.

Yeah, if there are bindings to Python, you could do this today, just writing a custom display method. I'm still not sure exactly where the line is between what you want to do on the back end and what you want to do on the front end as visualization. Go ahead.

Well, on the back end, I guess that would be reading the image and, with Bio-Formats, converting it into something that can be displayed on the web. For something I've done, let's say for a Shiny app, I read the images with Bio-Formats, convert them into a bunch of PNGs, one for each slice or each time point, and then I have a JavaScript viewer that takes those and displays them.

Got it.

That's a little bit cumbersome, but maybe that's the way to go for now.

If you're not worried about cumbersomeness, we could define a converter tool for Bio-Formats that takes in these image files and creates a converted output file that the user could then click a link to view.

Yeah, that's a good idea. That's one way to do it. But that would mean the users would have to put that in their pipeline, basically.

Well, if it's a built-in converter, then you wouldn't necessarily have to show that. You could do it automatically, right?
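The per-plane export scheme described for the Shiny app, one PNG per (z, t, c) plane so a JavaScript viewer can page through the stack, can be sketched with a small helper. The naming pattern here is an assumption for illustration, not part of any existing tool; the actual pixel conversion would be done by Bio-Formats.

```python
# Sketch of the slice-to-PNG naming scheme: a 5D image (X, Y, Z, T, C)
# is flattened into one file name per (z, t, c) plane, in a fixed order
# a JavaScript viewer can rely on. The pattern itself is made up.
import itertools

def plane_names(stem, size_z, size_t, size_c):
    """Return the ordered list of per-plane file names for one image."""
    return [
        f"{stem}_z{z:03d}_t{t:03d}_c{c:02d}.png"
        for z, t, c in itertools.product(range(size_z), range(size_t), range(size_c))
    ]

# 2 z-slices x 1 time point x 2 channels -> four planes
names = plane_names("nuclei", size_z=2, size_t=1, size_c=2)
```

A viewer can then reconstruct the stack purely from the file names, which is what makes the otherwise cumbersome approach workable.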
Yes, yeah, we could, in the display application, route to either a locally hosted JavaScript viewer or a JavaScript viewer built someplace else, but actually build the conversion into the display application, so it runs natively as a job, right? Yeah, there are a couple of different ways to do it. The smoothest way, obviously, if you just want the eye icon to work, you'll have to do something on the back end too.

I mean, this is basically a major usability request we've had from people doing image work.

Yeah, what Marius outlined, once the new history is done, rewriting the eye icon to sort of visualize this in any number of ways, is definitely on the roadmap in the next couple of quarters. But if you wanted to do this today...

It's not urgent, but yes, within, let's say, a year, we would really like to have something like that. So what would be the suggestion here? Should we wait for the new history with the new eye icon? Because if you're going to change it, it's probably a waste of time to do something that may not work in newer versions.

Well, we would have a migration path for the existing stuff, right? So I wouldn't worry too much about that. It really just depends on how quickly you want it.

Well, the new feature actually sounds like it might be easier for us, I don't know.

JK, also to clarify: the tools, like for example Cellpose or other tools that output an image, would the wrapper already produce their output in a Bio-Formats-supported standard format, or would they still spit out a normal image format that you then want Galaxy, so to say, to figure out what to do with?

Well, Bio-Formats is not a format by itself.
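The built-in converter idea floated above could be expressed as a Galaxy converter tool. Everything named here is a sketch of the idea, not an existing converter: the id is invented, and the bftools package and bfconvert call are named only as one plausible way to drive Bio-Formats from a command line.

```xml
<!-- Hypothetical converter sketch; ids, formats, and the bfconvert
     invocation are illustrative, not an existing Galaxy converter. -->
<tool id="CONVERTER_bioimage_to_png" name="Convert image with Bio-Formats" version="0.1.0">
    <requirements>
        <requirement type="package">bftools</requirement>
    </requirements>
    <command><![CDATA[
        bfconvert '$input' '$output'
    ]]></command>
    <inputs>
        <param name="input" type="data" format="tiff" label="Image to convert"/>
    </inputs>
    <outputs>
        <data name="output" format="png"/>
    </outputs>
</tool>
```

Registered as an implicit converter for the image datatype, this would run automatically as a job, so users never see it in their pipeline, which answers the objection raised above.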
It's just a library that understands hundreds of different file formats and can, to some extent, convert between them. Typically it's used for reading all these different formats and outputting a few different ones, most often OME-TIFF output, but it can output slightly different things. So it's a kind of universal file-format converter for images.

So you mentioned something like 250 different image types. Maybe that's something we can support as uploads into Galaxy, but then, as a user, you wouldn't upload an image to Galaxy just to see it, right? You would do some transformation on it. So is it maybe reasonable that each step of your pipeline would produce something relatively standards-conformant, or would that lose too much information?

Well, there are multiple situations here. In the most recent cases, what people wanted to do was look at the segmentation mask. You get some, let's say, cells as input and you want to extract the nuclei of the cells; the segmentation mask is basically the nuclei that have been identified. People just want a quick look: how good was it? Because they may need to change some parameters if they're using a pre-trained model, say for the deep-learning ones like Cellpose. So they need to look quickly at, say, five or ten images, and then at the next step of the pipeline, if there's been some other transformation, do the same. Different tools will also output different formats, but in general, yeah, we'd want something that works for everybody. We can start by restricting to the most common formats; it's just that with Bio-Formats, that's a lot.
So I was going to say, I mean, you mentioned TIFF specifically; at this point, are there really a lot of different tools that you're working on that create things other than TIFF? Because if it's just TIFF, we should just add a JavaScript TIFF viewer that just works with that display. Yeah, I was going to say there is OpenSeadragon in Galaxy now that you can view TIFFs in. Yeah, so I had a look at OpenSeadragon a while back. One of the issues is that it doesn't do multi-page TIFFs, or at least it's not easy to navigate through a stack; at least I didn't see that functionality. There was also Vitessce being mentioned in the chat. Is that something that... This one I don't know, so maybe Jeremy can tell us. Sure, I can chime in. From a high-level perspective, what I see is the need for three different things. Ideally, one is we need to have Bio-Formats wrapped into Galaxy. And this may already be done, to be honest; I'm not sure. I should talk to my group, and the orange group in particular. The reason I say this is how Vitessce is built. One of the components of Vitessce is something called Viv, which is this multiplex image viewer. This viewer is tuned to OME-TIFFs in particular. For whatever reason, which goes beyond what I've been involved with so far, OME-TIFF has become the standard for multiplex tissue imaging data sets in the United States, and really across some large tissue atlas projects: the Human Cell Atlas, HuBMAP, and the Human Tumor Atlas are the three that I'm involved in that do this. So long story short, everything moves to an OME-TIFF eventually, sometimes using Bio-Formats. And then we use viewers that are based around OME-TIFF: napari on the desktop, and on the web, Viv or Vitessce. And so I'm happy to share our pipelines, but what we have internally right now is: we run our analysis pipelines, we get everything out, from segmentation masks to marker intensities, and then we build all of that into OME-TIFFs.
And then we have these web-based viewers and desktop-based viewers that work with OME-TIFF only. And this simplifies our lives considerably, because we don't worry about anything else. And if we can do this in Galaxy, I think it's a huge win. Again, it seems like many of these tools that are coming around for these big tissue atlases, these single-cell spatial transcriptomics and spatial proteomics projects, are all using OME-TIFF at this point, with Zarr arrays to back them, for the record. So we have a tool that builds these Zarr arrays, which are really handy, obviously, because then you can dynamically view these images. If you look at Viv in particular and look at the demo, for instance, you can do this multiplex imaging. Time, not so much yet, but certainly you can view all the channels, and you can even do slicing or move through a 3D volume. Yeah, thanks. So actually, about Zarr: I mean, I don't know if you're aware of this effort to create an OME-Zarr specification, so that many tools are also going to move to using this as a kind of next-generation file format standard for images. Because I think Viv already implements Zarr, or can read Zarr. So we actually had that at the back of our mind as a possible viewer. But yeah, I mean, we also want to have something that goes a little bit beyond just Zarr and OME-TIFF, because our users are also very close to the microscope, and sometimes the things they do are in various formats, let's say. We also have EM to take care of. We're doing EM as well. I'm not necessarily arguing that Viv and Vitessce are the only solution here. I guess what I am arguing is that, number one, we should try to standardize on a format. For the projects I'm involved in, OME-TIFF is what we do. And it seems pretty well supported, and it also seems to have a fair amount of traction in at least other places that we're going to care about.
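The reason chunked Zarr arrays make dynamic viewing practical, as described above, is that a viewer only fetches the chunks intersecting the current viewport instead of the whole multi-gigabyte image. A minimal sketch of that chunk selection (illustrative only, not Viv's actual implementation):

```python
def chunks_for_viewport(y0, y1, x0, x1, chunk_h, chunk_w):
    """Return the (row, col) chunk indices a viewer must fetch to
    display the half-open pixel window [y0, y1) x [x0, x1).
    Chunked storage like Zarr lets the viewer request only these
    tiles over HTTP rather than the full image."""
    return [
        (cy, cx)
        for cy in range(y0 // chunk_h, (y1 - 1) // chunk_h + 1)
        for cx in range(x0 // chunk_w, (x1 - 1) // chunk_w + 1)
    ]

# A 512x512 window starting at pixel (1000, 1000), over 256x256 chunks:
print(chunks_for_viewport(1000, 1512, 1000, 1512, 256, 256))
```

For a 50,000 x 50,000 whole-slide image this is the difference between fetching nine small tiles and downloading the entire stack before anything renders, which is also why the same idea extends naturally to channels, Z-slices, and time points as extra chunked dimensions.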
So the Imaging Data Commons, for instance, that's being brought up by the NIH, is going to standardize on some form of OME-TIFF, for instance. I totally agree. And actually, already having OME-TIFF, and TIFF in general, would go a long way. I'm just trying to get to some of the other things that I know are going to happen. And then, as we can all appreciate, once you have something that you agree on in terms of a format, you can build lots of different viewers. To be honest, I'm not aware of too many web-based viewers for multi-channel images. And this is why Viv is attractive to us: because it was easy to integrate into Galaxy. It works. It works as a service. It works as a React component. Vitessce is the same way. And so it was the easiest for us to move forward with. The other thing about Vitessce, to be honest, is that it's more than just an image viewer. It's a single-cell interactive visual analysis dashboard. And that's really what we want. So my group is not necessarily interested in the raw images per se, but we're interested in the downstream impact of that. So we care about spatial analyses. We care about clustering and phenotyping, all the stuff that comes after the primary image analysis. And Vitessce makes that so much easier. And it also gives us the raw image viewer for the microscopy folks who want to dive into it and look at cell segmentations, bleed-over, for instance, nuclear segmentation versus cell segmentation, other such things. No, no, I understand. I just think that we have some single-cell projects too. But in particular, we still have a high level of screening, like high-throughput screening activities, and that takes different forms. That can be, I mean, time-lapse, also sometimes 3D, which basically we want to use Galaxy for, because that's much easier to parallelize on our cluster in that case. But I mean, I think we have some good pointers here.
But I think what I'm taking home here is that maybe we wait for another six months for the history to change, and in the meantime, we can look at Vitessce. So Jeremy, is it available somewhere that we can at least try, or is it already in the public Galaxy? If you send me an email, I can connect you with the developer who's been working on it. I'm sure it's publicly out there on GitHub somewhere, but I'm not sure where. OK. And yeah, you're welcome to try it out. I think we've dockerized everything at this point, if that's helpful. Yeah. Viv is in the main Galaxy release at this point. As I pointed out in the chat, Vitessce is not right now. But more eyes on it would be good, because this pipeline has been incredibly effective for us, moving all the way from the raw image stack down to the point where we can do our single-cell analytics. And Vitessce and Viv have become essential to us, because actually, working remotely, having to download images or sync them using Samba or something like that is so painful. And so these web-based approaches for visualization have really sped up our analyses considerably. Yeah. OK. I'm afraid I will have to leave in a few minutes. So just feel free to continue the discussion if you need to. I think I got quite a lot of information already. Did you also get some answers to your questions? I mean, I do think my final point is, unless I'm misunderstanding something, I don't think you need to wait for the new history for all of this to work. Vitessce is a display application. To my understanding, display applications work in the current history as well as in the new history. And Vitessce is actually a React component that we built into the tool. And as a result, you run a typical analysis tool in Galaxy, using the old history or the new history, and you get out this React component, well, this web page that includes the React component, that you can click on and view.
So I don't think the new history is a blocker for using either of these, unless I'm misunderstanding something. Yeah, it's definitely not. The only point there was that if you want to be able to click on the eye and then have a variety of views for a data set or whatever, then that's where that'll tie in. But you can visualize now. So I think we've already seen, or at least I think I've seen, the Viv tool in there. But yeah, I mean, I think the eye is what some of our users are asking about, because they say, yeah, it's nice, you can go and have a tool and things, but for quick browsing, the eye in the history is what we want. Yeah, so I mean, it's almost as easy as the eye. It just shows up as a link when you expand the history item; it'll say something like "display at Vitessce", and you click on that. So I guess it's one more click than clicking on just the eye: you have to expand the history item and then click on a link, and then that'll show it to you. But it's pretty close. OK. I wonder if the distinction there is, like, they didn't want to run a tool or something, right? Like, it's... sorry? I was wondering if the distinction there was maybe they thought they needed to run a tool or something to visualize it. Because it seems like those display applications are pretty easy to get to. Yeah. So the idea was, what we had seen before is that you needed a special tool. So actually, what initially was happening is that people would go to the interactive environment looking for tools for viewing images. Yeah, I would definitely try, instead of the interactive environment, the display application that Jeremy was speaking about. OK. Yeah. Yeah, so it's like, yeah, we can do that. Is there an example of a visualization tool already implemented like that, that we can look at? IGV. Sorry, I didn't catch that. Well, I believe IGV is implemented like that, right? OK.
I was going to say, you could look at IGV as an external display application, but you could also look at the code for OpenSeadragon, which is right there too, right? And that's pretty similar to what you're going to want to do. But it sounded like Jeremy has an example somewhere, right? So I would just use the one that works for the exact display application you're trying to build for the file type. Yeah. All right, well, it looks like we've hit one o'clock. So thanks for presenting. That was a great conversation. Hopefully great things come from this. And thanks, everyone, for showing up. Yeah, thank you very much for all the pointers and information. I think you may hear back from us at some point, whether it works or not. And yeah, thank you again. Bye. Thank you. Bye, Peter. Thanks, bye.