Okay. So, thanks everyone for coming to the working group progress meeting. We'll go through each of the groups with updates, looking at goals and progress towards GCC, and at any blockers. Here are some additional details and links for today's meeting. And without further ado, we'll jump into the first group, the backend working group. Yeah, so these were the goals for GCC. There's ongoing collaboration with the tools working group. We pretty much have FastAPI deployed; we're putting in the last few tweaks, but usegalaxy.org is now running with FastAPI. The systems group didn't really mention a lot of things that need to be fixed on Pulsar, so I assume that's either done or in progress; either way, there wasn't much that had to be done. We interacted with the UI/UX group, with Sam, to make sure the API endpoints in use are optimized for performance. In the end it was mostly a question of how the frontend will retrieve data. So I'm going to call this done, though of course it's always a work in progress, and there are always things that could be improved. Since that wasn't very specific, I also wanted to outline the things we have accomplished. Deferred data is pretty much complete, so you can upload a dataset that doesn't enter Galaxy's object store. For that we refactored the job setup code so it can now run without access to the database, which is important if the metadata for a job isn't available there. There are Celery tasks to materialize deferred data: if a dataset is not in Galaxy's object store, its metadata doesn't exist, and the process of un-deferring it, materialization, is handled by those Celery tasks.
There's interesting use of Celery tasks: running the actual upload job processing, the metadata task, and the job finish tasks is all expressed as a Celery pipeline. It's a lot faster for small jobs because the Celery worker is already warm, so there's no interpreter startup overhead; that made our API tests roughly three times faster than they were before, as a sort of naive benchmark. We've completely eliminated the use of the old upload tool. There's now a Pydantic and OpenAPI description for the fetch data tool, which does a ton of things, some of which we haven't exposed in the user interface yet, but this is the first step. We can build dynamic resources; we have a whole framework for doing this now in Celery. Examples: we can create collection archives that way, so it's not the Galaxy web process that creates those archives; we can export histories that way; we can create invocation reports, or their PDFs for instance, that way; and there's also ongoing work to export whole workflow invocations. We've dropped uWSGI support, and with that we dropped about 10,000 lines of redundant API code. We're also removing some Galaxy Interactive Environment code that is now completely superseded by interactive tools. We've removed some legacy datatypes, and we've removed the package installation code, which is some of the legacy from back when Galaxy wasn't using Conda. There was also a lot of work on API fixes, and some of the workflow API was ported to FastAPI. So that's the update from the backend group. Collaboration is always ongoing with tools, workflows, and UI/UX; I have a slide for that. How does a user pick whether it's deferred data or will be ingested? There is a checkbox in the upload dialog. And is that released, or will it be? It will be in the next release.
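To make the "upload, then metadata, then finish, all as one Celery pipeline" idea above concrete, here is a dependency-free sketch that mimics only the shape of such a chain. In Galaxy these steps are real Celery tasks composed with `celery.chain`, which is what avoids per-step interpreter startup; the function names, arguments, and state dictionary below are invented for illustration, not Galaxy's actual task signatures.

```python
# Sketch of an upload -> set-metadata -> finish pipeline. In Galaxy these are
# Celery tasks run in an already-warm worker; here they are plain functions
# chained in-process, purely to show the pipeline shape.

def run_upload(dataset_id):
    """Stage the uploaded file (stubbed: no real object store here)."""
    return {"dataset_id": dataset_id, "staged": True}

def set_metadata(state):
    """Detect datatype/metadata for the staged dataset (stubbed)."""
    state["metadata"] = {"ext": "fastq"}
    return state

def finish_job(state):
    """Mark the job complete once staging and metadata are done."""
    state["ok"] = state["staged"] and "metadata" in state
    return state

def run_pipeline(dataset_id, steps=(run_upload, set_metadata, finish_job)):
    """In-process stand-in for chain(run_upload.s(id), set_metadata.s(), finish_job.s())."""
    state = dataset_id
    for step in steps:
        state = step(state)
    return state

result = run_pipeline(42)
```

With real Celery, each step would run in a worker process and the intermediate state would be passed through the result backend; the control flow is the same.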
And you continue to work with the systems group on Pulsar issues, since you comment on what's working and what needs to happen? I think the ball is firmly in the systems group's court currently. We're not really aware of things that don't work, beyond some really, really edge cases. But okay, maybe we can save that discussion for when they present. Excellent. Thank you. So this is pretty much the slide we showed last time, and I'll just go through it to reflect on which of the points we have achieved and where there are still missing pieces. Most important for us, of course, was a single stable history, and it's the default now. Feature parity was the goal for the conference, and I think we have exceeded that. We also benefited, as we expected, from the unblocking of development: many more people could contribute, there are a lot of refinements going on, it's really nice what developed, and particularly the advanced filter options are good. And with 100,000 entries locally, it works well. The bulk operations were also a huge project, and that's also completed. We're adding additional bulk operations too, so not just deleting all the selected data but also changing the datatypes; I think that's also already in dev. This is going to work great too, so those will be two important features for the presentation we anticipated for the conference. And we have that ready for the workflow editor: this is going to be 22.09, but the code is already there. What we'll do is finally remove the workflow editor's Mako page and embed it directly as a Vue component. That will make it possible in principle, or bring us much closer to the point, that we can create an instance of the workflow editor without having to do API calls, because the data can just be provided as props, ideally.
When it comes to the next point, first-class display components for data and visualizations, it's still in progress. Everything is a little bit connected with the idea of removing Mako and jQuery, and we made, I think, really good progress on it, and there will be even more for the next release. But unfortunately we don't have Vue-based visualizations yet, which I'm a little disappointed about. Maybe it's not that critical, but it would have been nice, because we anticipate a hands-on visualizations workshop, and it would be nice if we could directly show Vue-based visualizations. That's something we're still looking at for the conference; other than that, I think the rest is non-critical. The end-to-end (e2e) testing is in progress. The storage dashboard is in there in its basic version, but it's fine, it works. The next point, the auto-generated bindings for the FastAPI endpoints, is not available yet; I think that will take some time, but it was good that we listed it early so we can plan and anticipate that too. And the final aspect we had here was that we wanted better visualization of the entries, to start connecting the items in the history in a meaningful way. The first step was just to highlight inputs and outputs, and that's partially complete: the UI part is complete, or almost entirely, but some of the data is missing, so at this point we get the inputs and highlight them. If the API would also provide the outputs, and I know we are working on it, then they can be highlighted as well immediately. Then you can go to the next slide, thank you. We have some additional developments which are noteworthy: we have replaced the Scratchbook with a much more modern window manager that works much better.
We have an upcoming replacement of the Backbone page layout. That's how the initial pages at the root are constructed, with Backbone routers and so on; we've replaced them with Vue and Vue Router, so there will be a much more consistent architecture for how the client is constructed, which makes a lot of things much easier. In that context we also, of course, removed Mako and jQuery, and I think we made very good progress on those items too, so hopefully there is not much more left; the biggest jump on that topic is going to come in the next release. There are several refinements of the history appearance, little details, but they come with a lot of good tests. Sometimes it seems like a small change, but it made the entire history much smoother, and I really appreciate that people contributed these small but very smart refinements. We have significant improvements to the tours; people contributed from the backend side as well to improve how tours are handled, so that they, or their selectors, are more consistent, and that was a good collaboration, I think. And then we have other improvements to the UI around components and invocation views; some PRs are still in review, but these developments are coming. Mostly we worked really closely, thanks to Marius and Dave and also others and John, with the backend group, of course. At this stage of development it would of course be interesting to see if we can collaborate with more groups in the future, for the visualizations on the previous slide. Do you have driving examples? Some of the visualizations that we currently have, you know, if you go to the visualizations page, they need to be refined. Yes, and this is in exactly the same context.
Basically, what this particularly emphasizes is that currently the visualizations work in such a way that a function is called; it doesn't have to be Backbone, but it's mostly a Backbone class, which is then loaded, requests the data, and then renders however the visualization wants to render the data. If we had Vue-based visualizations, it could still be a function that is called, but it should produce a Vue-based visualization. With that, we could then revisit some of these visualizations that need refinement, embed them into this better structure, wrap them as Vue, and clean potential issues out; I think that would make sense. It would be good maybe to follow up with a list; I can probably also go through all the visualizations. I know some were already mentioned, and I don't want to take the whole talk on that topic, but regarding your comment, it might make sense to have a list and make sure there are no major issues with any of them. We can discuss this during the UI group meeting. Sounds good. I think a connected thing is that the visualizations themselves should also show up when you click on the icon; that's something we discussed in Montpellier. Currently, if you click on the icon, everything is defined by the datatype, and rendering is actually sort of server side. If you don't have a way to click on the icon and display the raw data, a visualization should show up in the center panel. Yeah, so we do have a clear plan and it's connected; that was a pretty good description of it, and also my understanding of it. Well, allow me to also say thank you and congratulations for getting the new history display out. A huge, huge accomplishment. We were really worried about the UI half a year ago.
But now, well, if I'm in a really bad mood and I want to cheer up, I go and look at it, and it's fantastic. So thank you, thank you for the entire UI. Okay, that's us. Can you hear me? Okay, so our goals slide roughly reflects what we had in the previous meeting, with a few additions. First of all, we are conducting the release testing for this release. Everything is underway; the team is formed, and we have a great team, I have to say. Almost everyone volunteered this time, and one person couldn't even make it simply because of scheduling conflicts, but they volunteered, so that was a very pleasant surprise. We are going to start testing any day; as soon as main is updated, we're ready to go. The testing plan is done: we have identified all the relevant PRs, all the relevant high-priority items to test, and we have our hands full. It's a huge release, but it looks like everything should go according to plan, no surprises there. Tutorials and presentations for GCC: the one change compared to what we initially planned three months ago is that we were going to do a talk on how we test in Galaxy in general, and we decided to postpone it, simply due to prioritizing. We don't have the bandwidth for that this time; we will do it eventually, but it's a massive undertaking, and we decided to focus on the two new tutorials. One is completely new and it is huge; the other has a major new section in it. And then we had a lot of infrastructure updates. So the other two items for GCC are underway as planned. One is the new tutorial on how to write automated tests, which is being worked on; the other is the new section for the how-to-add-a-new-feature-to-Galaxy tutorial, and the new section is how to test the model. In addition to that, we had a variety of testing-related updates, updates to the testing infrastructure, and lots and lots of tests, thanks largely to John and others.
Let's see, we had enhanced testing infrastructure and documentation; that was primarily done for the testing of the model and the migrations. That was approximately, I don't know, 1700 lines of code deleted and 1700 lines of code added. These updates are mostly boring, they are refactors, but they're critical for the testing infrastructure overall. For everything else, John, maybe you could help me describe the remaining five points in more detail? I don't think any of this is super high level. We're trying to unwind the tool shed, so we added some API tests for that and a framework to run them. We added a lot of new GUI testing. I think that's enough. So yes, a lot of workflow tests, tests for workflows both Selenium and API, some infrastructure for unit tests and packaging tests, and the rest John has already mentioned. So that's our update. And probably a question, actually, to me specifically: are we updating main with this today? Are we going to have the new history on main today? I'm trying desperately to do that, but we keep running into small things that have to get done before we can finish it. This may be an impossible question to answer, but John, do you have a sense of what coverage we have for testing? I'm sure functions are well tested, but I don't have a great sense beyond that. I do not, and I should have that number. Actually, I have it; we run it on every test run. We don't really have code coverage for the frontend, but for the backend, excluding a couple of things, it's about 62%, something like that, which is pretty good coverage of the main branches. Is there a way to identify, out of the remaining 38%, which of those API calls are most popular, to prioritize them? Not really.
With additional instrumentation, probably, but it's also hard to say. There's code coverage where you can see, line by line, where we have coverage and where we don't. Nothing of high importance really strikes me as not having coverage. There are also a couple of things that we only run as external scripts, so those never receive coverage; I think the effective coverage is even higher than that, but that's not what we measure. Got it. But the fact that you came up with a specific number, 62%, says that you've tried to measure this. Yeah. So what we're doing is, for every test that runs on GitHub Actions, we also measure code coverage. For the project total, that's 61.64%. I can drop that in here, and if you're really interested you can go and open the individual files and folders and see where we are. For instance, Galaxy has 65% coverage and the tool shed has 11%, so that brings the total down to around 61%. And then you can go to the API, for instance, and see where we lack coverage. The API itself actually has higher-than-average coverage, at 72%. The controller methods, these are the old-style, sort of server-side-rendering endpoints, have only 35% coverage, because we just got better at writing tests over time; I think that reflects this. Okay, great. I'm delighted to hear that there's a careful accounting of those. All right, that's me.
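The totals quoted just above combine per-component coverage weighted by size, which is why the tool shed's 11% drags a 65%-covered Galaxy down to roughly 61%. A small sketch of that arithmetic; the line counts below are made-up placeholders chosen only to match the quoted percentages, not real measurements.

```python
# Line-weighted combination of per-component coverage figures. A small,
# poorly covered component pulls the project total below the main
# component's own percentage.

def combined_coverage(components):
    """components: {name: (covered_lines, total_lines)} -> overall percent."""
    covered = sum(c for c, _ in components.values())
    total = sum(t for _, t in components.values())
    return 100.0 * covered / total

components = {
    "galaxy": (65_000, 100_000),   # ~65% covered (placeholder line counts)
    "toolshed": (1_100, 10_000),   # ~11% covered (placeholder line counts)
}
overall = combined_coverage(components)  # weighted toward the larger component
```

With these placeholder sizes the combined figure lands near 60%, illustrating how the overall number sits between, but closer to, the larger component's coverage.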
Working with the other working groups has been the primary mode of operation for us for the last little bit, and part of that is projects. The tools working group has been working very heavily with the projects, which is something we're going to continue to do, but it's going to be much less the tools working group attending everything, and much more the tools working group consulting and helping out as necessary. There was a discussion in Montpellier about the groups being for the project, rather than the tools working group showing up to everything, so that's going to be the push; we will still be on those. These are the ones we are currently involved in. We're going to start working a lot more with the IWC; Marius and I have been planning to meet on that and haven't specifically done it yet, but we keep intending to. In general, we are continuing to work, but stopping being the primary people attending these, in favor of the projects being primary. So, next slide on that. I made this before everyone started naming it "goals for GCC". In general, we are currently working on reorganizing the tool shed using bio.tools. We have a spreadsheet that we shared on some of the channels a little while ago, and we are about 90% of the way through it in terms of tagging tools that did not previously have bio.tools annotations. Tyler, who is working on that with me, and I should be done with it by the end of the week; that's the goal. About one in five of the tools on that list do not have an associated bio.tools entry. The question there is whether we add EDAM tags or add new bio.tools entries; we're going to work on that. A large number of those are simply Galaxy-specific tools, like text reformatting, things that don't specifically have bio.tools entries, or are just our versions of bash scripts that we have added into Galaxy.
So the question is whether we want to add just a generic Galaxy bio.tools entry, which we can use, and that'll clear up about half of the ones that don't have tags, and from there we can focus on what we want to do with the rest. There's also the design for a method of tool browsing outside of a main server, so that someone can design their own tool panel, not necessarily for use on one of the main servers, but to more easily pull in a large number of tools for use on their own instances, and have that as a built toolbox to pull in. We don't have the specifics for how this is going to be implemented, but that is the current request, and we are working on figuring it out. On the reorganization of the tool panel: do you think we'll be able to have a first pass on this by GCC, and how much help do you need? We already have the ability to have specific tool panels, and we have the implementation of these tool tags ready; they can exist in the wrapper already. From what I understand, and this is much more of a discussion between us and the UX group about how to implement it, the organization should be fairly simple, but I don't want to be the one to speak on that. The answer is that we should make it feasible by GCC, but I don't know if it will be implemented by GCC. Okay. And then, Alex, presumably this new tool browser site would also need the UI team to help with the user-facing components. But when you say design, do you mean designing the UI, or just giving a conceptual design of how it would work? I think it's mostly going to be the conceptual part, because we have standardized UI methods and looks at this point. I don't think that it is the tools working group's function.
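As an aside on the "build your own toolbox" idea being discussed here: the core of it is filtering a tool catalog by bio.tools/EDAM annotations and emitting a tool list an admin could install from. The sketch below uses an invented in-memory catalog and invented topic names; real Galaxy tools reference bio.tools via an xref in the tool wrapper, and the real catalog would come from the tool shed.

```python
# Hypothetical "custom toolbox" selection: keep tools whose bio.tools entry
# carries any of the wanted EDAM topics, then shape the result like a minimal
# installable tool list. All records here are made up for illustration.

CATALOG = [
    {"id": "bwa", "biotools": "bwa", "edam_topics": {"Mapping"}},
    {"id": "hisat2", "biotools": "hisat2", "edam_topics": {"Mapping", "RNA-Seq"}},
    {"id": "text_reformat", "biotools": None, "edam_topics": set()},  # generic utility
]

def select_tools(catalog, wanted_topics):
    """Return ids of tools with a bio.tools entry matching any wanted topic."""
    wanted = set(wanted_topics)
    return [t["id"] for t in catalog if t["biotools"] and t["edam_topics"] & wanted]

def to_toolbox(tool_ids, section="Custom selection"):
    """Shape the selection like a minimal tool-panel section definition."""
    return {"section": section, "tools": sorted(tool_ids)}

toolbox = to_toolbox(select_tools(CATALOG, {"Mapping"}))
```

Generic utilities without a bio.tools entry fall out of the selection, which is exactly the gap the "generic Galaxy entry" question above is about.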
I don't think it's the greatest idea either for us to necessarily be the ones designing the UI side of that. I mean, I don't think we discussed it in Montpellier, so we should have another discussion about this, but one reason to actually have these fixed tool panels is that you know where to find things; if every user has their own tool panel, I don't know if that's a good idea. That's why I was saying not to implement it on the main servers, but for someone to be able to go to this tool site and say, I want to take everything with a bio.tools entry associated with these three tags, and come back with a tool list they can put on their own server. I see. Right now that part would not be implemented; this is more for someone to be able to explore on their own and see what Galaxy has. It'll also make it easier if a user needs a tool added: they don't have to go through our GitHub repo or the tool shed itself to find it, they can see it and make the request. Obviously we're not going to be implementing every single tool that everyone requests onto all of the major servers, but it lets them see what we have available. Okay, I think that's us; next group. So, the goals we had, we only partially got there. The main thing was that we wanted to have a collection of ten sort of showcase best-practice workflows, and I think we can still do this by GCC; it's just that attention went into developing things on the infrastructure side. Nevertheless, thanks to the collaborations, we have the two PGP workflows already merged and published, and there are four PRs in progress. A generalized version of the variant calling workflow that we had initially developed for SARS-CoV-2 has been contributed, and, well, we need more contributors.
We've overhauled the contributing process a little, so it's easier than ever. If you know how to run a workflow, and hopefully you know how to use Planemo, you should be able to do this relatively quickly. Yeah, we had this idea of integrating notebooks and conditionals a little better; we didn't have time to get there. We also wanted to have parts of the workflow editor available as a standalone component, in collaboration with the UX group, but again, there was no developer time to actually do this. We wanted to do a dedicated section on Galaxy workflows in the Galaxy interface section of the GTN; I think that may still happen, and there's some work on it; I know I've signed up to do some of that. We wanted to do a tutorial about workflow reports; that's done. And I mentioned planemo workflow_test_init: that's a really cool command where, once you've run a workflow with small data, you can just point it at an invocation and it will generate the skeleton of everything you need to test that workflow. Ultimately that's like 90%, 95% of the work to have the workflow be best practice and in the IWC. And yeah, that's where we're at. So, in order to get more contributors, have you thought about looking at recently published papers and reaching out directly to authors? No, I don't know if I want to do that. I'd rather go and pick good workflows from other communities and just do them in Galaxy. That sounds like a much more streamlined effort, and we sort of know what to do, right? We need a human variant calling workflow, we need a GWAS workflow, we need an RNA-seq workflow, we need a ChIP-seq workflow. That's kind of our core area, and for RNA-seq, things like transcript assembly and transcript quantification. Do you have a list of these, other than in your head somewhere? Yeah, well, there's the GTN.
Yeah, so I don't know; the GTN stuff might not be the best source. There are a couple of different things. It's well explained, right, why certain things are done, but we'd still have to do a bit of review to see, is that still the best practice in the community, is it all set up properly? Because those workflows are basically just extracted from the history, and that's not necessarily how you would run a production workflow. You're not going to do QC at every step, right? You just collect all the QC reports. Yes, we should definitely link them up; that's possible. But I'd go with something that is benchmarked already, so all the nf-core stuff, the Nextflow workflows, has something to say here. I think there are several approaches. One of them is what you're talking about right now: complete, kind of best-practice, whatever we call them, analyses, like a whole ATAC-seq analysis. But we also need to work on popularizing small workflows that do relatively small but cool things, because there are a lot of quick things you can do with Galaxy that people don't even realize. And I really want to find time to start doing this, because there is an endless supply of them; we could do one a week. I honestly think we could do ten a day, right? Well, let's be realistic, because we need to do this at a level where they're understandable by our consumers, which is a hard task. Yeah, we have some examples of small workflows; for instance, the fetch data workflow: just provide a list of accessions, it parallelizes over the whole list, and at the end it spits out a collection with the paired-end datasets and the single-end datasets.
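The fetch data workflow just described has a simple fan-out-then-partition shape: map a download step over every accession, then split the results into paired-end and single-end collections. A dependency-free sketch of that shape follows; the accessions and the fetch step are stand-ins (the "even accession number means paired-end" rule is pure fiction), since the real workflow drives actual downloads inside Galaxy.

```python
# Shape of the fetch-data workflow: fan out over accessions (here a plain
# loop standing in for parallel jobs), then partition results into
# paired-end and single-end collections. The fetch step is a stub.

def fetch(accession):
    """Stub download: pretend even-numbered accessions are paired-end."""
    paired = int(accession[3:]) % 2 == 0
    files = ["R1.fastq", "R2.fastq"] if paired else ["R1.fastq"]
    return {"accession": accession, "files": files}

def fetch_all(accessions):
    """Map fetch over the list, then split by read layout."""
    results = [fetch(acc) for acc in accessions]
    return {
        "paired": [r for r in results if len(r["files"]) == 2],
        "single": [r for r in results if len(r["files"]) == 1],
    }

collections = fetch_all(["SRR1000", "SRR1001", "SRR1002"])
```

The two lists in the result mirror the two output collections the workflow produces.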
Yeah, I don't want to take up too much time, but with things like monkeypox, for example, there's a quick analysis you can do. It's amazing what you can do with Galaxy; it's just that people don't know about it. That's our biggest problem: we have all of this, and people don't know that you can actually do it very easily. Yeah, I agree. The other thing is good description: what are the important parameters, how many resources do you need if you don't run it on .org? It's like home improvement videos: sometimes you wonder how to fix the goddamn door, so you go and look at one and think, of course, I just need to change these two screws. So some of these workflows don't necessarily need to concentrate on the parameters, but just explain the general idea. It turns out, for example, that if you want to build a global phylogenetic tree of all current viruses, you don't need to do a full multiple alignment; you can just pick representatives, construct consensuses, and then you already have a multiple alignment and you just build a tree on it. It's a workflow of six steps. These kinds of things, this is exactly home improvement video territory. Yep, sounds like a great collaboration fest topic. Definitely. Yeah, if we can also pull off the style of those home improvement videos; the people who make them are a little bit weird and do things in a certain way, so I guess we also need to try to do the presentation in some funny way that people actually want to watch, though that's easier said than done. Something like that. Another thing we should maybe look at: I'm not really involved in the ELIXIR side, but I know there are scientific communities that focus around a certain thing, and it also sounds like Alex is doing pretty well there.
So it would be great if these scientific communities were actually doing Galaxy workflows and contributing them. We can give credit easily now. We can even have them collected under the IWC umbrella in Dockstore while still being hosted by them; all the credit goes to them. We have the creator metadata, we have the organization metadata. Maybe that's something Bjorn could get a foot in the door with, I don't know; they have the work already. But as a general point, we've got to tap into the community and encourage them to contribute, because we'll just never have enough resources to engineer other people's workflows ourselves. So we've got to get community engagement. I'm sort of thinking, sometimes there are these live coding streams; I've never watched one, but I hear they're popular, so maybe we should just do something like, okay, we're going to live-build this workflow, because I think it can be a killer argument for Galaxy. All right, I think we've taken up enough time here. Thank you. And I think you're next. Sorry, I was muted. Hi, so the main goal we had before GCC was to develop the hub that Nick has been working on. The goal is to host all the hub content for all instances on .org. The hub is live, but it hasn't been announced; the goal is to make it public during GCC, or just before, for people to find the information. There is some work to be done on the content and some features to be added, and the long tail of the project is going to be porting old content and features into the new hub, which will likely take months. Next slide: outreach. The interns started on May 30th. We have three interns, one funded by us and two funded through Outreachy. The topics the interns are working on are development, climate science, and well-being in the community.
So they just started a couple of weeks ago, and they'll be with us until the end of August 2022. Next slide: outreach news. The Paper Cuts events have been cancelled; instead, we're going to organize regional CoFests about four times a year. We're going to assign tickets for small feature development in the working group meetings, if they are needed. And we have the community calls, organized every two weeks; you can see them on the working group calendar on Google Calendar. The training, compared to other years, runs alongside the conference instead of being three days before it, so we have five tracks over four days, about 20 different trainings. We're going to have a three-day CoFest, and we are working on identifying, per working group, three levels of projects that we can propose to people who want to participate in outreach and training projects. And that's it. How is your stress level ahead of GCC? Personally, it's going to be good to finally have training in person; that relieves, at least for me, a huge load of stress, as I'm most stressed by online training. Otherwise stress is super high all the time, so nothing to compare with. Yeah, I'm really looking forward to seeing everyone. Thank you. So, just a small update here. We've been going back and forth trying to schedule a meeting; it looks like it'll happen Friday or Monday, with the usegalaxy.* server admins and other people involved, and Nuwan, who wrote TPV, the Total Perspective Vortex. This is what we're going to use to do sort of higher-level scheduling. It's already in use on usegalaxy.org.au; EU is probably going to deploy it next, and then hopefully we will before the conference, but we'll see, we've got a lot to do. We have the new stack in progress and in deployment on usegalaxy.org: as of yesterday we are running Gunicorn and FastAPI with Gravity, which is our process management and config management tool.
Cool that we that we wrote and have been putting a lot of work into in the last couple of months. And we are attempting to update 2205 for the latest dev actually because we haven't branched yet today but having to build some wheels because they're not available on in pi pi. And that'll get us through history. So I said maybe done but it's not done as of this call but hopefully by the end of the day. And then Keith has been working on home updates for the Kubernetes deployment a galaxy. So our questions that came up earlier. We don't have any outstanding major issues with pulsar. And so there's nothing really to work with currently but certainly push those over to the backend group encounter any issues. Could you say a bit more about the TPV and Galaxy has sort of built in it has very limited functionality and controlling, you know how you map tools and executions to particular destinations. So you can say like, you know this tool BWA, for example all BWA jobs run at this particular job destination and those have to be like statically defined with a certain amount of memory and cores and stuff like that. So there's also in Galaxy a dynamic destination system where you can write, you know, sort of arbitrary Python code to have much more control and make smarter decisions about how your jobs run. And so lots of Galaxy admins have written dynamic rules over the year over the years. We've had multiple iterations of them. A year so ago I wrote a brand new one that we were using on use galaxy.org. The EU uses something that Helena wrote called the sorting hat, and then Australia had some homegrown stuff but needed a better system and so that's what this, this TPV is. It's, it's a nicely designed system it's very modular designed to be general purpose, so that anyone can use it and plug in the components that they need. 
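To make the "dynamic rule" idea concrete, here is a minimal, hypothetical sketch of what such a rule boils down to: a Python function that looks at the tool and the job and returns a destination id. The tool names, destination names, and size threshold are all invented for illustration; this is not Galaxy's actual rule API, which passes richer objects.

```python
# Hypothetical sketch of a dynamic job-destination rule. All names and
# thresholds are invented; real Galaxy dynamic rules receive the app,
# tool, and job objects rather than plain strings and integers.

# Static-style mapping: every job for a tool goes to one fixed destination.
STATIC_DESTINATIONS = {
    "bwa": "multicore_16",
    "fastqc": "singlecore",
}


def pick_destination(tool_id, input_bytes):
    """Return a destination id for this job.

    Unmapped tools fall back to a default queue, and very large inputs
    for the aligner get bumped to a bigger destination -- the kind of
    decision a static mapping cannot express.
    """
    dest = STATIC_DESTINATIONS.get(tool_id, "default")
    if tool_id == "bwa" and input_bytes > 10 * 1024**3:  # input > 10 GiB
        dest = "multicore_64"
    return dest
```

The point of systems like the TPV is to replace piles of ad hoc functions like this with declarative, shareable rules.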
So, for example, it allows you to write your own methods for determining what the usage on a particular cluster might be, so you can decide to send a job to the one with the least load. We don't really have meta-scheduling; there's no layer below Galaxy where I can just submit my job out into a cloud and that cloud will figure out the best place to run it. So we're always having to figure out how to do that best in Galaxy, and that's what the TPV is supposed to solve now.

When you say it does global scheduling, will there be exactly one TPV for every Pulsar, or one per Galaxy server? There'll be one per Galaxy server. But they can tie into the same database, for example. Australia is essentially using the stats about their cluster utilization that they put into InfluxDB, and then serve and view with Grafana, in order to query and figure out what the best destination for a job is. And all the usegalaxy.* servers could potentially tie into the same database to figure out, okay, this Pulsar destination has a backlog of 100 jobs, so we're not going to send any there right now.

It sounds amazing. What's your sense of what will be available by GCC? I think it's a good piece of software. The difficulty for both EU and the US server is that I have a gigantic file of all kinds of special cases about how our tools run currently. We run jobs on Stampede at TACC, we run on Bridges at PSC, we run on Frontera, on Jetstream; we have something like six different clusters we can run on. So porting all that logic over, and then also saying here's the amount of memory and cores and so forth for each tool at each destination that we might run, that's a lot of work.
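The load-aware selection described above can be sketched in a few lines. This is only an illustration of the idea, assuming backlog counts per destination have already been fetched from some shared stats store (such as the InfluxDB setup mentioned for Australia); the destination names and threshold are invented.

```python
# Hypothetical sketch of load-aware destination selection: given a
# backlog count per destination (as might be queried from a shared
# stats database), pick the least loaded one, and refuse destinations
# whose backlog exceeds a threshold (e.g. the "100 queued jobs" case).


def least_loaded(backlogs, max_backlog=100):
    """Return the destination with the smallest backlog, or None if
    every destination is at or over the backlog threshold."""
    candidates = {dest: n for dest, n in backlogs.items() if n < max_backlog}
    if not candidates:
        return None  # nowhere sensible to send the job right now
    return min(candidates, key=candidates.get)
```

With a shared database, every usegalaxy.* server could run the same selection logic against the same numbers.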
So that's the main thing. I think the software itself is great and is going to fulfill the needs that we have; it's just the time to get it up and going. Does the TPV then work with standard clusters as well, not just Pulsar-based ones? Yeah, it itself is just another dynamic job rule, so it sits above that entire layer. Okay, and all the TPV spits out is "use this destination". Okay, excellent, cool.

And one comment about updating today: if you don't update today, you're going to update tomorrow, and doing that on a Friday before a long weekend sounds scary. Yeah, and before I'm out on vacation next week. But I will, because it's got to get done, whatever breaks. Just leave us some notes on how we shoot the thing in the head and restart it. Yeah, the playbook is not 100%. If there are problems, you know, find me on a boat with no internet. Well, I won't be on a boat with no internet, just a boat with very little internet. I'll have excellent internet. Hopefully we don't have to use it. Thanks, everybody, for those updates. Are there any other questions?

No question, but somewhat related to the TPV: I think we're also going to get resource requirements into the Galaxy tool language. What you do is describe minimum and maximum amounts of resources, as well as expressions. I mentioned this to Nuwan, and we thought this would connect well with the TPV, so that we have tool-author-defined defaults; the tool author knows what's going to work well with this many cores, this much RAM. Expressions would allow you to say, okay, based on the input size, pick the number of cores. For a small file, for an aligner, you may not want to reserve a 128-core node, but if you're aligning a 1000x-coverage human genome, you may want to get that 128-core node.
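One way to picture the proposed min/max-plus-expression scheme is a clamp: an expression scales the request with input size, and the tool author's declared minimum and maximum bound it. This is a hypothetical sketch; the scaling rule, names, and numbers are invented and are not the actual tool-language syntax.

```python
# Hypothetical sketch of tool-author-defined resource requirements:
# a minimum and maximum core count plus an expression that scales the
# request with input size. All numbers are invented for illustration.


def cores_for_input(input_bytes, min_cores=4, max_cores=128):
    """Request roughly one core per GiB of input, clamped to the tool
    author's declared minimum and maximum."""
    requested = int(input_bytes / 1024**3)  # one core per whole GiB
    return max(min_cores, min(requested, max_cores))
```

So a small FASTQ would get the 4-core default, while a 1000x human genome would hit the 128-core ceiling, and a scheduler like the TPV could still override either bound.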
This would already happen within Galaxy, and it would remain possible to override it, but I just wanted to mention that this might be a good place to put reasonable default resource requirements, which I think are very handy. If you're running, for instance, a Kubernetes cluster that can auto-scale, you get reasonable defaults by default. That's the idea. Awesome, thanks everybody. If there aren't any other comments, maybe we can close out an hour and three minutes early. I think that's good. I look forward to seeing many or all of you in about a month's time in Minneapolis. As we finalize demos and things, obviously we'll need to stay in close communication, but it's just been outstanding progress, and I'm really excited about all the accomplishments that were presented today. Thanks, everybody. Thank you, guys. Have a great day.