Okay, there we go. Hi everybody, thank you for joining today's working group progress updates meeting. Today we're going to go through each of the groups and have representatives present what you've all been working on, some of the things that have been accomplished, goals looking forward towards GCC 2023, any blockers, and any cross-working-group work in your plans. Before we dive into that, I just wanted to give an update about the Galaxy working groups spreadsheet that I sent around and asked everybody to update — and said I'd hunt you down if you didn't. The goal here is to let the working group leads and the PIs know the kind of work that you want to be focusing on in Galaxy. With the structure of the working groups, that also involves attending the groups and participating at those meetings, making sure that you have access to all the right resources and are in the right places — the working group meetings and channels, the relevant issues and PRs, etc. — and making sure that we have the right view of how much you want to focus on a particular activity. This is just a screenshot of the working groups and the projects that we have. You might want to be devoting 50% of your time to backend, and that way we make sure that the backend folks don't assign you things that will take up all of your time, or projects that are too small — making sure there's the right level of tasks assigned to different folks. As we've been discussing the spreadsheet, a couple of things came up that we decided would be really helpful to add. One is a column for Galaxy tasks that fall outside our existing structure of working groups and projects.
So it would be helpful, if you have any updates to your activities, to make that update there, for anything you might be working on that's still related to Galaxy but doesn't fit into one of these existing places. There might also be things you're working on outside of Galaxy, so the point is that your effort doesn't necessarily need to add up to 100% — you may have other focuses outside of Galaxy. Really this is just to make sure we have an appropriate view of who's participating where, so that you're involved where you want to be, and at the right levels. Any comments there? All right, if not, then we can dive into the updates.

First up is testing and hardening. I just wanted to remind everyone about the testing and hardening group: our main focus is on producing tests and content, so it's less project-driven than the other working groups that a lot of us are involved in. Since our last report in October, we did the release testing of 23.0 — the link in this document has a nice community presentation that John and the rest of the team put together, which is really great. We added deployment tests so that we can run all (or at least a lot) of the tests we're producing against the public servers. We can build this into our release process and make sure that things are running smoothly before and after we do deployments, before we do the manual testing, etc. The exciting project we did, as part of the Tool Shed replacement that they'll talk a little more about in the backend working group update, is a Playwright testing infrastructure that replaces all the older Tool Shed test code.
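As a rough illustration of what those deployment checks amount to — a minimal sketch, not the actual test code. The `/api/version` endpoint is Galaxy's public version API, but the helper names here are invented:

```python
import json
from urllib.parse import urljoin

def api_version_url(server: str) -> str:
    """Build the URL of a Galaxy server's public version endpoint."""
    return urljoin(server.rstrip("/") + "/", "api/version")

def version_matches(payload: str, expected: str) -> bool:
    """Check a /api/version JSON response against the release under test."""
    return json.loads(payload).get("version_major") == expected

# A deployment test would fetch api_version_url("https://usegalaxy.org")
# (e.g. with urllib.request.urlopen), then assert version_matches(...) on the
# body before letting manual testing begin.
print(api_version_url("https://usegalaxy.org"))  # https://usegalaxy.org/api/version
```

A real run would layer actual functional tests on top of this kind of smoke check; this only shows the shape of the idea.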
Other than that, we've done a bunch of new tests, plus the type annotations and the things we're doing in the backend and frontend groups — migrating code and updating infrastructure. That's what we've done since the last report. Okay, thank you.

With regard to items we plan to address prior to GCC 2023: first of all, we will obviously conduct the release testing for the next release, 23.1. We've considered expanding the testing tutorial — we have a fairly large testing tutorial which covers API and unit tests, and we're planning to add a section on either end-to-end testing, integration testing, or client testing. It's good as is, but growing it makes sense because that tutorial hopefully helps other contributors, especially new contributors, add tests to their contributions to the code base. There's ongoing work on testing infrastructure: our focus will obviously be on deployment tests, as John has already mentioned. In addition to that, there will be much rewriting of the database access code in the testing infrastructure, which goes hand in hand with our work on migrating to SQLAlchemy 2.0, and that will of course include adding much documentation to that database access code. On systematic improvement of test coverage: in addition to writing more tests, we always think about how to simplify the test writing process for new contributors. With that in mind, we're going to prioritize features that lack test coverage, and features which are both important and known to break, so this will be a focused effort. Sam, thank you for coming up with that idea — we've discussed it many times in the past, but finally we put it on the roadmap.
We will also be improving documentation on Galaxy's testing utilities, specifically the database access utilities. They're already being used by new contributors, and that's not very straightforward code to use, so since we're rewriting it, we'll be adding much documentation. For 23.1, 23.2 and ongoing, probably the main change is selecting the testing team in advance. The link to the current schedule contains a listing of all the potential testers — all the staff members who can participate in release testing. I will be contacting the PIs one of these coming days, just to make sure we're not missing anybody and not missing any teams. Deployment testing will be conducted prior to manual testing, and that will help us focus the manual testing efforts on the things which cannot be easily automated. We'll also be putting even more focus on the key features, most of which appear in user-facing release notes. The release team and release manager will be working together with the authors of these features to generate explicit descriptions of them, so that the testing team can dive deeper and focus even more on making sure they work as expected. As a long-term goal — not scheduled as of yet, no bandwidth for it yet — there's performance testing, load, scalability, and stress testing, and fault tolerance testing. We're well aware of these items; they're critical, they're important, and we'll get to them as soon as we have enough bandwidth. Thank you.

Thanks, Marcus. I'm interested in everyone's opinion on these release notes, because ideally everything would be fully automated, but that's hard for certain components. I'm wondering, John, maybe you could say a little bit more about what sort of information you want to see in those notes and how that will guide testing. The release notes themselves —
first we wrote them prior to release testing; then we moved to completing them after release testing. We don't need release notes per se, as they appear in their final state in the release documentation. What would help is having an itemized list — an explicit description of each feature we're going to include in the user-facing release notes as a key part of the release. Not just "we have a new history in this release," but "the new history allows you to do X, Y, Z and does A, B, C differently from how it has been until now." The key idea is to help the release testing team focus on trying to break these items, as opposed to coming up with a list of what a feature needs to do based on related PRs. Again, I don't see this as needing the final release notes in advance, but more as needing a list of things to test, so that the release testing team — many of whom are sometimes very new staff members who don't have much experience with the code base — can have a specific list of items to address.

So it could be as simple as a bulleted list: make sure these four or five things behave as you expect? Yes — if this is the list of four or five things we expect from a feature, this is what we say the feature does, and the release testing team will try to come up with creative ways to stray off the beaten path and break them. But they need to know what to break. Okay, that makes sense. And again, I see that as being a major need for things that cannot be automated.
There are also unit tests and integration tests and other forms of testing that we can automate. I guess the thought question is: you call out performance testing as something that could potentially be done — are there other types of testing, other opportunities that you see? Ultimately, automation is the key to getting good coverage over such an enormous code base. Probably. I think that's an ongoing thing — we've been adding end-to-end tests, integration tests, and tests of different Galaxy configurations for a long time; workflows have tests, tools have tests. I think performance is called out because it feels like the one piece where we consistently don't have a way to address problems that come up. For all the other issues we encounter, if there's a testing gap, we can usually point at the PR and say, well, there should have been a test there and there wasn't. And that's going to happen — we don't have 100% test coverage and we don't insist on it, but we do encourage tests of all sorts, and we have infrastructure for a dozen kinds of tests to handle nearly every situation for the bugs that arise. I'm with you — this is going to be an iterative approach, and at every step along the way we'll continue to make updates. I just wanted to ask whether, as you're going through this, there are other types of tests that are potentially relevant, but it sounds like we have good coverage and multiple modes of feedback about gaps and how they could be addressed. As John said, there are at least a dozen different types of tests we can write, and that results in thousands of tests that we automatically run on each PR — and now with each release.
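To make the "tests with every contribution" point concrete, here's the kind of minimal unit test a new contributor might submit alongside a small helper. Both the helper and the test are invented for illustration — they're not from the Galaxy code base:

```python
# A hypothetical utility a contribution might add: strip a leading
# "<hid>: " prefix from a dataset display name.
def strip_hid_prefix(name: str) -> str:
    head, sep, tail = name.partition(": ")
    return tail if sep and head.isdigit() else name

# ...and the unit test submitted in the same PR. Runnable with pytest,
# or directly as shown below.
def test_strip_hid_prefix():
    assert strip_hid_prefix("42: trimmed reads") == "trimmed reads"
    assert strip_hid_prefix("no prefix here") == "no prefix here"
    assert strip_hid_prefix("notanhid: x") == "notanhid: x"

test_strip_hid_prefix()
```

The point isn't this particular helper — it's that a test this small, shipped with the change, is exactly what the working group wants to make easy for contributors.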
So I suppose one of the overarching goals for us as a team is to simplify the process of writing simple tests for contributors, so that we don't have to write all the tests ourselves, and to encourage best practices like writing unit tests together with any addition to the code base — while at the same time simplifying and improving the testing infrastructure, which is something individual contributors usually can't really take care of.

Actually, correcting what I said earlier: we didn't have deployment tests before — we weren't running tests against Main — and the things we've added this release are still pretty experimental and need real polish. But that is another place where things would break down: you add in nginx, you add in Celery, you add in these pieces that might be missing or might be misconfigured. So I think deployment tests are another place where, hopefully over the next year, we start thinking about how a component would perform on Main, or on EU, or on AU, and whether we could write tests geared towards what things will look like behind nginx or similar. There are some key pieces of the code — things exposed in the UI and in the API — where that will come into play, and that will hopefully also improve.

Database-related test code is also a constant challenge, and a thing we're trying to improve this time. Whenever test code needs to talk to the database, that's a challenge. The way it currently works in our integration tests is that we load up one database and throw stuff at it.
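Schematically, that single shared test database — and the transaction-rollback trick that can keep tests isolated without rebuilding it each time — looks something like this. This is a sketch using stdlib sqlite3 purely for illustration; Galaxy's testing infrastructure actually goes through SQLAlchemy:

```python
import sqlite3
from contextlib import contextmanager

# One shared database for the whole test session (expensive to rebuild).
conn = sqlite3.connect(":memory:")
conn.isolation_level = None  # manage transactions manually
conn.execute("CREATE TABLE history (id INTEGER PRIMARY KEY, name TEXT)")

@contextmanager
def isolated_test(connection):
    """Run a test inside a transaction that is always rolled back,
    so one test's writes can never leak into the next test."""
    connection.execute("BEGIN")
    try:
        yield connection
    finally:
        connection.execute("ROLLBACK")

# "Test 1" writes rows and sees them...
with isolated_test(conn) as c:
    c.execute("INSERT INTO history (name) VALUES ('test 1 data')")
    assert c.execute("SELECT COUNT(*) FROM history").fetchone()[0] == 1

# ...but after the rollback, "test 2" starts from a clean table again.
count = conn.execute("SELECT COUNT(*) FROM history").fetchone()[0]
print(count)  # 0
```

The appeal is that the rollback costs almost nothing compared to dropping and recreating the schema per test.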
The downside of that shared database is that the tests are not quite independent from one another, so a failure in one test, or some weird thing happening in the database as a result of one test, can affect the next test. At the same time, tearing down the database and putting it back up for each test is tremendously expensive — that would turn hours of testing into tens of hours. So this is something we're working on. Also, migration testing: we have plenty of migration tests, but they do not check what happens to Galaxy when you add a migration, and that has been the cause of at least one bug. So it's in our plan to add testing infrastructure for that, which will run automatically whenever the database schema changes. Great — in the interest of time we should probably advance to the next topic, but excellent work. Thanks, John.

Up next we have the workflows and tools working group. Yep. All right, so we have a few things that we've implemented over the last term. Conditional workflow step execution: you're able to have conditional inputs, where input files are not required but can alter the workflow run if they are included, as well as JavaScript expressions for the API, which allow you to manually write in those boolean inputs. We'll get more into that in our future goals for GCC — one of them is to expand it. We have improved reporting for workflow scheduling errors and warnings: it's now a whole lot more granular, so you can see not only that the workflow failed, but when it failed, why it failed, what step it's failing on, and what the specific issues are, with error codes that are much more visible to the user. And there's a reactive workflow editor with clear state flow.
It's just a lot more visible what the problems are as you're building a workflow, so the user is more able to react and not hit issues after the fact. Reusable IWC GitHub workflows: significant enhancements to the IWC infrastructure, such that testing of new workflows is much simpler and much more extensible, so that people who are not directly involved in the IWC can use that infrastructure on their own workflows. They don't have to submit everything to the IWC — they can run the same testing framework without having to go through us every time. We're also able to export RO-Crates (Research Object Crates), so that people can export the entirety of a workflow and its outputs, along with all of the metadata and input data, together in a single object. It can be more easily distributed to other people after the fact, and it makes it a whole lot easier for people to understand the whole situation under which the workflow was originally run. Next slide, please. For GCC: a standalone workflow graph view, so that the user does not have to go into the workflow editor, zoom out, and try to take in the bigger picture there any time you need to put a workflow in a presentation. This is not just for workflows that have been exported from Galaxy but also for static pages and progress views, so that anyone can understand a workflow much more easily.
In terms of expressions, we're looking to expand the boolean function that was added for the VGP, which is obviously usable by many other groups. We're looking to have more complex expressions, so that we can check metadata and chain multiple conditionals together, making workflows much more extensible — so that you can theoretically have one workflow that runs for a variety of versions, as opposed to having to distribute several versions of a workflow. Executable workflow editor tours: this is similar to the tours we have on major servers right now. The workflow editor being so dynamic has been a bit of an issue for tours, and we're looking to get them functioning in that system. Improved subworkflow maintenance user story: currently, if you have a subworkflow that is part of a larger workflow, any time you edit that subworkflow or distribute the larger workflow, the subworkflows are not attached. That can be a bit of an issue if you are making fixes or changes to the subworkflow that need to be reflected across several workflows at the same time, so we are looking into linking the original subworkflows to the larger workflows, so that a user doesn't have to edit every workflow, or every instance, in which the subworkflow occurs. We're also looking to change the execution of all workflow and tool tests to using Pulsar by default — it's just much simpler if it's all using a single framework, and it'll allow us to harden Pulsar support. And improved support for the job caching framework: currently, a lot of things on the backend rely on, for example, the HID, which can be a bit of an issue when you rename files or use a new history where the HIDs are not standard.
That can cause issues, and we're looking to make it a lot more supported and hardened so those issues don't arise. Porting and documenting more workflow APIs to FastAPI — again, a better framework and a more standard schema. We also want to come up with a more standardized schema for job and test definitions, to make it easier for people to write these tests, so it's not just a couple of people writing them but something available for everybody to use. On the tools part of the working group: we're looking to make further enhancements to existing tools, and to add new tools to suites — for single-cell tools we're currently working with the bacon lab out of the UK, and there's work ongoing on the fate tool suite, for example. We're looking to continue supporting projects as we have been, and we're looking at including more T2T pipeline tools.

Thanks, Alex. One question about the T2T pipeline tools — these are in WDL, aren't they? I believe so; Mike would probably be better placed to answer that. Yes. But WDLs are by definition simply a pipeline mechanism, so, though it'd be difficult, we could theoretically implement those in the Galaxy tool framework. Okay. And my question about tests: the problem we're having with the VGP workflows is the test data, so I don't see why we can't just run them against some big instances — is that what you're trying to do with the external test support? That is a big part of it, yeah. Thanks. I'm really curious about those JavaScript expressions — that must be wonderful. I was just going to say I very much agree: once those are implemented, it's going to be really useful.
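Conceptually, those step-level expressions gate execution the way this toy evaluator does. Galaxy evaluates real JavaScript-like expressions against the invocation inputs; this Python stand-in, with made-up input names, only illustrates the control flow:

```python
def should_run_step(when_expr: str, inputs: dict) -> bool:
    """Return True if a conditional step should execute for these inputs.

    eval() here is only for demonstration; a real implementation would use
    a proper sandboxed expression parser, not Python's eval.
    """
    # Only the invocation inputs are visible to the expression.
    return bool(eval(when_expr, {"__builtins__": {}}, dict(inputs)))

inputs = {"optional_bam": None, "run_qc": True}
# Optional input absent -> the dependent step is skipped.
print(should_run_step("optional_bam is not None", inputs))  # False
# Plain boolean input -> the step runs.
print(should_run_step("run_qc", inputs))                    # True
```

Chaining conditions ("check metadata and combine multiple conditionals") then just means richer expressions over the same input namespace.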
So, obviously there's a lot of really cool work here, like the conditional workflow execution, and I'm glad we're taking a look at subworkflows and how to upgrade them. Has a lot of thought been put into how to do this with versioning? You can have one subworkflow that you now use in multiple places, and you go in and edit it; maybe you want to update all of the workflows that use it to the new version, or you don't want to update all of them. For the ones you don't update, when you go back, you want the old version — and then maybe you do want to update it, but slightly differently. So now you have some sort of branched versioning, and how do you then go back and access that through the editor interface?

We had this exact discussion last time. I believe there are two things to deal with that situation. First is manual linking of the subworkflows, so that it doesn't automatically update — you have to say "I want this to auto-update from this workflow." And second is workflow versioning: because a subworkflow runs as a tool within the pipeline, if you can select a version of it and pin it, that would also work. Yeah, I think it sounds really great, but when we have workflow versions right now, that's just one-to-one-to-one, right? You can imagine wanting to go back later and start from the original again for a different workflow — now you have branches, and I don't know how that would be supported by the model. Obviously it's possible, but it gets complicated, I guess. Yeah — I would say we have a short-term, a medium-term, and a long-term solution there. And we've discussed them.
For the long term, there's an open issue that I created that workflow versions should be viewed as trees instead of linear sequences, I think. But in terms of a shorter-term thing: just being able, once you upgrade a subworkflow, to see everything that's using it, go through your workflows, and upgrade them all — I think that would be a huge maintenance enhancement for something like the VGP, where you have a collection of workflows sharing a subworkflow and you want to upgrade them all at once. That short-term piece is the improved subworkflow maintenance user story here, I think. The medium term is somewhere in between: if you're using these workflows across different servers, when you import a workflow you should be able to remap subworkflows — and part of that would be saying "this lineage no longer matches this workflow; I'm going to take the subworkflow in a different direction here," so you should be able to clone it and go from there. That's not possible in the UI today, but there's a wide variety of things we need to do to address this, and I think there are open issues for most of them. It is a conversation we've been having.

That's great to hear. And then from an annotation point of view: now that versions of workflows will become more important, how do you deal with annotating the difference between versions? Are there change logs between the individual branches where the workflows change? Could that be automated? Because just having some hash or whatever isn't very descriptive.
You're absolutely correct, and again, there's a whole series of conversations we've had about that. Part of it is the refactoring API and trying to get everything to go through a specific API for updating workflows. In the past, what we did was say: okay, here's a whole new copy of the workflow; it's a new thing in the lineage. But with the refactoring API that we added a couple of years ago, and using it more and more, the idea is that each change you make to a workflow should be an action, and we record those actions and what parameters they had. The whole point of that is to be able to trace things like that and generate change logs. There's still a lot of work to be done there, but I think it's the direction we're definitely heading in. You're absolutely right that the hash is not useful — and neither is what we use right now, which is just a linear set of numbers.

Yeah, that sounds great; I'm glad you're thinking about all this. Another thing I forgot to mention: at the very least, notifications on update for subworkflows, so you know whether updating a subworkflow removed connections, for example. If something is tracked and still available to be connected, it can stay connected, but if it's not, then you're aware that something has changed in the downstream workflows that might cause a problem — which the editor now supports; in the past we would just drop links and such. Marius's work is amazing this release. Awesome. So we have a lot of new features here — I'm wondering how we can best communicate those to users and workflow developers.
Marius is preparing a tutorial on the new version of the workflow editor, and we're planning to have a workflow development training at GCC as well to go over all these features. Awesome. And the Planemo paper is coming out this month — I think it's time to write a workflow paper, maybe, because that's another way to do it. The other thing is, Marius just has a ton of energy; he talks to everyone all the time, and he'll explain everything to everybody individually. Just hand out his phone number. Yes, Björn, we need to go to conferences.

That's it for the workflows group; up next we have the GOATs. Okay. Hi everyone. One of the main topics we're working on right now is the organization of Smörgåsbord 2023, which is planned for the end of May. As you see on the right, we opened the registration not long ago and it's already exceeding the numbers from the previous years — in blue is this year's registrations. We're organizing it this time as a set of modules: a group of people each takes a module, and there can be more than one lead per module. What we're doing is going over all the tutorials of each module that have videos. We have a table keeping track of who's reviewing what and what state each tutorial is in, and the goal is to verify that the tutorial is up to date with the tools and the technology available, verify that the video is also up to date with the different Galaxy updates, contact the authors and speakers of the videos and tutorials if modifications are needed, and help them update all of this material for the training event. The goal is also to reach out to potential trainers for tutorials that could be interesting to include in the event but don't have a video yet. We still need a module lead for proteomics and have someone in mind to ask, but all the other modules have leads. So it's in the works. Next slide, please.
The next big event, of course, is GCC 2023. Registration has been open since February, we have a dedicated page on the Hub for it, and we're using the Eventbrite portal for registration handling. Abstract submission is planned to open on February 24, and we have a COVID policy that is linked to the policy of the university hosting the conference. Next slide, please. For the training part: sessions will be 2.5 hours, we have five rooms available — so five parallel tracks — and three days of training planned. The current working schedule has admin, dev, genomics, and two miscellaneous tracks. Next slide, please. Asunta is recruiting trainers, especially trainers that are local, in case there's trouble with travel for the people currently assigned to each session. So if you can and want to host a training session, or if you have questions regarding the training, please contact Asunta. Also if you have any trainers for ecology — if we don't find trainers for that session, we will swap the topic for another one. Next slide, please.

Finally, the big project in the works is the Galaxy Mentorship Network. It's the one-year anniversary of this program, and results have been mixed. One of the difficulties is that we receive applications at any time; we've been publishing them on HackMD.io, or trying to publish them directly, but it's hard to keep mentors engaged enough to read all of them and vote on which they're interested in, and so we have trouble connecting mentees and mentors. So we want to reorganize the way we're doing things.
While applications will keep being accepted at any time, every two months we'll process all the applications from the last two months: go through them, select the ones that seem to have a relevant project for this program, publish them to the mentors, and ask the mentors to vote on which they're interested in. If we still have unmatched applications by the end of that week, we'll try to contact mentors we think might be appropriate for those mentees. Once we have a concrete new organization, we will contact the mentors and ask if they agree with the plan and whether they have suggestions. Once we have their agreement, we'll ask mentees that already applied whether they're still interested and have them reapply, and start with the new process. To explain all of that, we will publish a blog post on the Hub — a post-mortem of this past year, what went wrong, and what we're going to change to make it more efficient. And that's most of the program for the past and next quarter for the GOATs group.

On the Smörgåsbord thing: what channels are you using to advertise this? That big spike in registrations a few days ago is quite impressive. I'm not sure — I'll have to ask Elena; she's keeping track of the numbers. But is it all from Twitter, or personal connections, or past registrants? Do you know who's signed up? I don't know. I think we have a lot of past participants, and I think they advertised on the .eu site as well, but I don't know — I'll ask. Why don't we advertise it on .org? Yes — can we make sure that's going to happen? Who's a good person to ask for that? Well, if you have the graphics and the text, then it's straightforward to add; we can always talk about it. Or if you just want it on the welcome page, it's a Hub edit. Yeah, but you'd want it —
You wouldn't want to displace the GCC 2023 stuff — below that, I guess. We can also alternate; it doesn't have to be static. We can put up a week of Smörgåsbord, then a week of GCC: "GCC registration open" has been up for a few days, so maybe next week we put up Smörgåsbord, then later we go back with the GCC abstracts opening, and then again Smörgåsbord, so the content's not as static. Also, on the middle panes of the main page we can downsize the Ukraine banner — I'll work on that with Björn; it can be smaller now, so we have more space. Though, given it's coming up on the one-year mark, should we maybe make it bigger for a few days to bring more attention to it? I don't know if it's an anniversary to mark that way.

How long is GCC registration open? Just thinking about advertising — we'd probably want to do that at Smörgåsbord somehow, since you were asking about places to advertise. It opens Monday, and it's open throughout, from now until the conference starts. We're in early registration now; I forget the exact dates, but around early May we go to regular pricing, and mid-June is late registration. The abstracts close April 2, and we need to advertise that ASAP, early on, so people have a month or so to prepare an abstract — it's not enough to do it two days before. Yeah. On the earlier question: we also advertised in our local research networks. And I discussed with Helena — I'm not sure if she did it already — but in TIaaS, when people register a training event there's a checkbox asking if we can contact them, so we can keep their email address. I discussed with Helena sending the announcement to the people who have used TIaaS in the past. If you do something similar on .org or on the Australian TIaaS, you could consider that too.
Any other comments for the good group? If not: next, the sysadmin group. Can you hear me? Yes. So, the EU server has been testing Celery in production, figuring out the best way to run it and what some of the pitfalls are. There's an in-person admin workshop in the planning stages for April; some interest has been expressed, and it looks like it will be in Europe, probably Belgium, but that's still at the planning stage and nothing is nailed down yet as far as I know. The Australian server has of course been running TPV for a year plus now; we started using it in October and have been expanding our deployment, and usegalaxy.org is finally running it in production as well, with just a couple of tools so far. I've used it on the test server for more tools for a while, but we are finally running it in production on usegalaxy.org and on a couple of the Penn State instances that I help manage. That's all going very well; a lot of stuff is being enabled by this, which is really great. EU had disabled GPU notebooks for a while and has now re-enabled them, although you have to request specific access, for fairly obvious reasons; they're not just open to the world. The group very much wants to support the work of the IDC, especially with John's work on the data bundles. The stumbling block for the IDC so far has been that there are some compute-intensive index-building steps that are not a good fit for GitHub Actions, simply because there isn't enough compute time or resources there. So what John worked on is the ability to run data managers on the public servers, bundle up their outputs, unpack them wherever we want, and then share that out to the world via the IDC.
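The bundle idea described above (run a data manager somewhere with real compute, package its outputs, unpack them on another server) can be sketched as a simple pack/unpack pair. Everything here, the file layout, the `bundle.json` name, and the metadata fields, is invented for illustration; it is not Galaxy's actual bundle format.

```python
# Hypothetical sketch of the "data bundle" idea: tar up a data manager's
# output directory together with a small metadata record, so the built
# indices can be unpacked on any other server. Layout and names invented.
import json
import tarfile
from pathlib import Path

def bundle(output_dir: Path, metadata: dict, bundle_path: Path) -> Path:
    """Pack data-manager outputs plus a metadata record into one tarball."""
    (output_dir / "bundle.json").write_text(json.dumps(metadata))
    with tarfile.open(bundle_path, "w:gz") as tar:
        tar.add(output_dir, arcname="bundle")
    return bundle_path

def unpack(bundle_path: Path, dest: Path) -> dict:
    """Unpack a bundle elsewhere and return its metadata record."""
    with tarfile.open(bundle_path, "r:gz") as tar:
        tar.extractall(dest)
    return json.loads((dest / "bundle" / "bundle.json").read_text())
```

The point of the tarball-plus-metadata shape is that the expensive compute happens once, and consumers only need to download and extract.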
Going forward, we want to do more to collect our "annoyances", as we're calling them: the things we spend a lot of time working around when deploying Galaxy that are really more of a Galaxy problem and should be fixed upstream rather than in the deployment. But as you can see, our uptime has been really good for the main public servers. If you had seen this graph before, it was not always so great, so I think this is a big accomplishment, not just for the people working on the infrastructure but for everyone developing Galaxy; it's really become a great service. That's it for us. Is it possible to enable GPUs for notebooks? We can; we'd want to do a similar kind of thing, limiting who has access to it, but we have GPU instances on Jetstream that we can get access to, and we could run them there for sure. Maybe this is more of a backend question: where do we stand with Pulsar and remote execution and remote data, and has that been propagated to these systems? So, I don't know the current status, and I don't think there's anybody from Australia on the call, but they run the overwhelming majority of their jobs on Pulsar. And interestingly, I think 40% of our multi-core jobs on usegalaxy.org went through Pulsar in previous years, and that's increasing as well. There are currently no major issues; some things are still being worked on, but at this point I would say Pulsar is pretty mature, our ability to run jobs remotely is pretty mature, and TPV is the key to utilizing more and more Pulsar resources going forward. And then remote data, maybe this is again more of a backend thing, but what about remote data? Yeah, I think that's more of a backend question currently.
I don't know, John, are you going to talk about that at all in the backend update? Remote data means so many different things to so many different people that I could give you an answer that it's either impossible or completely possible today; I'd want to sit down and talk about the specific use case. We could probably move that discussion to the backend section, but there have been advancements for sure. On the deployment side, EU is running extended metadata. Extended metadata is the concept that we can keep the data tools are producing off of the Galaxy server, by doing all the work that used to happen on the Galaxy server after a job finished as part of the job itself, on the compute. We've made tremendous progress over the last six months in that direction; on the development side, Pulsar supports more ways of doing that and more kinds of configurations, like if you're running containers and how that's set up, and we support GA4GH TES now as well. And then my understanding is that EU is exploring deploying a lot of those, so I think we're making good progress. And, you know, this is outstanding uptime. I'm wondering, and you already shared credit with basically the whole team, but have you noticed any key advances that have improved the uptime? And the flip side: there have been a handful of off days, so are there any things we should be looking at to push it even further, or to make things even more robust?
Yeah, I think the testing has been a big part of this, and certainly what John D was talking about earlier, figuring out when database issues are going to be a problem, has been a big key in the past. A lot of work has been put into making sure that our queries and everything else scale up to the level of a large production server, and the more of that that's done in the future, which the testing group talked about quite a bit, the better. It was a little rocky for a time after the gunicorn switch, as we tried to figure out how best to do zero-downtime restarts, how many workers we could run, and how much memory they would consume, which was causing some crashes for a while. All of that got worked out over the tenure of the 22.05 release, and now with 23.0 things are in a pretty good state, I think. Next we have the backend working group. So, transitioning quite nicely from this conversation about remote data: one of the key things we've done, and this PR hasn't been merged yet, but it's approved, it's green, and it's ready to go, is a huge PR that revamps a lot of the object store abstractions. This isn't just a backend thing: it's in the UI, it's in the API, it's every place. When configured, you can have different object stores; it allows user selection of them, allows object stores to be private, allows multiple different kinds of quotas, and lets admins configure a bunch of information badges. There's a whole visual language for communicating information about an object store's configuration in a very user-friendly way.
All of that is in there, and I think it's an important stepping stone for being able to do things like bring-your-own-storage in Galaxy. I might argue that Galaxy has been able to support that for a long time, but this set of abstractions and these UI elements make it much more tenable to have a nice user story around it. That's the next step, I mean, after remote data; what is remote data and how, and I think that PR is going to tackle a lot of that. In addition to that, we did the workflow conditionals that we talked a lot about; that was my work, it's amazing. We did the data bundles that Nate explained, and explained why they're important. David did a great job with the RO-Crate invocation imports and exports in the latest release of Galaxy. We've done a bunch of Pulsar maintenance: a bunch of the upgrades I mentioned for handling remote data better, and those pain points Nate had mentioned, I think we're slowly solving them. Sometimes we need a little prodding, but I'm impressed with Pulsar's ongoing development and maintenance; it's great to hear that so many servers are running it and it's working so well. I added a nice piece of documentation about running containers in Pulsar that looks pretty cool. I've added GA4GH DRS support, which is another piece of that remote data picture. A lot of people are expressing remote data, or GA4GH is pushing people towards expressing it, as DRS URIs. I don't know whether they're actually useful or used yet, but we definitely have support: I put in some initial support, and then we just merged a PR from Nuwan this week that was amazing.
It was such a great hardening of that. DRS is a pretty open spec and there are a lot of different ways to implement it, and Nuwan did a great job of making our support a lot more useful, and a lot better tested, which is just great. We have a feature-complete prototype for a next-generation tool shed. It's not feature parity; there's a bunch of stuff we don't want to do, because the tool shed does too much, so instead we have a couple of key user stories, and I opened a PR for that. And then W. did a great job with remote test data for tools; again, coming back to the big-test-data question we had earlier, this is support heading in that direction. We've also made really great, steady progress on a lot of our long-term goals. During the last hackathon we did a bunch of work with Python typing and FastAPI, so that going async becomes easier. I think a lot of the backend will be improved as we get a more structured way to think about how tools hold their state, and that's been an ongoing project; it feeds into CWL. And John D keeps making tremendous progress on our data modeling and SQLAlchemy 2.0. Can we do the next slide? So that's what we've done; now the planning, what we plan to have done before GCC 2023. This was mentioned in the workflow talk, but: improved support for job caching. I think this is Marius's big push for the next couple of months, so I'm sure whatever he produces is going to be amazing, because it always is. It's going to help training, it's going to help the VGP, it's going to help workflow developers; it's going to help everybody.
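Galaxy's real job cache matches on job records in the database, but the core idea behind job caching can be sketched independently: derive a stable key from the tool, its version, and a canonical view of the inputs, and reuse earlier outputs when the key matches. All names below are invented for illustration; this is not Galaxy's actual mechanism.

```python
# Hypothetical sketch of job caching: key completed jobs by tool id, tool
# version, and a canonical hash of the inputs, and reuse outputs on a hit.
import hashlib
import json

def job_cache_key(tool_id: str, tool_version: str, inputs: dict) -> str:
    # sort_keys makes the serialization canonical, so logically identical
    # input dicts hash identically regardless of insertion order
    canonical = json.dumps(inputs, sort_keys=True, separators=(",", ":"))
    payload = f"{tool_id}/{tool_version}/{canonical}"
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()

cache: dict[str, list] = {}

def run_or_reuse(tool_id, tool_version, inputs, run):
    """Run the job only on a cache miss; otherwise reuse recorded outputs."""
    key = job_cache_key(tool_id, tool_version, inputs)
    if key not in cache:
        cache[key] = run(inputs)  # cache miss: actually execute
    return cache[key]             # cache hit: reuse earlier outputs
```

The canonicalization step is the subtle part in practice: two submissions only count as "the same job" if their parameters serialize identically.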
SQLAlchemy 2.0: we'd like to make the switch pretty soon here, and then start working on this other project of removing objects from the database, cleaning up the database in a more permanent way for performance reasons. We'd also like to merge in and harden this tool shed replacement that's open. And then, coming back to user-defined object stores and remote data, I know that's a big push on the roadmap: identifying the limitations. I keep coming back to user stories because, again, I can either say it's completely doable and has been doable for years, or it's completely impossible, depending on what the user story is. So: scoping out a couple of those, and addressing the pain points and the limitations. That's going to be stuff like tying our private key store to the object store, serializing that as part of jobs, and working through those details. But I think we're in a really good spot; we've made a tremendous amount of progress and I expect that to continue. And then developing a plan to get Celery onto more of our usegalaxy.* resources, which Nate had mentioned. I think a lot of the work we have to do is with other working groups. This object store selection PR that I started by talking about is great, but if no servers adopt it, it's not necessarily a useful thing. One of the key first use cases is that it makes scratch storage really quite easy: you just define two stores, one of which you clean every two weeks, and let users pick the scratch store. It would also be good to think about whether iRODS testing could be done this way.
You'd go into the UI, into user preferences, and say "I want to use iRODS", then run some workflows and see how far you get with it; when it breaks, you switch back, and then tell us and we'll work on fixing those bugs. Things like that would be really cool, and really easy to do: it's just one XML file after the next release. We continue to work with the sysadmin group on the IDC, getting those data bundles in and allowing community genome annotation, well, that's not the right word, creation of genome indices by the community. Then working with the workflows group; we've talked about all of these already, but the remote resources and GitHub Actions. Then working with the sysadmin group to get the new tool shed deployed once it's merged, and with the UI/UX group on developing APIs for installing visualizations. Out past GCC: making more progress towards secure user-defined object stores, and this "modernizing Galaxy tool state" project that I think is going to revolutionize everything about the backend, in terms of better tool execution and workflow extraction; it's going to make this tool shed better and more useful, it's going to make tooling around tools better, and it's going to allow CWL. And that's the backend. Any questions? Well, it's just amazing progress across the board. I guess at the top of the list: I think you said the object store abstractions are nearing the finish line. When, and how, can we best learn about those? Well, the PR is linked. The PR has screenshots, and the screenshots are a little old, but all the concepts are the same.
Hopefully we merge it today, or this week, and then the UI team takes a machete to it and cleans it up, and a month from now it's in a really good state; then it can just be part of the next release. Hopefully we can work with Nate, or whoever, ahead of the release to have a plan, because I think scratch storage is the really cool thing that would work right out of the box. It's such a good question, because I don't think about these things, right: if as part of the next release process we implement scratch storage from day one, that would be really great. One of the features in there is that, after the PR, if you have different object stores set up, like a main object store and a scratch object store, then when you're running a workflow, and this is implemented today, it's in the UI, all the pieces are there, you can send your outputs to your regular object store and send all the intermediate data to your scratch object store. This has been a big issue in Galaxy for a long time, and it could just be something you can do with the next release. Hopefully we can get this on main relatively quickly so people can see it. But it sounds like things are on track to have it available before GCC, even. Yeah, absolutely. It's something I presented at the last GCC, so if you find my talk from the last GCC, I did say "this is done". And then it stagnated for several months, because there were some corner cases about how that workflow differentiation was working that I didn't have the spoons for, and I got distracted by a million other things. But it was important, and I should have just come home from GCC and done it.
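The routing just described (final workflow outputs to the primary store, intermediates to scratch) boils down to a small decision per step output. The sketch below is a hypothetical illustration of that rule; the step and output records are invented stand-ins, not Galaxy's model objects.

```python
# Hypothetical sketch of per-invocation object store routing: outputs that
# are declared workflow outputs go to the primary store, everything else
# (intermediate data) goes to the scratch store.
def assign_object_stores(steps, workflow_outputs, primary="default", scratch="scratch"):
    """Map every (step id, output name) pair to an object store id."""
    assignment = {}
    for step in steps:
        for out in step["outputs"]:
            is_final = (step["id"], out) in workflow_outputs
            assignment[(step["id"], out)] = primary if is_final else scratch
    return assignment
```

The appeal of doing this at invocation time is that the same workflow definition works on servers with or without a scratch store; only the routing table changes.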
It's pretty good now, though. I remember your talk in Minnesota very clearly, so I guess this year I look forward to maybe seeing a live demo, even. Sure, yeah, first time for everything. Dan, did you want to say something? I was just going to say: that kind of feature, where you're asking how people should discover and see these things, would be great for a Hub blog post, that kind of high-level introduction: "this is how you can use it today". Yeah, I started doing that for a lot of the new features coming out of Björn's group and I think it's really nice. There was also an attempt to inject a few more people into this. There's this person, Burak; did it work or not? They came to our last backend group meeting and we gave them a plan; I don't know whether they're working on it, but we gave them a very nice project along similar lines, about workflow scheduling. I haven't seen any PRs around that, but it's only been a week or two. The idea there is: how can you get a workflow to schedule all together? There are lots of different ways to approach that, but we gave an approach that would pair well with the other stuff we're working on, so it's kind of an important project. The idea behind the approach I gave him to work on is: package up the whole workflow, and here we mention RO-Crates, using all of that work; ship it to a remote Galaxy server; run it on what would be, to that Galaxy server, local resources, but maybe it's configured with AWS Batch, so you don't need Pulsar or remote storage or anything, because it's all just running on that Kubernetes cluster or that AWS cluster or whatever; and then just pull the results and all the metadata back at the end.
So it pairs well with everything else we're working on, but it's also kind of independent. It's only been a week or two, so I don't know, but if he doesn't make progress on it, it's something we would probably work on at some point as the backend group, because I think it's important. Can you ping them? They have a tendency to sit on their PRs forever, to the point that they diverge to an unmergeable state. Or I can ping them, but if you gently ping them first, then I can talk to them. Well, I don't think he's opened a PR, that's what I'm saying; I think he's still scoping out the work he wants to do. He probably works on his local fork. Okay, we'll talk offline about this. Thanks. On a different topic, could you say a little more about the DRS support? As you said, that's a ginormous spec and can mean a lot of different things. What is supported now and what will be available in the future? So, both client and server support; I think the client support is probably the more interesting part. The idea is that when you're uploading to Galaxy, you can give it URLs, and when you upload a URL to Galaxy you can either say pull the data down now, or pull it down as needed; pulling the data down as needed is again something someone might think of as remote data. So someone uploads a collection of DRS URIs, maybe they put them in a collection or maybe it's an individual dataset, but anywhere you can paste a URL into the client, so in the rule builder, the basic upload form, or the collection creation upload form, those can be DRS URIs currently. And that could talk to, say, a Gen3 or Terra data repo, presumably. Okay. I think someone still needs to sit down and work through how those are resolved, and make sure the tokens, the security stuff, are handled right.
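Those pasted DRS URIs resolve mechanically: per the GA4GH DRS 1.x spec, a hostname-based URI of the form `drs://<host>/<id>` maps onto the DRS HTTP API at that host. The example host below is made up; only the URL layout follows the spec.

```python
# Sketch of resolving a hostname-based GA4GH DRS URI to the DRS "get object"
# endpoint. A client GETs the resulting URL, then follows one of the returned
# access_methods (e.g. an https or s3 access URL) to fetch the actual bytes.
from urllib.parse import urlparse

def drs_to_object_url(drs_uri: str) -> str:
    """Translate drs://<host>/<id> into the DRS objects endpoint URL."""
    parsed = urlparse(drs_uri)
    if parsed.scheme != "drs":
        raise ValueError(f"not a DRS URI: {drs_uri}")
    object_id = parsed.path.lstrip("/")
    if not parsed.netloc or not object_id:
        raise ValueError(f"malformed DRS URI: {drs_uri}")
    return f"https://{parsed.netloc}/ga4gh/drs/v1/objects/{object_id}"
```

Token handling is the open question mentioned above: the endpoint mapping itself is trivial, but the access methods it returns may require per-repository authorization.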
Maybe Nuwan would be an even better person to answer this question, but that's the idea. The server support is a little more obvious: the API can generate DRS URIs for the data in Galaxy, and right now I think it's just public data, because we don't really have a token mechanism or anything. But if you have a public dataset, which a lot of datasets in Galaxy are, I think a lot of users just turn on public access to all their datasets, then Galaxy can produce DRS URIs, and any other resource that consumes DRS can pull down datasets from Galaxy using that API endpoint. Is that automatic, or do you have to publish a dataset with a DRS URI? If the dataset is public, it has a DRS URI. I mean, there's a lot of interest in the NIH clouds in using DRS URIs as sort of the preferred way to reference and transfer data, so being able to offer that now on main and EU and beyond will be valuable if you want to aggregate data across multiple sites. That's an awesome new feature. Yeah; although I haven't seen compelling use cases yet, like a library that uses it to do something fantastic, but maybe I'm missing something. As I was developing this, it felt like there was a lot of talk and not a lot of "there" there, but whenever the "there" is there, I think we'll be there too. It's sort of just developing now, but I think moving forward it'll be important. It's good to be part of the conversation and to have explored the ideas. Yep. And then my last question, and I know I don't really understand all the backstory here, but it sounds like this tool shed 2.0 will be a major revamp.
Maybe you could unpack that a little: what are some of the headline results? Well, there's a lot of politics, but the thing is, the number of things the tool shed attempted to do ten years ago, when we published the first tool shed paper, was tremendous. It attempted to be a source code repository; it attempted to compete with GitHub. It attempted to be a package manager. But we use GitHub now, we use Bioconda now, so we don't want to do any of that stuff, yet we have tens of thousands of lines of code in Mako, in Python backend code, et cetera, that's just bit rot, stuff we don't use. We use the tool shed mostly just through APIs, and mostly just a couple of them; a handful, say ten of the twenty APIs. And there's this vast UI that we don't want to use or support any of; it's become dated and broken, because people don't look at it. So this new tool shed is very lightweight. Everything to do with uploading things to the tool shed has been removed, because people just use Planemo; people use Planemo and GitHub Actions, and that's what we want people to do. The new tool shed has information linking out to resources about how to set those things up, but it doesn't implement any of that functionality; it just expects you to publish things through the APIs. And all the different ways to search the tool shed have been replaced too: you basically just use the Galaxy APIs to find tools.
And then installation is Ephemeris and Ansible; these things didn't exist in the past, but now they're front and center in the tool shed, so if you go to a repository, instead of all the things you could do before, it shows you how you would install it with Ephemeris. So it's very slimmed down. And the old tool shed was Mako, basically Python 2 that had been minimally ported to Python 3, plus some jQuery but not a lot of JavaScript; the new one is a single-page application in Vue, built with Vite, using Material Design via Quasar, so frameworks that are meant for much smaller applications. The idea here is just to have something really small. The problem with publicizing it is that I kind of think of it as a backend thing you don't really need to use; the point is just to have something small that supports the use cases we're currently using. But there is some new stuff there: for instance, it supports the GA4GH TRS API. So in addition to aligning with the fact that the world has moved beyond hosting your own source code and custom package managers, we're also using common APIs now. And I guess, what do you see as the world of tool shed 2.0 versus Dockstore or other services that work with some of those APIs? I can see a need to always keep our own stable home for the APIs we need, but I'm wondering whether there are opportunities to distinguish ourselves, or integrate more tightly, or leverage them in some way; I'm just interested in your thoughts. Um, let me see. So, my rewriting of the tool shed was largely defensive.
I didn't want a project where we switch to something else and everything breaks. So this is a version of the tool shed where, even though all the code has been replaced at every layer, it's, well, not perfectly, but it's well tested and backward compatible. Now, if someone does have some exciting vision for how one would use a tool shed 2.0, they can do that in Vue and FastAPI, with typing and with TRS; it's a setup for someone to have great ideas. But it isn't itself bringing any cool ideas to the table. It's meant as a piece of backend infrastructure that we mostly don't want people to touch, and if they do touch it, it shouldn't be completely broken and ugly like the current tool shed. It looks better and more modern, and it's much easier to develop against, presumably. But yeah, it doesn't have any big ideas; I guess the big idea is just to be super stable, modern, and reliable. One could imagine more, though, and I would love us to do this (my laptop is about to die): aggregating how tools are used, linking out to workflows, to training materials, to test results on the public servers. The direction I would take it is data integration, APIs to support that, and greater TRS support. And I have no issue with Dockstore hosting Galaxy tools; I think Marius is really interested in that project, but I also don't think their APIs right now would support the way we use tools. So it would be a checkmark rather than a functional thing we could use. Got it. Marius and I are actually thinking about talking to Dockstore and perhaps writing a grant together.
But in order to do that, we basically need to flesh out what the main points are, so I think that's the right discussion, and I think they'll be interested in working with us. Yeah, and we have a great relationship with them, so it would be great to see what that looks like. And this is designed to coexist with that, because we're now implementing a similar API; it's not meant to compete, just to do what we're currently doing a little better. Thank you very much, John. I think our last group is UI/UX. All right, I hope I make it through this without a coughing fit; I managed to pick up whatever my kids had earlier this week, so I feel terrible. All right, so: stuff we got done. The brand new workflow editor; I don't need to go into that, since we've talked about it twice already, and it's amazing. We have the related datasets filter: in a history, you can click on a dataset and show the inputs and outputs related to it. We wanted to explore this as a precursor to navigating a history as a graph; I'll talk more about that later. The multi-view history has been completely rewritten, and it's working really nicely; it's all in modern Vue. We talked about the RO-Crate export UI: a single export UI for multiple types of exports. Tool search got a complete overhaul, so now in the tool panel, when you type, you immediately filter on the client side based on your input. It's fast filtering without even hitting the server to find tools, and this is the most common case of tool searching: you type "random" for "random lines", or something like that, and it comes up immediately without even talking to the server. That's nice, but then you can also click through to the advanced search.
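That client-side filtering is essentially substring matching with a simple ranking. The real implementation lives in the TypeScript client; the Python sketch below, with invented tool records and an invented ranking rule (name hits before description hits), only illustrates the idea.

```python
# Illustrative sketch of client-side tool-panel filtering: substring match
# over name and description, with name matches ranked ahead of description
# matches. Ranking rule and record shape are invented for this example.
def filter_tools(query: str, tools: list[dict]) -> list[dict]:
    q = query.lower()
    scored = []
    for tool in tools:
        if q in tool["name"].lower():
            scored.append((0, tool))   # name matches rank first
        elif q in tool.get("description", "").lower():
            scored.append((1, tool))   # description matches follow
    # sort by (rank, name); the key avoids ever comparing dicts directly
    return [t for _, t in sorted(scored, key=lambda s: (s[0], s[1]["name"]))]
```

Because the whole tool list is already in the browser, this kind of filter returns results on every keystroke with no server round-trip, which is exactly the latency win described above.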
The advanced search takes over the middle panel and shows rich results: it shows more about the tools you're finding and lets you search on more fields. The left side pops down so you can see more options; it's really good. The tool form itself saw a bunch of usability improvements: it looks nicer, and it's more usable because things are where you expect them to be in more places. There's a PR open right now that integrates GTN tutorials, showing, say: okay, I'm trying to use Bowtie; here's a list of tutorials that use this tool if you want more information, so you can click through to the GTN and see those in action. The tabular dataset display was completely rewritten, so it's not Backbone anymore and there's no Mako for it; it's all Vue. It was kind of an exploration for a next generation of visualizations, and the first step of that is done, I guess; we still need to figure out the packaging and integration, but we made a bunch of progress there. The interactive tool display issues are hopefully mostly resolved: they're nice now, you can actually see an interactive tool running in the history and know its status, and you can click on the eyeball and actually get the interactive tool, instead of something that isn't a dataset view. There's also a store there exposing that status that we can leverage in other contexts, which I'll talk about under "moving forward". There was a huge focus on accessibility testing: a bunch of community members contributed feedback, their organizations ran reports, we ran reports, and we addressed a ton of small things. The tags editor got a complete rewrite, so that's all tab-navigable now, and the history is keyboard navigable, so if you're visually impaired you can navigate by sound alone. We made a lot of progress there, and we also added accessibility linting.
The linting is only for Vue components, but all Vue components now run a recommended set of accessibility checks against what you're writing and say, hey, you need an ARIA attribute here, or hey, this is an inappropriate use of an ARIA attribute, that kind of thing. That will continue to get better over time so we won't backslide. Client modernization in general made a ton of progress. We're now on Vue 2.7, and all new components are written in the Composition API, mostly using script setup. We swapped from Vuex to Pinia, so all the new stores are in Pinia. There are a couple of remaining stores in Vuex, but those will get converted along the way. We also moved from the nested-provider approach to composables in the Composition API, and those are working really well; they're a pleasure to use in comparison to the old stuff. And the client is swapping over to TypeScript, so whenever folks are writing new code: if you can, use TypeScript and the Composition API, and it's going to be nice modern code. It's much more fun to work with, personally, and it's better code in the long run. That's a nice change. At the top level of the application there's one top-level Vue app that uses Vue Router throughout for routing, so there's no more Backbone routing. We also now have a schema: there's a TypeScript client for the Galaxy API that's auto-generated from what FastAPI presents. We auto-generate a schema, and then you can just use that in the client: you import the schema and say, I need to touch these types of objects, here are your options. Since it's TypeScript, in your editor you get really nice type-aware tab completion; you can see what properties a method takes. It's amazing.
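The kind of compile-time checking a schema-derived client gives you can be illustrated with a small typed path helper. This is not the real generated client, just a sketch of the idea: the parameter object's keys are derived from the path template at the type level, so a missing or misspelled path parameter becomes a type error. All names and the example route are illustrative.

```typescript
// Sketch of the schema-derived-types idea (not Galaxy's actual
// generated client): the compiler checks that you supply exactly
// the path parameters a route template declares.
type PathParams<P extends string> =
  P extends `${string}{${infer Param}}${infer Rest}`
    ? { [K in Param | keyof PathParams<Rest>]: string }
    : {};

function buildUrl<P extends string>(path: P, params: PathParams<P>): string {
  // Replace each {name} placeholder with its (URL-encoded) value.
  return path.replace(/\{(\w+)\}/g, (_, name: string) =>
    encodeURIComponent((params as Record<string, string>)[name])
  );
}

const url = buildUrl("/api/histories/{history_id}/contents/{id}", {
  history_id: "f2db41e1fa331b3e", // hypothetical encoded ids
  id: "abc123",
});
console.log(url); // /api/histories/f2db41e1fa331b3e/contents/abc123
```

In the real client the whole request and response shapes, not just path parameters, come from the OpenAPI schema FastAPI publishes, which is what powers the editor tab completion mentioned above.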
I think that's going to be a game changer for how we use the API. You can kind of think about it like BioBlend: the reason we wrote BioBlend was to have a nice set of objects to work with, and now we have the same thing in the client in JavaScript. And we have a pre-built production client published. It is opt-in for now; you have to set a flag, because the expectation is that you would want to do this intentionally. For example, if you're using Planemo test, which spins up a server, it doesn't need to build a client; you could just install and use the client. Or if you have a playbook that runs a server that never diverges from our production standard, you could just install and use the client that we have published. So yeah, that's done. It's opt-in for now, and for now it will get updated manually at point releases, but it's on npm. Next slide. So, the stuff we still have to do. We have a graph view of workflow invocations; this got touched on earlier, but basically we want to take the beautiful new workflow editor view that we have and reuse it to show an invocation as it's running. So when you click on an invocation, instead of getting a box with like five out of fifteen steps or whatever, an option would be: show me this the way it looks in the workflow editor, with the state at each box, and you could watch a workflow run, or inspect a past run, in a graph display. That will get done for GCC. The notification framework: we had an Outreachy student last summer take a really good start at this. Both admins and users have asked for it for a long time: having a framework in Galaxy where a user could request a tool installation and it happens through Galaxy, or an admin could say, hey, there's going to be a downtime, whatever, broadcast notifications.
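A store backing a notification framework like this might take roughly the following shape. The real thing would presumably be a Pinia setup store (`defineStore` with refs and computeds, per the Pinia migration described earlier); this dependency-free TypeScript sketch, with illustrative names throughout, just shows the pattern without requiring Vue or Pinia to run.

```typescript
// Dependency-free sketch of the "setup store" shape described above.
// In real Pinia this would be defineStore() with ref()/computed();
// all names here are illustrative, not Galaxy's actual store.
interface Notification {
  id: number;
  message: string;
  read: boolean;
}

function useNotificationStore() {
  const notifications: Notification[] = []; // state (a ref() in real Pinia)
  let nextId = 1;

  function add(message: string): Notification {
    const n: Notification = { id: nextId++, message, read: false };
    notifications.push(n);
    return n;
  }

  function markRead(id: number): void {
    const n = notifications.find((x) => x.id === id);
    if (n) n.read = true;
  }

  // computed-style getter: how many notifications await the user
  const unreadCount = (): number =>
    notifications.filter((n) => !n.read).length;

  return { notifications, add, markRead, unreadCount };
}
```

The `markRead` action is where the acknowledgment idea mentioned next would hook in: an "I agree" notification is just one the user must mark read before proceeding.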
One of the other things we want to build in is a way for users to acknowledge notifications, so if you needed to send something out requiring a user to say, yes, I agree to this, whatever, that would be another use of it. The next point is Vue as the sole framework in the primary app. That's probably, realistically, a stretch for GCC, but we want to keep pushing for it. It's the tail end of a long, long modernization effort. The big things still outstanding are some of the various grids, the upload component, and a few form elements. We had this notion of archived or frozen histories that we wanted to make happen for GCC. We're going to have a clear path to archive a history, download that archive, and then purge the history from the server, since it's taking up your space; it's just to give users a clear path to get that action done. We're not writing off the concept of frozen histories, but there's so much more involved in making that happen, especially on the back end, so that's not on the table for GCC. We want to make nice progress towards a user, especially building on the work we did for the RO-Crate export, being able to seamlessly export a history, save the tarball on S3 or wherever, and then reimport it later. That'll be the first pass at this. We want more accessibility testing. We have the linting right now, but beyond that, accessibility testing is hard. I talked to Wendy about this specifically, since she was super interested in it, and she said that when they do accessibility testing, they run an external test suite.
But then the other thing they do is they actually have a team of folks click through a series of steps in a tutorial, and at each step they look at the HTML, the elements on the page, and check what they can and can't interact with and how it works. So it's not totally automatable, but we can do a lot more in an automated fashion. The low-hanging fruit that I want to implement here is Playwright-driven tours. We have Tours that exercise a whole lot of Galaxy functionality and that outline common paths users take through the interface. I want, I shouldn't say we want, to use those tours, and basically at each step of the tour use axe to automatically detect where there are violations. It's really nice; I use it in my browser, there's a browser plugin, and that's how we find a lot of our accessibility violations. It'll check against WCAG 2.1 at level AA or AAA, whatever you tell it to. What we are going to do is have these Playwright-driven tour tests that step through the entire tour and at each step ask, hey, is anything in violation? You'd see the upload box, the workflow run form, all that stuff, and that will give us some minimum bar of making sure we're not violating accessibility guidelines. Next, visualization plugin framework enhancements. Right now the visualization installation and build process is all wrapped up with the client build; we need to totally separate that build out. APIs: this got mentioned in the back end discussion, but we need to build an API for admins to install and manage the registry and to rebuild and stage visualizations. And then IGV.js will replace Trackster as the main track browser, but as soon as IGV is integrated, it'll be fairly trivial to add JBrowse or whatever your next favorite track browser is.
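The tour-driven audit described above could be structured roughly like this. Because the real auditor (for instance @axe-core/playwright's AxeBuilder running against a live page) needs a browser, this sketch injects the audit function so the skeleton is self-contained and runnable; every name here is an illustrative assumption, not Galaxy's actual test code.

```typescript
// Sketch of a tour-driven accessibility audit. The real auditor
// would be @axe-core/playwright run against a live page; here it's
// injected so the skeleton runs standalone. Names are illustrative.
interface TourStep {
  title: string;
  element: string; // CSS selector the tour highlights
}

interface Violation {
  id: string; // e.g. an axe rule id like "label" or "color-contrast"
  impact: string;
}

type Auditor = (step: TourStep) => Promise<Violation[]>;

async function auditTour(
  steps: TourStep[],
  audit: Auditor
): Promise<Map<string, Violation[]>> {
  const failures = new Map<string, Violation[]>();
  for (const step of steps) {
    // In the real test, Playwright would perform the step's action
    // here, then the auditor would run axe on the resulting page.
    const violations = await audit(step);
    if (violations.length > 0) failures.set(step.title, violations);
  }
  return failures;
}
```

Driving this from the existing Tours means the accessibility check automatically covers the same common paths users actually take, which is the "minimum bar" described above.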
Next, the dataset view. Right now when you open a dataset card you have like fifteen different little buttons you can click on. What we want with the dataset view is a comprehensive component that takes over the middle panel and has a tabbed interface showing the various things you can do with the dataset. You can imagine this being reused anywhere, but it'll be a nice usability win to get some of the noise, all these little buttons and actions, out of the history, and it gives us more real estate: a central pane where you can see a dataset in a single view. A really visible UI change that we're working on is an activity bar, kind of like in VS Code, where along the left side you have the different activities: testing, code, git, whatever. We want that for Galaxy, so you'd have your workflows, analysis, that kind of stuff all in the left panel, and that gives us a lot more real estate. Historically we've shoved some of this stuff up into the masthead, but it's just not appropriate there. You might have multiple interactive tools running and want to see those in this left activity bar, or you might want to pin things to the activity bar, like specific workflows: if you have a user that only ever uses two workflows, that's their activity bar, with their things, right? It gives us another place to really customize the interface and make it specific to use cases and users. The Rule Builder: it's super powerful. We want to take a pass at overhauling it, maybe have it take advantage of more screen real estate; I don't know that it needs to be in a modal. Basically just take a pass at overhauling the user interface, think about usability concerns, and refactor it a bit. And then, lastly for GCC:
We implemented the virtual scroller in the history as a way to get history 2.0 working, and it did that, and it works really well. But there were reasons we were originally trying to build a virtual scroll bar instead of a virtual scroller; there's a distinct difference between those two things. Things like jumping to a bookmark or jumping to a point in time are very hard with just a virtual scroller, so we need to overhaul the architecture of how the scroller works. There are also several outstanding bugs related to this that we think we've fixed in the short term, but it needs a bit more work; that's slated for being done before GCC as well. And then for 2023, hopefully, but not before GCC: simplified execution interfaces and entry points for both tools and workflows. There's a good example of this on one of the HackMD documents: a small interface with three options you can pick, and then you drag in a dataset, that kind of thing, and that would apply for tools or workflows. And lastly, with the visualizations being bundled separately, built separately, and plugged in, it would be really nice to build them into web components that can just be embedded wherever, whether on the Hub or somewhere else, and then render data from Galaxy in a live graph display or whatever, parameterized by taking in properties. That's on the wish list. I think that's it. Well, I should say that's not it; there's a lot more stuff, but these are the highlights. Just incredible progress being made there. It feels like every pixel of the UI has been rethought, redesigned, enhanced in some way. I guess along those lines, it feels like you have no shortage of things to work on. If you imagine, I don't know, a five-year or ten-year stance:
Are there any major initiatives that you're particularly interested in, in terms of where we want to be in many years' time? That's a big question. On a slightly shorter time scale, something I want to do to avoid us getting stuck: we've historically used BootstrapVue directly, right? We really need to consider building an interface layer, so instead of using b-tab, Bootstrap tabs, directly, we'd use something like g-tab, Galaxy tabs, which is then an interface to Bootstrap tabs, and insulate the application from any of these third-party libraries that get locked in and then die, like might be the case with BootstrapVue. For a while it seemed like it was getting resurrected, and then over the past week there's been some news that it might not be. So, moving forward, we really need to build in layers in the application that isolate us from those things, where you have a single point of control and can swap the BootstrapVue button out for a Vuetify button or something else. It's not fun engineering work, but it's something I think we really need to pay attention to over the next several years. Swapping to Vue 3 is a big thing. We're using Vue 2.7 now, which gives us most of what we wanted from Vue 3 in terms of the Composition API and some of the bells and whistles, so it's less of a dire concern, I guess. It sounds like the goal there is to separate the application logic from whatever third-party library we happen to be using at any given time? Exactly, yeah. And then a follow-up question: you went through, I don't even know, five hundred new features here. How do we make users aware of all these new capabilities? That's one I'm torn on. The user-facing release notes do a lot of heavy lifting.
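The insulation layer described above boils down to an adapter: the application talks only to a Galaxy-level interface, and one translation function maps it to whatever the underlying library expects. A minimal sketch, with prop names on both sides being illustrative assumptions rather than real Galaxy or BootstrapVue code:

```typescript
// Sketch of the insulation-layer idea: a hypothetical <g-tab> exposes
// Galaxy-level props, and one adapter translates them to the
// underlying library's shape (BootstrapVue today, something else
// tomorrow). All prop names here are illustrative assumptions.
interface GTabProps {
  label: string;
  active: boolean;
  disabled?: boolean;
}

// Shape loosely modeled on a <b-tab>-style component. Swapping
// libraries would mean rewriting only this adapter, not every
// call site in the application.
interface BTabProps {
  title: string;
  active: boolean;
  disabled: boolean;
}

function toBTabProps(props: GTabProps): BTabProps {
  return {
    title: props.label,
    active: props.active,
    disabled: props.disabled ?? false,
  };
}

console.log(toBTabProps({ label: "Datasets", active: true }));
// { title: 'Datasets', active: true, disabled: false }
```

The single point of control is the point: if the third-party library dies, only the adapter and the wrapper component change, not the hundreds of places that render tabs.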
We highlight stuff there, and blog posts for things like the accessibility work are fantastic; I think we should use those efforts as a way to highlight what we're doing. Getting a blog post out there is nice and draws attention, that kind of thing. But we need to do better in the application itself, I think, at showing users how things work and how they have changed. Maybe that's an expansion of, and a better integration of, Tours or something like that, to walk people through new features. The blog posts have been a really nice change, though, in terms of things we can do right now. Yeah, back at that meeting, the Plant and Animal Genome conference back in January, we were talking about the VGP workflows, and in trying to teach this to new users there are several layers to it: there are the broad concepts, introducing the science, and then there is a certain level of click here, do this, click here, do that. And we kind of need to do both, right? Explain the concepts and also drill down on the specific operations the user needs to do. I definitely don't have any brilliant ideas here, but I'd be interested to hear if you had ideas about how to make those nuts-and-bolts operations easier. I mean, we strive for it always to be self-documenting, but the UI is very sophisticated and has tons of capability. Yeah, to address that: a project we talked about very recently, that we're trying to figure out how to get done, is an in-application glossary, right? Some sort of toggle or interaction where components are decorated with extra help, something where, when you toggle it on, certain things glow, and then you can click on one to ask what a term means, and it brings up a little glossary page for it.
That was probably a terrible example, but you know what I mean: a glossary-type functionality built into the application that could then link to GTN content, what do they call them, the mini micro-tutorials or whatever those are. It could link to that content, or it could have shorter-form content, but just an integration, from the application itself, of all the content that we have all over the place would push that a long way forward. Okay, a question about this graph view of a workflow invocation: is that basically going to be the workflow view where you color nodes as they progress? Yep, you'll see it as they progress, and once it's done, or as it's going, you'd be able to click on a different node and see the inputs, outputs, things like that in the right panel, and explore what an invocation did, whether something went wrong, that kind of stuff. It sounds a lot like just the graph view. Yep, with benefits. So is the graph view going to be done first? Yeah. The data required to show a graph view of a history is much, much harder; with a workflow invocation we already have the structure from the invocation object itself. So this is actually a next step, not a first step; we've been working on it for a while, but it's a next step towards getting that full history graph view done. I don't know how it would show subworkflows, but maybe let's not go there yet. Anyway, this is overwhelming. In general, the progress of the UI group has been overwhelming, so thanks for that. This is definitely the best working group meeting we've ever had, so thank you. Yeah, it feels like things are pretty well organized; it feels like the groups have a lot of leadership, forethought, and engagement.
It seems like the groups are working together; several presentations called out work in other working groups. Awesome work, everyone; I'm just totally blown away by all the updates. Some of us will be getting together in just a couple of weeks here at Hopkins, and there will be lots of time to talk shop. We have a couple of fun events scheduled: we're all going to go into the lab to do some DNA work, plus some social events. And then, of course, we're all super excited for everyone to come together in person in July, down under. Thanks, everybody, and thanks to all the working group leads for pulling together these updates; I appreciate that and all that you do to pull the team together. Appreciate your time, everyone. Thanks. Have a good afternoon.