Thanks, everybody, for joining our working group progress meeting today. We're going to hear updates from all the working groups, so if everyone could take about five minutes to share what you've been working on, the progress you're making toward goals for GCC 2023, and also any blockers or other things coming up — ways we can help move those activities along. Before we get into that: Anton, would you be interested in covering the roadmap review? If not, maybe we can do that.

Well, the roadmap is not a secret document; it's a known document. The PIs' plan is to listen to the working groups, then go back next week and see what we need to add to the roadmap, and then discuss that with the working groups again. So in the interest of time, I think we should just start with the working groups.

Okay, thanks for clarifying that. All right, so let's kick it off with the UX working group.

Okay. The way I broke this up — I didn't know exactly how we wanted to do it — I put things into two milestones, 23.1 and 23.2, with 23.2 being the GCC release. Our mandate from the two-year roadmap is at the top right: usability fixes that can be done quickly, getting the new history in (which was the big push the last time we had a discussion like this), and planning ahead to the hierarchical view — the three big things we had on the roadmap. With that context, I'll jump in for 23.1. We'll have a brand-new workflow editor, in conjunction with the Workflows group — I've tried to note on the right whenever something is a collaborative effort.
We want the editor to be able to render a static view of a workflow, and we want to publish that as a separate package — so you could, for example, import it in the Hub and show a rendering of a workflow right in the Galaxy Hub, that kind of thing. That'll be ready for 23.1. The graph view — and we'll talk about this more in the context of 23.2 — the graph view, and VS Code (Marius says that's right). The graph view is a massive project, and as with a lot of things in the UI, it turns out that rendering a graph is pretty easy; getting the right data to the client at the right time is what's hard. So, per the 23.2 note here, that's really more of a back-end project. But as an exploration or precursor to it, we're working on the job input/output display — showing datasets that are related to a particular dataset in a better way in the history. That'll be in 23.1.

On toolbox search and modernization: a bunch of progress on fixing tool search got backported to the previous release, but building on that, we want super-fast, client-side-only filtering of the toolbox in the left panel — as you type, it filters in real time and you see immediately which tools match and which don't. Then there's an advanced results panel that pops out into the middle panel and does the full-depth search — tool help and all that — with rich results. That should be merged pretty soon and available.

There's a brand-new multi-history view that's already merged. It builds on the new history: it does the same multi-panel thing, but using the new history components instead of the old ones. You can pin particular histories you're interested in, and those are preserved, so when you go back to the multi-view you see the things you care about. Obviously you can unpin them too.
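The as-you-type, client-side-only filtering described above is conceptually simple; here is a minimal sketch of the idea. The `Tool` shape and `filterTools` name are illustrative stand-ins, not Galaxy's actual implementation:

```typescript
// Minimal sketch of client-side toolbox filtering.
// `Tool` and `filterTools` are illustrative names, not Galaxy's real API.
interface Tool {
  id: string;
  name: string;
  description: string;
}

// Case-insensitive substring match against name and description.
// Runs entirely in the client, so it can re-filter on every keystroke.
function filterTools(tools: Tool[], query: string): Tool[] {
  const q = query.trim().toLowerCase();
  if (q === "") return tools; // empty query shows everything
  return tools.filter(
    (t) =>
      t.name.toLowerCase().includes(q) ||
      t.description.toLowerCase().includes(q)
  );
}

const toolbox: Tool[] = [
  { id: "bwa", name: "BWA-MEM", description: "Map reads against a reference" },
  { id: "fastqc", name: "FastQC", description: "Read quality reports" },
];

// Typing "map" immediately narrows the panel to matching tools.
console.log(filterTools(toolbox, "map").map((t) => t.id)); // → [ 'bwa' ]
```

Because no server round-trip is involved, the list updates on every keystroke; the deeper, slower search over tool help text is what the pop-out advanced panel would handle.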
That's going to be really nice for users — drag and drop works very cleanly in it. And swapping that view over let us delete the entire old history implementation, so that's great: lots of legacy code gone.

There's now a unified export UI. This was prototyped in a couple of different ways with the BioCompute and RO-Crate work, but now it's one coherent framework where we can plug in more export targets, and it's well encapsulated — the logic for exporting artifacts from Galaxy is coherent there. That'll be in 23.1.

There's also a lot of work toward general modernization. We shifted to Vue 2.7. I think about this a lot like Python 2.7, which lingered around for a long time because of library issues and things like that — it's really kind of the same situation. We're pretty happy on 2.7 because we can use the Composition API and a lot of the nice things from Vue 3, but we're not forced to abandon heavyweight dependencies like BootstrapVue right away. That'll keep going. We'll have Pinia, and there's already a lot of Composition API code in the code base; that'll be in 23.1. And we're hoping to have just straight-up TypeScript. We can currently write utility code in TypeScript; there's a bit of a hitch with writing Vue components in TypeScript — Webpack doesn't want to compile them right — but we're working on it. We'll have a much more modernized code base moving forward, with types.

Client navigation will be dramatically improved. We've eliminated a lot of the entry points; it's a single-page app for the most part now. There are a few reloads — some for performance reasons, and in some cases it just makes sense to have a couple of entry points depending on what you're doing — but it's basically a single view tree now, which is nice. And we've worked on accessibility a ton.
When I first checked a couple of months ago, loading up a basic history produced 100 to 150 accessibility violations; now we're down to zero on the initial page load — or at least we were a couple of weeks ago. We still need to expand that across the application and make sure everything's compliant. We also probably need to pick which compliance standard we're going to stick with, whether it's WCAG AA or AAA — there are a couple of nuances there — but tons of progress has been made, and it'll be available in 23.1.

We want new visualizations — D3-based things to actually see where all your data is. That might be treemaps, but it'll probably start as a simple pie chart you can click through to see what's consuming your history or your storage space. We'll have a new tabular renderer, which is going to be a prototype for the visualization work. The idea is that in 23.2, visualizations will be first class: when you click the eye icon on a dataset, you won't just get a response from the web server that says "this is how this thing is displayed." Instead, you'll get an interface that queries a display API that says: okay, these are the different ways we can look at this thing — here are ten different visualizations that are suitable. The new tabular renderer — a tabular display of a dataset — is effectively a visualization, just the simplest possible one, so we're going to use it as a prototype for first-class visualizations. The tabular bit itself will be in 23.1.
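The display API described above could, conceptually, look something like the following. This is a hypothetical sketch of the idea only — the endpoint, registry, and names here are invented for illustration, not Galaxy's actual API:

```typescript
// Hypothetical sketch of a "display API": instead of the server rendering
// a dataset directly, the client asks which visualizations are suitable
// for its datatype. All names here are illustrative, not Galaxy's real API.
interface VisualizationOption {
  id: string;
  label: string;
  datatypes: string[]; // datatypes this visualization can render
}

const registry: VisualizationOption[] = [
  { id: "tabular", label: "Table view", datatypes: ["tabular", "csv", "tsv"] },
  { id: "igv", label: "Genome browser", datatypes: ["bam", "vcf", "bed"] },
  { id: "piechart", label: "Pie chart", datatypes: ["tabular", "csv"] },
];

// Given a dataset's datatype, list the suitable visualizations. The
// simplest match (e.g. tabular) can serve as the default eye-icon display.
function suitableVisualizations(datatype: string): VisualizationOption[] {
  return registry.filter((v) => v.datatypes.includes(datatype));
}

console.log(suitableVisualizations("csv").map((v) => v.id)); // → [ 'tabular', 'piechart' ]
```

The point of the design is that the tabular renderer is just one entry in such a registry rather than a hard-coded server response, which is why it works as a prototype for the first-class visualization scheme.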
We'll finally address the remaining interactive tool display issues. A lot of those were pending on the new history; now that that's in, we can follow up and have a nice representation of interactive tools in the history with an accurate status. We're also going to update the "my running interactive tools" view so that it actually polls, and so on — there's an issue collecting a bunch of bugs there, and those will all be fixed in 23.1.

And finally for 23.1, we want pre-built clients for production releases — just the mainline stuff. We'll publish the client on npm, and Galaxy will automatically fetch and install it. The linchpin here has always been how we actually want to handle the visualizations, so all of this ties together for 23.1. This will only cover the main web client; we're not going to worry about shipping a full client plus visualizations with your Galaxy yet, but eventually.

For 23.2 — and this obviously depends a lot on the back end, like I mentioned — we'll have a graph view of the history. What we thought was a nice first step toward this (or second step, I guess, since the input/output work is the first) is to reuse the new workflow editor as a way to look at a single invocation in a graph view. Given a workflow invocation, you'll be able to scroll around — it looks like a regular workflow in the editor — but you can click on things and see the state of the different steps, that kind of thing. That'll be in 23.2.

On the notification framework: we had an Outreachy project over the summer to build out a notification framework, which is something that's been requested for years — sorry, chat distracted me. We're going to pick that up and have it finished for 23.2. And the big thing that users mostly won't see, but that's really going to help us —
it's going to make it a lot easier for us to carry less baggage — is that Vue is going to be the sole framework in the primary app. No more Backbone, no more jQuery, and minimal entry points: it might make sense to keep a couple of entry points depending on what you're doing, but Backbone and jQuery have completely got to go. The main things outstanding for that are the grids, the upload component, and the form elements, but work is being done on those and I expect it'll be done by 23.2, for GCC.

The first-class visualizations thing I talked about — I won't reiterate it — will take a display API from the back-end group, which, as I'll mention, is probably UI/UX folks wearing two hats; I don't know how to assign that. And we've talked about this before, but Trackster has been in maintenance-only mode. For 23.2, accompanying the new first-class visualizations, we want igv.js to replace Trackster as the default click-and-browse genome (or file) browser.

Then there's the auto-generated TypeScript client for the Galaxy API, which we should have for 23.2. This is in conjunction with the back end — there's almost nothing to do on the UI side to get it done; it's mostly just consuming the back end in a structured way, building on all the work on the OpenAPI (Swagger) schema. Marius showed this off in a really cool demo at GCC: because the Galaxy model is well annotated, we can automatically generate TypeScript bindings for the entire API. Then in the client we use that instead of talking to the API directly with axios and all that. It'll be published as a separate package on npm — it's kind of like BioBlend objects, but the JavaScript version. That'll be really nice, and it'll enable us to improve testing and all kinds of things.
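What generated bindings buy you is that request paths and response shapes are typed, so mistakes fail at compile time. Here is a hand-written miniature of the idea — these types and the `getHistories` helper are stand-ins for what a generator would emit from the OpenAPI schema, not the real generated package:

```typescript
// Miniature of the auto-generated-client idea: typed models plus a typed
// fetch wrapper. `HistorySummary` and `getHistories` are hand-written
// stand-ins for what a generator would emit from Galaxy's OpenAPI schema.
interface HistorySummary {
  id: string;
  name: string;
  deleted: boolean;
}

type Fetcher = (url: string) => Promise<unknown>;

// A typed client method: the compiler knows the return shape, so a
// misspelled field is a build-time error instead of a runtime surprise.
async function getHistories(
  baseUrl: string,
  fetcher: Fetcher
): Promise<HistorySummary[]> {
  const body = await fetcher(`${baseUrl}/api/histories`);
  return body as HistorySummary[];
}

// Usage with a mock fetcher, so the sketch needs no running server:
const mockFetch: Fetcher = async () => [
  { id: "abc123", name: "RNA-seq run", deleted: false },
];

getHistories("https://usegalaxy.example", mockFetch).then((histories) => {
  console.log(histories[0].name); // → RNA-seq run
});
```

A real generated client would produce one such typed method per annotated endpoint, which is what makes it usable like BioBlend objects from JavaScript.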
Sorry, I've gone way over five minutes. Another big project is archived, or frozen, histories. Again, this is mostly back end — there will be some front end to it — and it's kind of the same people working on both sides of the fence. And then formal testing for accessibility. I mentioned where we want to be compliant for 23.1, but what we really need — and we have a strategy laid out to do this — is a way to keep ensuring the app stays compliant, using axe in end-to-end testing. What we want to do is run through the tours and, at each step, analyze the page: this is compliant, this isn't, this needs labels, and so on. We'll have automated testing for all the accessibility stuff. We already have the linting, but the end-to-end piece is what we still need to do.

So, challenges. I'll start with the second one first. One thing that's going to be challenging this time is that last time around our mandate was basically "history, history, history," so everyone was looking at the same thing and pushing on the same thing. As you can see, there's a ton of stuff on this slide, so one challenge is just going to be staying organized, keeping track of everything, and making sure it all gets done. Another is that there's kind of a back-end shift to compensate for all the time we spent on the UI — but that isn't really a bad thing, since a lot of the items here have a significant back-end component. Honestly, I was kind of digging for challenges to put up here since you asked, so that may or may not actually be a challenge.

I have a question on your list — for 23.1, is it in some priority sorting order, or no?

Some of it. It's roughly priority order, and a lot of it is actually already done, because 23.1 is going to be a big release.
If you want, I can go back and annotate what's done and what remains to do; we can talk about that at the UI meeting. And I'm going to bump the interactive tool display issues higher — that's definitely going to be in.

I had a variant of the same question. I mean, this is awesome — there's so much going on. But, like you said, because there's so much going on and it's not hyper-focused around one activity, what's the strategy for staying organized and keeping good progress? Maybe a prioritized list with stretch goals, or maybe some other strategy you had in mind?

The thing that has helped the UI group most in recent times, I think, is having our weekly meeting. We completely shifted the cadence from meeting once every one, two, or three months — or never — to meeting every single week, making sure things are progressing and just staying on top of them. We've experimented with project boards and things like that, but the way I think about project boards is: they're a great way to get your thoughts organized about something, and then nobody ever does anything with them and you throw them away. That's not a waste of time, but for me they don't seem to be a great mechanism for actually keeping track of work. The new meeting cadence, combined with the fact that everyone has really been great — I'm super happy with the UX group — means I don't think we'll have a whole lot of trouble actually hitting all these goals.

Wow, that's awesome. We should move on just for time, but I'd be interested in a preview of all these features at some point. Thanks so much, Damian. Up next is testing and hardening.

All right, testing and hardening — it's me and John.
Our group is slightly different from back end and UI/UX in that we don't have a long list of items to check off, so we organized our report slightly differently, in a way that made more sense to us. First, completed since the last report: release testing for 22.05. We covered a lot of territory, including all the key features for 22.05, thanks to a really great release-testing team — so thanks for that. Second, "Writing automated tests for Galaxy": this is our new tutorial, covering (for now) unit tests and API tests. It's approximately three to four hours long. We taught it at the GCC training and it went very well; we didn't have a packed room, but the attendees who were there stayed for the duration and seemed to enjoy it.

New tests and testing infrastructure: this is ongoing work, similar in every reporting period, so just some highlights — and these aren't meant to be exhaustive; there's a lot more. There were extensive tests for the new database migrations CLI system, a ton of tests as part of the Tool Shed refactoring, and a lot of great work modernizing the testing code done by Nicola, who isn't even a member of our group — so we're not taking credit, but we had to mention it. And I'm sure I'm missing other work from this period.

For GCC 2023: first, the testing tutorial. Again, it covers only unit tests and API tests for now, but it covers them in depth. We need to add the other main test types — at least sections on end-to-end tests, integration tests, and client testing.
That will definitely push it into six-to-nine-hour territory, so when we use it at GCC we'll simply pick and choose, depending on audience preferences. But that tutorial needs to be in the GTN so that individual contributors can use it to learn how to write tests.

Next, ongoing work on testing infrastructure. I'll say a little more about this under interactions with other working groups, but the specifics will be based on other groups' needs — back end specifically, UI/UX, and most likely systems — as well as our own work on primarily back-end projects, since all of us are members of the back-end group and that's where our primary focus lies. The anticipated focus for now (and this may change based on the priorities of major projects carried out by other groups) is: testing infrastructure for the Tool Shed overhaul; a lot of work on the data access layer — all things model, database, and SQLAlchemy; and end-to-end tests. Beyond that, we'll support focus projects by other groups.

Release testing for 23.1 and 23.2, since both happen before GCC 2023: our plan is to start early, right at the freeze, which will leave us three to four weeks for addressing any issues raised by release testing before the release is announced — and to be better at reporting in time, so that things get fixed before the announcement. We will also try to explore ways to reduce manual testing where automation is possible. This has been an ongoing concern since the very first time we did formal release testing: we don't have the right amount of manual testing — we do too much, and too many repetitive things that could be automated. We've just never had the bandwidth, so we always put it off until next time, and as a result people get bogged down in boring work.
So we'll try to start working on that this time; hopefully for 23.1 we'll have at least some results on improving it. The second thing is that we'll try to run some form of load testing in parallel with release testing. We haven't done it yet, so we're trying not to be too ambitious — but we'd like to switch from having discussed it every single time we plan release testing to having actually done something, which we can then improve. Hopefully we can start with 23.1 and improve in 23.2.

On interactions with other groups: we've had in-depth discussions as a team on whether and how to expand the group, and we decided not to — or rather, we decided not to make that a primary focus. As a disclaimer, of course everyone is welcome; we're happy to welcome new members. And we actually have larger meetings now — it used to be just the three of us, me, Marius, and John, and now we have more, which is great and genuinely helps us as a team. So thank you to everyone who attends. But whenever we come up with a long list of to-do items, many of them require writing complex tests, and those complex tests often require modifying our testing infrastructure or developing new testing infrastructure. So, as horrible as it sounds, we're not newbie-friendly by design. Instead, we decided the better, more optimal way to serve the community is to focus on reaching out to other working groups and providing a test-engineering service: we help with writing tests — especially complex tests — and focus on providing sufficient testing infrastructure to meet other groups' needs.
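The load testing mentioned above could start from something as small as the harness sketched here: run N concurrent copies of a request and collect latencies. The `fakeRequest` below is a stand-in for a real HTTP call; this is an illustrative sketch, not an existing Galaxy utility:

```typescript
// Minimal load-test harness sketch: run `concurrency` copies of an async
// task in parallel and report each one's latency in milliseconds.
// `task` stands in for a real HTTP request against a Galaxy instance.
async function loadTest(
  task: () => Promise<void>,
  concurrency: number
): Promise<number[]> {
  const runs = Array.from({ length: concurrency }, async () => {
    const start = Date.now();
    await task();
    return Date.now() - start;
  });
  return Promise.all(runs);
}

// Example with a fake task that just waits a few milliseconds.
const fakeRequest = () => new Promise<void>((resolve) => setTimeout(resolve, 5));

loadTest(fakeRequest, 10).then((latencies) => {
  const mean = latencies.reduce((a, b) => a + b, 0) / latencies.length;
  console.log(`ran ${latencies.length} requests, mean ${mean.toFixed(1)} ms`);
});
```

Running such a harness alongside release testing would give a rough baseline first; dedicated tools could replace it once the group has the bandwidth for structured performance testing.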
Of course, we also encourage individual contributors to use Galaxy's testing utilities, and another important focus is to keep developing helpful documentation on working with Galaxy's testing infrastructure — how to write tests, and how to use Galaxy's abstractions and its many test utilities. In addition, we'll be developing guidelines for running Galaxy's automated tests against running instances, both production and test. Finally, the long-term item — not scheduled, but constantly discussed — is more structured performance testing, which would include load, scalability, stress, and fault-tolerance testing, et cetera; essentially testing things like speed, capacity, scalability, stability, and security. We have discussions on this and plan to address it eventually; we don't have the bandwidth yet, but it's in the plans. John, did I miss anything?

I don't think so. No, it sounds good to me — thanks so much for presenting that. Oh, and thank you for the link — yes, work by the Australian team, so thanks for that, John. It sounds like things are going well. I'm excited that other working groups are stepping up to contribute or get advice from you all. Are there other places where you need help or attention, or where you see that there could be pitfalls ahead?

I think not until we've really started implementing serious work on the different types of performance testing. Once we get into that territory, we'll need all the help we can get — most likely from systems, back end, and UI.

Perhaps an ignorant question: you talked about the Tool Shed refactoring. There must be some issues or a plan for that — can you send a few links, John?

Yeah, I will. I'll talk to you offline.

Thanks so much, John. Moving on — workflows are next. Oh, Marius, you're muted.

Yeah, it's been quite a productive period.
Especially: I presented some of the IWC work at the European Galaxy Days, and following that there was an extremely productive hackathon. Out of that hackathon, and some more work last week and this week, we merged workflows for ChIP-seq, ATAC-seq, CUT&RUN, and RNA-seq. That's something we really wanted to do, because I think they're really core workflows for maybe the original audience of Galaxy, let's call it that. It was really, really cool. Huge thanks to Lucille — these were her workflows; she was already using them in production. What we did was bring them up to IWC standards, which means adding all the annotation and replacing some of the complicated steps with expression tools — we created a new expression tool to make it easy to have nice, clean workflow-input logic.

Since last time we've also been doing updates and fixes to the existing workflows, and we're now at 22 workflows in total. We also discovered some issues with using reference data via CVMFS on the CI instance; that's working now as well, and all the new workflows are using it.

I already mentioned we have a new expression tool that maps any value to any other value — so if a step produced an output that's, say, the string "false", you can turn that into the boolean false and connect it to the next step. These expression tools are nice: they're regular Galaxy tools, but instead of a command line they have a JavaScript section that defines the output parameter. The tool I'm talking about is basically just a dictionary implementation, but it's kind of complicated to write as a Galaxy tool — we'll come back to that; we need to improve on it. And David has created a VS Code extension for Galaxy workflows, which is really, really cool.
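The core of the dictionary-style expression tool described above is just a lookup: map an incoming value (like the string "false") onto another value (the boolean false) and pass everything else through. Sketched in TypeScript — the real tool embeds its JavaScript in Galaxy tool XML, and the names here are illustrative:

```typescript
// Sketch of the value-mapping logic inside a dictionary-style expression
// tool. The real tool is JavaScript embedded in Galaxy tool XML; the
// `mapValue` name and mapping shape here are illustrative only.
type ParamValue = string | number | boolean | null;

function mapValue(
  input: ParamValue,
  mapping: { from: ParamValue; to: ParamValue }[]
): ParamValue {
  const hit = mapping.find((m) => m.from === input);
  // Pass the value through unchanged when no mapping entry matches.
  return hit ? hit.to : input;
}

// A step emitted the string "false"; the downstream step wants a boolean.
const mapping = [
  { from: "false", to: false },
  { from: "true", to: true },
];
console.log(mapValue("false", mapping)); // → false
console.log(mapValue(42, mapping)); // → 42 (unmapped, passed through)
```

Writing this as TypeScript is trivial; the complication the speaker alludes to is expressing the same thing inside the constraints of a Galaxy tool definition.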
It's available on the marketplace, and it works with your local VS Code or with the GitHub editor you get when you press "." on a GitHub repository. It's really cool. And there's an improved workflow import page — I did that — which makes importing workflows much more streamlined in the interface; there are fewer clicks to be done. Next slide.

For GCC 2023: on expression tools, we have a couple of standard ones, like one that takes a value from a file and turns it into a parameter of any type, another that is basically a dictionary implementation, and one that lets you combine different parameters into one. But it would be great if we had a text field where you can just enter an expression, with an editor implementation that guides the user toward the variables that are available in the job itself. (There's some feedback here.)

Similarly, I think we really need conditionals — the workflows we've added now made that really clear. With conditionals you can include, for instance, optional QC steps that you may not want to run all the time but that kind of belong to the workflow; or handle steps that need widely different parameter settings depending on whether you're dealing with single-end or paired-end reads. Conditionals would be extremely helpful there. I think we can probably get this at the API level in 23.1 and then target the editor in 23.2.

And as already mentioned, we're rewriting the workflow editor with the goal of taking the display component and using it for other purposes. One would be to embed it in VS Code, so people who like to write their workflows manually can do that in VS Code and see their changes — or possibly produce a visual diff of the workflow against the last version. It would also be super useful for a static page.
And it would become very, very useful if the Tool Shed could augment the information in the workflow with the tools themselves, because the Galaxy workflow language does not include the tools — it just has references to them. The states and settings of each tool are in there, but without any context; so instead of dealing with just a JSON blob, you could actually see what the settings for a particular step are.

We want a static page listing our workflows. That could be part of the Hub or could be something else — I guess it remains to be seen what we do there. We also recently ran into problems where some workflow steps are resource-intensive — they need more CPUs and memory than the GitHub runner gives us. I think it would be a pretty good use case to start using the global Pulsar network: we'd register a user account for the IWC and submit specific steps there. Another option would be AWS Batch, but not everyone can get that, and ideally the pattern we set up for the IWC can be followed by other communities — so if we could use Pulsar, that would be amazing.

We want to produce RO-Crate invocations. Import/export is partly merged and partly still to be done, and we also want to publish a crate with each published workflow, so that users can see the workflow itself, the README, the changelog, and an example of what it looks like. We need a schema for the job and test definitions. This is a little in the weeds, but when you start writing tests it's not immediately obvious what options are available, and you may run an entire test — which can take from a couple of minutes to a couple of hours — just to find out you didn't quite have the right test syntax. That's extremely frustrating.

Then, interactions with other working groups: UI/UX for the workflow editor and expression editor, back end for the workflow features, and the Tool Shed for serving the tool state.
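For context on the schema pain point, a workflow test definition in the planemo style looks roughly like the sketch below. This is a hedged illustration — the input name, file path, and assertion shown are hypothetical, and the exact set of supported keys is precisely what the missing schema would document:

```yaml
# Hypothetical sketch of a planemo-style workflow test definition.
# The input name, path, and assertion here are made up for illustration;
# without a schema, discovering the valid keys means trial and error.
- doc: RNA-seq workflow smoke test on a tiny input
  job:
    reads:
      class: File
      path: test-data/tiny.fastq.gz
  outputs:
    counts:
      asserts:
        has_n_columns:
          n: 2
```

A published schema would let an editor flag a misspelled key like `assert:` immediately, instead of after a multi-hour test run.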
So I think that would be a good fit for the work the workflow group is currently doing, and I think there may well be a merger with the Tools working group, which seems to be doing project-specific work.

Just one note, or question: the workflow group needs to prioritize announcing new workflows. So if you're thinking about a static page on the Hub, let's do that soon. And, for example, all these new workflows — is the single-cell stuff all tweeted?

I think we have single-cell ones — that's regular transcriptomics. No, it's not tweeted yet, but we also only just enriched them. Yeah, that's definitely the plan: tweet them out, describe what they're doing and what they take as inputs.

You had this idea to do these tweets so that you can get all the information just by reading the tweet, which is absolutely the correct approach. So we should start soon.

Yeah, we actually put this in our last meeting document; Martin and I will write some tweets, I think.

Could you say a bit more about what it will look like to embed this into the Hub? Is it the graph visualization of the workflow, or more than that?

Yeah — each of these workflows gets a proper repository with a README and changelog, so that's the first part we can display. And then it's pretty straightforward to do, say, a Cytoscape-style visualization of the workflow. Ideally I'd really love to embed the workflow editor in there as a read-only preview component that lets people see the workflow as they would see it if they imported it into Galaxy.

And the GitHub repo means that if you wanted to have text and other images available, that could be done there. I think we're probably going to do some scraping at first — use the existing page infrastructure, but do it via a central CI job.
Metaphorically, it smells a little like a Jupyter notebook, but for a whole workflow, right? You can have the code, text describing it, visualizations — and then the big button: go run this on 1,000 samples.

Yeah, I guess if you look at the Dockstore interface, it's kind of like that. But I think we need this for plain Galaxy workflows too. My mental model of this is still how nf-core does it — if you've seen how they list their workflows, I think that's pretty good, if we can get there.

Cool, I'll check that out.

And it shouldn't be very hard — we have the material. The hard part is writing the README and the changelog and describing the inputs, the outputs, and the considerations you need to take care of, and that's done. So yeah, I totally agree with Anton: we need to publicize this.

Great, great. Thank you. Next up is Tools.

All right, Tools working group — it's pretty short. Mainly we have been continuously updating and supporting several projects — the VGP, for example — and we've been working on some COVID projects as well as some independent tools. But to that end, most of our work has historically just been support for projects, and we're looking to formalize that by merging with the IWC, where we'll work on tools with an eye specifically toward putting them into large-scale workflows that can be published. We'll continue to update tools as necessary, but having direct goals for every tool is something I think we'd all like. This changes a little bit how the IWC working group functions, mainly in that
There will be a little bit more assigning of people who are on the project to work on specific workflows, rather than people working independently on workflows, and then a workflow structure in general; but we do want to talk through with everybody involved and see how this is going to function, which is why I have not added a "goals for GCC 2023" slide, because we still need to discuss that. So I put up a When2Meet poll; hopefully we meet at some point in the next week or so to discuss this with as many people as possible and figure out how this is going to work and the goals moving forward. And what does the IWC think about this? I mean, it makes a lot of sense. I think currently our meetings are mostly people heavily invested in the infrastructure side, so now that we've built it, if we can also attract people authoring workflows, that would make a lot of sense. And if there are some tools missing for those workflows, it would be awesome to have people that have capacity to do that. Also, for example, a large part of the work on the VGP has been working towards these workflows, and as Marius and Delphine have both separately stated, they have to go through one another for all of the issues; so anything that makes a central place where people working on these projects can talk about the workflows and the tools and things that are necessary, rather than having to go back and forth between a project tools group and the IWC, would help. And also we used to have the IWC meetings in the same time slot as the VGP meetings, so I think that was also a bit of an unfortunate disconnect. You know, from my perspective this seems like a positive change, and I think it's good that we're self-reflective on how the working groups work and let them iterate and evolve, you know, as it makes sense. Thanks so much.
I guess we'll rename the tools update for our next progress meeting, then. Up next is the Outreach & Training group. Alright, so, starting with the training part. The Galaxy mentoring network is starting slowly; we have three paired mentor-mentee groups. One has started the project, and I think two are waiting to start; I mean, they have been paired, they know each other, but the project hasn't really started yet. We have nine applications in review. Once we have reviewed people who are interested and have a project defined well enough, then we look for mentors for those applications. The next Smörgåsbord is going to take place in spring. We need volunteers, as every year, especially since we're going to need new videos now that the new history panel has been released; videos need to be updated. There are a lot of Train the Trainer events happening as part of Gallantries; there's a training event happening in spring. And then there's a course builder in beta for people to organize workshops and plan for training. Next slide, please. On the materials side we have new material, including microbiome data analysis. We have a whole new topic, data science, with 35 new tutorials covering Python, R, SQL, Bash, and Snakemake. We have updates in the Train the Trainer category in combination with a Train the Trainer event. There's a choose-your-own-adventure tutorial that was implemented a while ago, but I don't know if we communicated about that. And there's been big work these past few months to improve the accessibility of the GTN website: the boxes are now replaced with proper semantic headings, we're following the ARIA guidelines for accessibility, and we now support the prefers-reduced-motion media query. Next slide. On the outreach side, we have skipped the Outreachy intern program this cycle, and we are wondering if we want to participate in the next cycle, so we are welcoming people who have ideas on what project we could find an intern for.
We're discussing taking part in the Google Summer of Code event, which would be in association with demodes; we tried doing that the previous year but we couldn't contact them, I don't know, some miscommunication in the middle. We will try to contact them again early to mid January for the 2023 Google Summer of Code. On the side of community management, we're looking for volunteers to be active on Twitter, people who are used to Twitter. Personally I'm afraid of it, so I'm not a good pick for that, but if people are comfortable with it, we're going to try to get better at that. And we're also going to be more involved in the release discussion, to be able to spread the task of communicating the release across the group and not just put that on Elena or Bea as it was previously. And finally, the last thing the group is involved in is the hub. So Nick is leaving soon; he has a new job. We're not sure yet who's going to inherit the management; he is documenting everything before leaving. One task that is started but won't be completed before he leaves is the migration of content from the EU hub to the org. So this is going to be a project for the coming quarter, but we're not sure who's going to be in charge of that or how we're going to deal with the hub management. Most of it will be some reformatting. And that's it for us. So, outreach is doing great. I have a question just about the course builder; this is the first I hear of this, can you say a few more words about what that is? Can I share my screen? So it's in beta right now, but basically it allows you to select the topic and organize a schedule for the event. I haven't played a lot with it, but yeah, it allows you to organize planning and link tutorials, with durations and so on. Okay, okay, so it's for, like, a workshop, to help with the logistics, registrations, and things like that.
So thanks for that, Delphine; as you know, it's always so impressive, all the activities. I'm wondering if you have a sense of whether there is any noticeable uptick or downtick in community interactions with these materials. You know, I think the big events are really well attended; the first Smörgåsbord was like thousands of people, for example. But I guess I'm wondering about all the other events, and whether you have a pulse on, you know, how people are engaged right now. There's a lot of engagement contribution-wise; for usage of the tutorials I don't have numbers, I need to ask Elena, I think she has some metrics on how often the pages are visited, that kind of thing. Yeah, I think we have some numbers I can forward to you, because there are usually satisfaction polls after trainings that we ask people to return. And at GCC the scientific training has had low attendance, but I think that's more because most of the attendees were on the developer side rather than the user side, compared to previous Galaxy conferences. But I'll get those numbers for you. Yeah, that'd be helpful. Again, I'm just trying to get a sense of, you know, whether we're still moving in the right direction or there are things that need to be adjusted. Considering the enthusiasm for Train the Trainer and the number of participants, I think it's used. I mean, especially, we've had demand for accessibility: people are finding problems using it, so it's definitely used enough to notice these problems. In a funny way, that's a good metric of success. I mean, I've discussed with people giving trainings and they're pretty happy with it. But I'll get you more numbers, hard numbers. Thank you. In terms of tweeting, can we just create, you know, like a Matrix channel within Galaxy?
So people would basically dump things they want to tweet there, and then somebody who has access to Twitter, and we have several people like that, would just go once a day, see what's there, and do it. This, I think, would spread the work a little bit better. So what about that proposed automated system where you just, like, commit to a GitHub repo and it would take care of it? I don't like automated systems, they're dry. It's like the automated voice when you call United Airlines; I mean, it's not particularly helpful. I think there needs to be a sense that a human wrote this, and I think that's not that difficult if we don't have to dig and find what to tweet, if we actually have a list of things. So, for example, Marius has a new workflow, he just dumps it there: okay, here's a new workflow, tweet about that, and then one of us does that. I have to say, I think whatever system we come up with for this would also be useful for, like, announcing stuff on the hub. We've talked about having some system where anyone can suggest a post for the hub, and then you could check something that says, like, "also tweet this" to whatever channel; that could be useful for that. And we're making it more complicated than it is; setting up the tweet, I mean, yeah, it could just be a Matrix channel: please also post this on the hub, and if you want, you can stand it up there. I don't know if we can queue Twitter the way we can queue YouTube, so that Twitter really is regular and not wave by wave. I think Dave had this set up at one point; there was a queue, maybe it was in the third-party thing he was using or something, but there was, like, a backlog of stuff that he would schedule to tweet out at particular times. If we can just kind of compose that library or something. Yeah, okay. Bea also did this already, so it must still exist; I mean, she did that for the last release, I think. This will only work if it's easy.
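The commit-driven queue being floated here could be sketched roughly as follows. This is a stdlib-only illustration, with a plain directory standing in for the reviewed branch; the file layout and names are invented, and posting to Twitter itself is deliberately left out.

```python
from pathlib import Path
import tempfile

TWEET_LIMIT = 280  # Twitter's character cap

def collect_ready_tweets(queue_dir: Path) -> list[str]:
    """Gather queued tweet drafts (one file each), skipping over-long ones.

    In the scheme discussed, "queued" would mean committed to the repo and
    approved via review; a directory stands in for that here.
    """
    ready = []
    for path in sorted(queue_dir.glob("*.txt")):
        text = path.read_text().strip()
        if len(text) <= TWEET_LIMIT:
            ready.append(text)  # an over-limit draft would go back for editing
    return ready

# Demo with a throwaway directory standing in for the approved branch.
queue = Path(tempfile.mkdtemp())
(queue / "01-new-workflow.txt").write_text("New VGP assembly workflow is live!")
(queue / "02-too-long.txt").write_text("x" * 300)

ready = collect_ready_tweets(queue)
print(ready)  # only the short draft survives
```

A once-a-day CI job could run this over the approved drafts and hand the survivors to whoever (or whatever) actually posts them.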
So I mean, if it's not easy, it's not going to work. It is very easy. I think what Mike also mentioned, doing it as commits, is fairly straightforward: someone would make a commit to a branch, and then, if it gets approved by others, it would hopefully be automatically tweeted, maybe also on different platforms. I like that idea. Well, Anton, I think what you were trying to do is separate the editorializing of the tweet from the content, right? Like, you were saying, "I want to tweet about this," but you want someone else to come up with the right words. So you need two queues: one that just gets the ideas to the editors, and then another queue that actually publishes, right? Well, I was hoping that, I mean, I think we're spending too much time on this, but the basic idea is that if there is a blurb from, again, Marius or somebody, "here's this workflow, it does this," just, you know, one sentence, then I'll edit that sentence, for example, and put it on Twitter. So the editorial is done by somebody else. I see. Martin shared a link for the GTN Matrix channel, thank you, Martin, with some numbers at the top. On to the next working group, which is systems. Okay, so we have made some progress with the Total Perspective Vortex. This is essentially the dynamic job scheduling rule system that Galaxy Australia developed in order to route a lot of their jobs to Pulsar, across lots and lots of different Pulsars; they've been using it in production for, I think, a year plus now. And so after the European Galaxy Days we all worked together, and the European server is now using it for some tools, and I'm using it on the US server for tools on test. And I'm collecting runtime data about current tools so that we can make some priority decisions, so that quick jobs run on priority resources. So, yeah, thanks to Nuwan, Catherine and Simon for all their help as we've been working on this. Pulsar and the global Pulsar network.
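The runtime-driven routing idea just mentioned (quick jobs to priority resources) could look conceptually like this. Note this only mimics the idea behind a Total Perspective Vortex rule: real TPV rules are written in YAML with far richer matching, and the tool IDs, samples, and 60-second cutoff here are invented for illustration.

```python
from statistics import median

# Illustrative runtime samples (seconds) per tool id, standing in for the
# runtime data being collected on the US server.
RUNTIMES = {
    "Cut1": [4, 6, 5, 7],
    "bwa_mem": [3600, 5400, 4100],
}

QUICK_CUTOFF = 60  # seconds; an assumed threshold, not a TPV default

def pick_destination(tool_id: str) -> str:
    """Route historically quick tools to priority local resources and the
    rest to Pulsar, so short jobs do not queue behind long ones."""
    samples = RUNTIMES.get(tool_id)
    if not samples:
        return "default"          # no history yet: take the safe default
    if median(samples) <= QUICK_CUTOFF:
        return "priority_local"
    return "pulsar_remote"

print(pick_destination("Cut1"), pick_destination("bwa_mem"))
```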
So, as I mentioned, Australia now runs a significant amount of their jobs through Pulsar, and the same is true for the US; currently we do, and that number is only going to increase, especially as our latest ACCESS allocation includes a whole bunch of different systems and a ton of service units on Jetstream2. Like, we have a lot, so we want to make full use of those. UseGalaxy Spain is now going to launch and run across four different supercomputing centers in Spain using Pulsar, and in the next year eleven different compute centers in the EU are going to be using Pulsar, so it's becoming, or already is, a very critical part of the Galaxy infrastructure. And one of the exciting things that's happened is that there have been some improvements in the way that co-execution works, originally from the Google Summer of Code student, and then John took that and worked on it as well. Essentially this means that running Galaxy jobs via Pulsar will continue to become much more independent of Galaxy itself, and we can do some pretty exciting things that don't require fixed resources and a monolithic Pulsar server that just runs all the time. Next slide. As far as the distributed computing goals go: Björn and folks in Freiburg have been collaborating with people at CERN, discussing possibly using DIRAC, which is a job meta-scheduling system, for some of the global Pulsar network; and then there's also this Advanced Resource Connector (ARC), part of NorduGrid now, which also does some level of scheduling. I'm not clear on how these components integrate, but Björn can talk more about that. The distributed data stuff that has been worked on successfully for the last couple of years is of increasing interest here on usegalaxy.org. Our test server stores all of its new data in iRODS, and on the main server we have a couple of users that use it.
And we're still working out issues as they come up with this deployment, but we want to start ramping that up a bit more, and being able to archive users' old data off to, essentially, TACC's giant tape system. usegalaxy.be in Belgium is also interested in iRODS, and the Italians and the EU folks in Freiburg are investigating S3 as a backend. We did a bunch of work on the Intergalactic Data Commission at the CoFest after the conference this year. So the tooling is mostly in place to generate the data; now we just have to automate running it and getting that stuff into CVMFS. Next slide, please. So, gravity. I don't think we talked about this at the last meeting or update, but it's essentially a process management tool for Galaxy that Marius and I revived. It was originally written seven years ago or so, but we got it up and running for 2021, and I've been doing a ton of work to overhaul it in the last few weeks. It now has systemd support, which is a much nicer way to run a Galaxy server for production sites, and we'll have a 1.0 release coming out shortly. The thing we're still trying to figure out is what to do with our zero-downtime restarts. This used to be handled by uWSGI for us, but now that we've switched to gunicorn we don't have that functionality anymore, and so when I restart usegalaxy.org you can see some downtime, because there are some issues with unicornherder. Björn and I have tried to address those, but they have not been looked at by the unicornherder maintainers, because essentially I think they've abandoned the project, and there are some other drawbacks to unicornherder. So based on an idea from Helena, usegalaxy.eu is now running multiple gunicorn processes concurrently; I talked with Mira and Björn about this working last week. And the way to then do zero-downtime restarts in the future will be just to restart those in a round-robin fashion.
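The round-robin restart logic being automated in gravity could be sketched like this. It is a dry-run illustration, assuming hypothetical systemd unit names: in a real deployment `restart` would wrap something like `systemctl restart <unit>` and `is_healthy` would hit a health endpoint, so both are injected as callables here.

```python
from typing import Callable, Iterable, List

def rolling_restart(units: Iterable[str],
                    restart: Callable[[str], None],
                    is_healthy: Callable[[str], bool]) -> List[str]:
    """Restart gunicorn units one at a time, only advancing once the
    restarted unit reports healthy again, so at least one process is
    always serving (hence zero downtime)."""
    done = []
    for unit in units:
        restart(unit)
        if not is_healthy(unit):
            # Stop the roll rather than take the remaining workers down too.
            raise RuntimeError(f"{unit} failed its health check; aborting roll")
        done.append(unit)
    return done

# Demo: pretend restarts always come back healthy; unit names are invented.
log = []
order = rolling_restart(
    ["galaxy-gunicorn@0.service", "galaxy-gunicorn@1.service"],
    restart=log.append,
    is_healthy=lambda unit: True,
)
print(order)
```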
And we're working on automating that in gravity, so that you don't have to do that restart process yourself. Interactive tools: ITs were broken on usegalaxy.org for a long time because TACC's Kubernetes cluster was broken for a long time, and it still is as far as I know, but Alex Mahmoud set up a little cluster on Jetstream2 using our Jetstream2 allocation; he set it up for us, and that's where we're currently running our ITs, and it's working. EU as well as usegalaxy.org have made some improvements in the way that we run ITs because of Tailscale. I don't have a ton of time to go into it right now, but essentially it's very trivially easy to create a VPN between the Kubernetes cluster running on Jetstream2 and the job handlers at TACC, and because of this we removed an entire layer of complexity of the IT proxy: we had this system where you had to forward requests from one side to the other, and you don't have to do that anymore. This magic with Tailscale enables us to do a lot of cool stuff. Next slide, please. So, a lot of work has gone in over the past few releases to moving long-running tasks to asynchronous Celery tasks, and the production usegalaxy.* servers had not really been using this much. But as of last week, usegalaxy.eu is; they actually updated this slide while it was loading, so originally I had written that Australia has deployed it on their dev server, and we're not running it yet in the US, but we will. Mira developed a role to install Flower, which gives you this nice monitoring dashboard and some management as well. So this is all starting to look great, and there's a lot of talk among the admins of the usegalaxy.* servers about how we're going to run Celery, how we're going to, you know, limit it, what to do with this.
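For readers unfamiliar with the pattern being adopted here: the point of Celery is that a web request enqueues work and returns immediately, while a worker does the slow part out of band. This is a stdlib-only simulation of that shape, not actual Celery code (in Celery you would define a `@app.task` and call `.delay()` instead).

```python
import itertools
import queue
import threading

# A toy task queue standing in for Celery plus its broker.
tasks: "queue.Queue[tuple[int, int]]" = queue.Queue()
results: dict = {}
_ids = itertools.count()

def submit(n: int) -> int:
    """What a request handler would do: enqueue and return straight away."""
    task_id = next(_ids)
    tasks.put((task_id, n))
    return task_id

def worker() -> None:
    """The long-running part, executed outside the request cycle."""
    while True:
        task_id, n = tasks.get()
        results[task_id] = sum(range(n))  # stand-in for the slow job
        tasks.task_done()

threading.Thread(target=worker, daemon=True).start()

tid = submit(1000)   # returns instantly, like an async .delay() call
tasks.join()         # in real life you would poll task state instead
print(results[tid])  # 499500
```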
And I'm not sure that we've come to concrete answers yet, but for now we're trying to set up at least a separate machine, a separate VM, where these things run, and we'll go from there. All right, next slide, please. So, as far as the future goes: we need to finish deploying TPV. I'm working on this; as I said, I'm collecting statistics about runtimes. And we need to enable the full Celery deployment on Australia and the US; I probably need a new VM for this, but I need to do some upgrades anyway, so we'll get around to that. Currently none of the usegalaxy.* servers use the extended metadata stuff, especially with Pulsar, and the reason for that is that you have to have a clone of Galaxy that you keep up to date on all of your Pulsar endpoints, which is a huge pain in the butt. So really, before we deploy this in production, I think we need to be able to use Galaxy packages rather than having to clone it. We're actually most of the way there; the components to configure this in Galaxy exist, you can set the metadata command and so on, so we just need to document how to do it and try it out. The Intergalactic Data Commission: we really are going to get this working hopefully next cycle, but before the GCC; it is desperately needed. We're going to update the admin training for the new architecture, the gunicorn stuff, WireGuard and Tailscale, and then try to alleviate and do fewer workarounds for the limitations in Galaxy by pushing more of our problems, I guess, up to the backend group. And one nice thing that I think we really need to get going here is point releases. All the tooling is there to create, essentially, 22.05.1, .2, .3; it's just that nobody ever makes the decision to say we're going to have the next point release, right? So we need to do that.
More tools will use containers, and we're going to figure out this meta-scheduling problem for pan-galactic jobs, whether we can just use TPV for this or we need some additional layer, and how we'll install Galaxy. And that's it for us. Are we ready, or are we going at some point, to advertise Pulsar as a solution not only for Galaxy? Maybe John can talk more about that. It was intended not to be very Galaxy-specific, but it is fairly Galaxy-specific. But also, you know, there's work being done to be able to run more general TES jobs and that kind of stuff through it. So, I don't know; there's a lot of work in this space, and I don't know if it benefits us to do this with Pulsar. I don't know, John, do you want to say anything about that? I mean, yeah, my intuition says no, right? Like you said, there's just so much stuff in this space, and those projects are starting without the baggage of Galaxy. So if you want to just schedule containers or just run jobs and don't care about all of Galaxy's internals and specifics, you can write a lot cleaner code. And so I just don't know why we'd want to compete with, you know, other TES implementations or whatever. Yeah, as Marius says, there's the co-execution stuff. So I've been moving in the direction, anyway, of having Pulsar also serve a bit more as the glue that can translate Galaxy's specifics into stuff that can be used with other container scheduling technologies. Right, right. So I was a little confused about one point: I think you said something like 70% of Australia's jobs run through Pulsar, and there's a lot of activity in the US and Spain and so forth. And then you mentioned that there was ongoing work for distributed computing, but it wasn't clear to me: is that in development, is that running in a limited capacity, is that running in production? So I was confused about that status. It is.
We're very much running: each individual Galaxy has a network of Pulsar servers that it is using fully in production at this point. But there's a sort of bigger project to expose the global Pulsar network to more galaxies, and we don't currently know how we're going to, you know, schedule things on a higher level. Right, I see. So it's kind of in the design phase: there's agreement that this would be a good thing, and you're working through the technical design of how this should be implemented. It seems complicated when you start thinking about, you know, user identity, data management, compute management; I agree it's a good thing, but it sounds complicated. Yeah, yeah, that's why we're looking at some of these other projects like DIRAC and ARC, because they already handle that, you know, sort of identity stuff. They had a whole multi-day hackathon just to work through user ID management, yeah. They're in a different space, but I think we can use what they're doing. I will say there's momentum around some of the GA4GH standards, like Passports. You know, whenever we can beg, borrow, and steal standards from other people, I think that will make our lives easier. For sure. Yeah, we don't want to implement things ourselves. Thank you, Nate. And last but not least, it is backend. So now we're at the working group that implements everything for themselves. We've divided it into goals for 23.1 and 23.2. SQLAlchemy 2.0 is going to happen, and the earlier we can move there the better, because I think for us the immediate big benefit is that everything coming from the database is completely typed. But in order to get there, we'll have to adjust some of our patterns and make everything work. John Davis has been on top of it, and he's positive we can do it soon after 2.0 is out; the beta was just released like two, three days ago, so we'll see where we are.
Then, something that comes up a lot is the somewhat unbounded growth of the Galaxy database, and that we may have to remove some items directly from the database, either dropping some content from tables or dropping columns, plus other optimizations like indexes and so on. So he's going to focus on that for 23.1. We want to have invocation export and import really solid; we already have history export, but we want to wrap this up in RO-Crates, which is a little bit more structured than the proprietary format that we have. We want to support locked and archived histories. Pulsar container scheduling: John already did a lot of work on this, taking the TES approach that was a Galaxy job runner and making it into a Pulsar job runner. And I think there was a bit of miscommunication that Pulsar always needs to have its own server; that's not the case. Pulsar also has a lot of the glue code to adapt the Galaxy patterns into something that can be used with container scheduling technologies. So that's a very exciting direction, and then we'll also make sure that extended metadata works properly in Pulsar and that it's easier to install. Something that came up in cooperation with the workflows group is workflow conditionals. I think we really need them. We should also have step input expressions; it's a somewhat similar problem to solve, which is JavaScript expressions: being able to write them in a guided and validatable way so that we don't throw users under the bus. It's not meant as a core user feature, but I think we really do need it. We also want to enable updating Dockstore workflows from the user interface; there are some details that need to be worked through regarding what we do with metadata and how we organize our database models for that.
We also want to support modifying a subworkflow in the parent context, so that when you're editing a workflow you can jump right into the subworkflow and change things there. When a workflow invocation fails to schedule, you just know that it failed; that's not great, and we need to do a lot better there. We also want to make the step and job progress more transparent. When you look at the progress of a workflow invocation, this is currently split into how many of the steps have been scheduled, and then each step may have a number of jobs associated with it. We want to make that easier to see from the workflow invocation page, so that when you click on an invocation you immediately see how far along the steps are. This is something that isn't terribly hard to do, but we need to find efficient queries to do it. Then, as I already mentioned, we want to support graph-based views of invocations for 23.1. Again, this is a little simpler than doing this right for the history, because we already know the structure that's defined by the workflow, the inputs and outputs. We can work out the data structure and hopefully learn something about how we want to do this for histories as the next step. And then we want to do the final push for gxformat2 as the primary workflow format that users get when they download workflows from Galaxy, because it is much more hand-writable and readable, and I think we can build much better tooling around it. There are some details to work through. Next slide, please. Okay, and then for 23.2, John's already busy with this: a partial rewrite of the Tool Shed, essentially ripping out old parts that we don't need or that we shouldn't be needing.
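Coming back to the step and job progress point above, the per-step summary the invocation view needs could be aggregated along these lines. The step names and job states here are invented for illustration; in Galaxy itself this would have to be done with efficient database queries rather than in-memory lists, which is exactly the hard part mentioned.

```python
from collections import Counter

# Hypothetical job states per workflow step of one invocation.
STEP_JOBS = {
    "trim": ["ok", "ok", "ok"],
    "map": ["ok", "running", "queued"],
    "call_variants": [],  # not scheduled yet
}

def invocation_progress(step_jobs: dict) -> dict:
    """Summarize per-step job counts so the invocation page can show,
    at a glance, how far along each step is."""
    summary = {}
    for step, states in step_jobs.items():
        counts = Counter(states)
        summary[step] = {
            "total": len(states),
            "ok": counts["ok"],
            "scheduled": bool(states),  # distinguishes "no jobs yet"
        }
    return summary

progress = invocation_progress(STEP_JOBS)
print(progress["map"])
```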
And in return adding things like serving the tool state, so that when you work with a workflow outside of Galaxy you could just get information about the tool from that particular Tool Shed's API. This will be very powerful, I think. So, in 23.1, as I already mentioned, conditionals and step input expressions will start by just being available in the API and in workflows imported from gxformat2, but with no user-facing option to create them. And then in 23.2, because, again, as I mentioned, we don't want to throw users under the bus, there should be guidance: you will have a job object that you can explore visually, with autocompletion and something that will tell you "this thing doesn't actually exist, so your expression is going to fail". We want to modernize the tool state. This is a topic on its own; it involves removing a lot of legacy things that hold back Galaxy. It's a little abstract, but it will bring a lot of improvements down the road: it will make it much easier to reason about parameters in all sorts of different contexts, and for instance it will be critical to produce accurate workflow extraction. We have this currently, you can extract a workflow from a history, but it's not quite as precise as it should be; this will help a lot there, and there are many other reasons why we need to modernize the tool state. John wrote an entire document on this. We want to enable users to select, in the user interface, the object store where new datasets are to be created or where datasets within a history should go; that could be scratch storage, permanent storage, and so forth. We want to restructure part of the API and how we're serializing it, so that we can ship to the client the models that the API is going to produce, so that when you start working on something you can autocomplete and see the attributes that will be available. We will also know which ones are optional, so you can take proper precautions.
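On the guided, validatable expressions mentioned above: the expressions under discussion are JavaScript, but the validation idea can be shown in Python with the stdlib `ast` module. The approach is to parse the expression, reject anything outside a small whitelist, and report problems up front instead of failing at run time; the whitelist and error wording here are invented for illustration.

```python
import ast

# Node types a "guided" expression is allowed to contain.
ALLOWED = (ast.Expression, ast.BinOp, ast.Add, ast.Mult, ast.Constant,
           ast.Name, ast.Load, ast.Attribute, ast.Compare, ast.Gt, ast.Lt)

def validate(expr: str) -> list:
    """Return a list of problems (empty means the expression looks safe)."""
    try:
        tree = ast.parse(expr, mode="eval")
    except SyntaxError as exc:
        return [f"syntax error: {exc.msg}"]
    problems = []
    for node in ast.walk(tree):
        if not isinstance(node, ALLOWED):
            problems.append(f"disallowed construct: {type(node).__name__}")
    return problems

print(validate("inputs.reads_count * 2"))  # [] -- fine
print(validate("__import__('os')"))        # flagged before any evaluation
```

An editor backed by a checker like this can offer autocompletion against a known job object and tell the user an expression will fail before the workflow ever runs.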
And you know that your code will work in all situations, not just the one you tested; at the same time it gives us strong guarantees that when we change the API we're not breaking the user interface. There are a lot of things we want to make async, that is, take out of the web request cycle, because they can take a long time. This applies, for instance, to job submission: if you are mapping over a giant collection, the backend has to prepare parts of the job itself and create the outputs before returning. We want to make that instant and instead track the progress asynchronously as it happens in Celery in the background. Other valuable targets there are history copies or imports that can take a while. (These were instructions for me in preparing these slides; yes, I already mentioned this.) And then in 23.2 we would work on the data for the graph-based history. I have a couple of questions about the other working groups: UX, that's for the invocation and history graphs; you'll be working together with the workflows working group for workflow features and the Tool Shed and tool state; and with systems for deployment of Celery, gravity, FastAPI, gunicorn, all these things. You have an incredible number of bullet points here; is it really feasible? Yes, I'm going to say it's feasible. It's also ambitious. The thing is, all of this needs to be done; I don't think there's anything on here where we can say, well, let's just push that off to next year. So, I mean, it's ambitious, and I think some of it will probably need to push over, but, you know, we need to also put on things that take more than a year to do and work on them in smaller pieces. Could you say a little bit more about how you see RO-Crate fitting in versus, I don't know, the gxformat2 format or published histories? It seems like they have somewhat overlapping goals. Yeah, I don't know if I'm the best person to answer this, but here's what we're doing.
We have a plugin system where you can plug in your different formats and flavors of the archive, and it seems that RO-Crate is checking a lot of the boxes in terms of metadata structure. There's going to be a workflow run profile, and I think it's something that's catching on, also as a format being used on other external hosting sites. The big advantage I see there, personally, is that we can get the contents of the RO-Crate without downloading the entire thing, and so we could provide, for instance, previews of an archived history; then you don't need to first import it to know what is in it, which is currently the case with how we export histories and workflows. It's nice to work with standards; that's basically the thing. Now, which standard that is, I mean, this sounds a bit harsh, but I don't care that much; what matters is that there is a community around it, and there's tooling and there's support for it. Yeah, I mean, I think we have good links with the people that are working on RO-Crate itself, so I think this makes a lot of sense for us. But at the same time, we also support BioCompute, for instance, and that's somewhat different: there aren't really artifacts attached to it, so it's more of a metadata description, as far as I understand it. Yeah, thanks for clarifying. I think my addition there would be: maybe people oversell RO-Crate, and maybe our implementation of it, but it would be nice to see some tooling, and when that tooling is available it would be nice if Galaxy could talk to it. And I think, ultimately, Marius' comment of "I don't care" is kind of part of that: it's pretty easy, right? We've got all of our core stuff packed into the RO-Crate, so we're doing some work to annotate some data, but we're still ultimately importing the history the way we always have. It's not adding a ton of code, but it's making things better; I mean, it just feels like good PR.
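The "get the contents without downloading the entire thing" advantage comes largely from the zip container: the central directory lets you read just the `ro-crate-metadata.json` member for a preview while leaving the large datasets untouched. A stdlib sketch of that idea (the metadata body here is a minimal JSON-LD-ish stand-in, not a conformant crate):

```python
import io
import json
import zipfile

# Build a tiny zip standing in for an exported-history crate: one metadata
# file plus a (pretend) large payload.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("ro-crate-metadata.json",
                json.dumps({"@graph": [{"@id": "./", "name": "My history"}]}))
    zf.writestr("datasets/huge.fastq", "ACGT" * 1000)  # stand-in payload

# Pull only the metadata member for a preview; the big dataset is never read.
with zipfile.ZipFile(buf) as zf:
    meta = json.loads(zf.read("ro-crate-metadata.json"))

name = meta["@graph"][0]["name"]
print(name)
```

Over HTTP the same trick works with range requests against the central directory, which is what makes cheap previews of archived histories plausible.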
Honestly, I mean, it makes other projects happy, and I think that's always a good thing to do. And it's not a huge amount of work, is what I'm saying. The infrastructure to support asynchronous workflow exports took ten times the amount of effort of, like, "let's add a new format that we're going to export." You know, writing the dashboard and building that UI story is so much more work than just sort of, you know, adding another checkbox of what kind of format we're going to export. So it's good PR; it's low-hanging fruit, I think. So, I don't know if this is a specific comment just towards the backend group, but to all the groups, obviously. You know, I think every group has presented a lot of great plans, and some of them are quite ambitious, I think, across the board. Is there an overall plan on how to find the... not necessarily easier projects, but maybe somewhat easier, shorter-length projects, to sort of help onboard people that are not necessarily as familiar with the underpinnings? I mean, we're looking at the backend, so obviously Galaxy itself as a framework is quite complex if you've never worked with it before. And so, you know, are there any sort of steps for how to get people who have not been, you know, here for five years or longer or whatever to get involved with some of these sort of smaller projects? I mean, it's not that hard to get into it, right? So for instance... okay, let me... We don't have a list of issues that are ready to be taken, because we're having that issue of, like, okay, who's going to do that, right? So I would say that the working groups are a great environment to get started, and to guide you. I mean, we can pick something that fits you. So most of the time external contributors coming in have a very specific thing they want to do.
So, you know, that's the thing, and we may or may not be able to help them there based on how feasible that is. But like a new developer coming in... I mean, yeah, do some API routes; I mean, port them to FastAPI, this is fantastic learning material. And I think our recent people that joined the team that are doing backend work have done this in a fabulous way. I think that's an easy win. So yeah, modernizing some of the API routes: you read the old code, you see where we want to go. Implementing Celery tasks is also relatively straightforward at this point. But I mean, I wouldn't really say it makes a lot of sense to have ten "good first issues," because we want to do them anyway; that's the same problem we had with the paper cuts. Sure, yeah, absolutely. As soon as you find what it is, you might as well just fix it. Yeah, I think we had the... I mean, so if there's somebody wanting to do a project, we can come up with a project. In a sense that's how the notification framework started. And if somebody came up now and said, you know, "I want to get started with Galaxy development," I'd say, great, the notification framework — I mean, it's almost there, and it involves both backend and frontend. That's the thing to do. I think also — this is a tangent answer, but also with regard to onboarding new developers — at least comparing to the time when I came in, I think things have gotten significantly better, and it's significantly easier for someone completely new to the Galaxy codebase to start being productive, specifically because of training. The developer section in the GTN is significantly different from what it was three or four years ago. The Galaxy code architecture slides have grown probably twice in size, if I'm not mistaken, and that's a qualitative improvement. It's not just that there is more stuff; it's that the slides have become more in-depth and comprehensible by someone who is new.
And in addition to that, we have detailed tutorials targeted at new developers: how to write tests, how to debug Galaxy, and especially how to add a new feature to Galaxy, which is a tutorial that John wrote. It encompasses the whole process from start to finish, with samples. We probably can improve that tutorial by adding more explanation about why things are the way they are. But in general, I think today someone coming in, new to this codebase, has a roadmap for themselves to be productive much, much sooner. I would say that the developer tools available now are really, really good, right? So you can open Gitpod and you have a working live debugger, both for the frontend and the backend. We have it for... sorry. Yeah, for tools, and for workflows in 23.2, exactly. No, I mean, I think Gitpod is a huge improvement there. And we've integrated all these cool tooling things like mypy, black, prettier, so all these boring things you had to take care of are pretty much handled at this point — we have a pre-commit hook. You know, it still happens, but it's actually quite easy to open PRs these days and not stumble through any of the obvious things like linting or type problems or things like that. Did you have new people in mind, Dan? I'm just wondering. You know, just in general, right. The other thing is that each of these tasks are very large tasks and are not going to be done in single PRs, so a lot of the time people will join a working group, and they will tag along with a senior dev who will say, "hey, I'm working on this, here's a part of it you can help out with" — which is a very different experience than jumping into a working group and one of the devs going:
"Here's a problem we've got, here's documentation, go." Because it's a specific problem with a specific point person that can be spoken to, it makes it a whole lot more personable than jumping into our famously large codebase. This is where the initiative... I'm blanking on the name, but we talked about it in the outreach section, and I'm actually a mentor for it — the mentor network. This is where that kind of thing comes in, right? You pair someone up with someone that already has a project and can piece parts of it off. I mean, we did the outreach project. I don't know that I would immediately do another one, just for the fact that you don't know how much time somebody's going to spend on something. Whereas, you know, if it's an employee, I mean, we should definitely pair them up with somebody and get them to work on things, and then do some pair programming. That sort of thing we haven't done systematically in the past, but we should definitely be doing it: improving the onboarding procedure for new hires. Yeah, one thought I had after the GCC on this topic is that a lot of the people who are presenting the developer materials are people who have been on the project for half a decade or more. You know, they're not going to have the best insight about what a new developer needs. So it'd be nice if more of those materials could be presented by more junior developers — but, you know, ones who have some experience on the project. And then the people who are the newest hires...
...for the most part, I think, didn't show up to the developer trainings, which is totally fine — there are so many good trainings, and it's very useful to see what Galaxy looks like on the frontend and how to deploy it — so I'm not saying it's necessarily the most useful training for them. But it would be good if, you know, more people were contributing to the developer materials at sort of that mid-Galaxy point, and more new people were attending the trainings. I think both of those things would sort of help improve those materials that John had mentioned. So, we had this past GCC a talk from three of the new people at Hopkins, the new developers, who gave a presentation on what it's like at this point to start with Galaxy. That could also just be a standing thing: the new people coming in each year give a presentation of "this is my experience getting started as a Galaxy developer," because it gives us all a perspective that we don't necessarily have. But maybe that shouldn't be a GCC talk. That's exactly what I was going to say. GCC or whatever, but some sort of community presentation, some sort of standing thing: every year we get the new devs together, the newly onboarded people, and they can say, "this is my experience in the past several months." And I think all these are great ideas, but I'm even more pleased that you all think that everything's in line, that it's relatively straightforward to get involved. So, I mean, if you think it's all scalable, right — I mean Galaxy itself, but also, you know, learning how to work on and code for Galaxy — across the board, that's great. There are always question marks, of course; yeah, you don't know what you don't know, right. But you see new people coming in and being super productive; we must be doing something right. Okay, yeah, no, absolutely. We can always improve it, but something works.
It may just be the hiring process. But someone actually has his hand raised, and he's muted, in case he's speaking. I just wanted to pitch the Galaxy Mentor Network, so we're still trying to get that up. So if you're interested, both as mentee and mentor... I mean, we are trying. I'll post the URL into the chat. Are there any other comments or questions? I just want to thank all the working group leads for organizing, and the working groups for putting together these presentations; it's always really great to hear your accomplishments and exciting to hear the future plans. Thank you so much, everybody, and we'll be scheduling out the next year of working group progress meetings. So, thank you all. Thank you, everybody. Also thanks to Natalie for herding the cats here. Absolutely, absolutely; without Natalie, you know, this would not happen. Thank you, thank you, thanks. Thanks. Bye, everybody.