All right, so as you may have heard, or maybe not, we are going to modernize the Galaxy API, and with it the backend, to really make use of some of the cool new Python 3 features. All of that is only possible because we've finally gotten rid of Python 2.7, so Galaxy is now fully Python 3 compatible. That was a long, multi-year effort led by Nicola and many others in the community, many sprints, many hackathons. And now we get to actually use cool new Python 3 things.

And the Galaxy API does need some modernization. One of the biggest problems we have is that there's currently no way to do work outside of the request cycle. That means, for instance, if you want to delete a thousand datasets, you send a single delete request, the backend does its work, the frontend waits for a response, and it's probably going to time out. There are clearly better ways to handle this. Deleting is an example that immediately comes to mind, but there's also the issue that if you submit thousands of jobs, it's a single POST request that just takes a while. That's not good design, and it's not good user experience either. Ideally we would tell the user: hey, this legitimately takes a while if you want to create 10,000 jobs, and I'm currently at 1,000, 2,000, 3,000. But that would be complicated to implement with the current architecture. Of course we could start background threads, but just starting a new thread for this kind of thing is not a good idea; the number of threads we can start is limited. That's something the Python developers have also recognized, and there is now the asyncio event loop integrated into Python, so you can do things basically as in threads, but much more lightweight, so you can have many more of them. And overall, we should break the application apart so that we can create real Celery workers for things that are going to take a while. So in Galaxy, whenever something is supposed to happen, you do a request and you get back either the response, if it's immediate, or an ID that you can poll to check the status. Or, better yet, we register something in the backend and the server sends you an event when something happened.

Here's another example. When you look at the Galaxy history, the client sends a lot of requests to the history contents API endpoint. It includes a timestamp and asks: has any new data arrived, has any data been modified? And we do that a lot, I think every three seconds, periodically. That is traffic that is just pointless; most of the time nothing has changed, and there are much better ways to handle it. That's also something that, by modernizing the backend, we can finally address. One really cool way to handle this is subscriptions. Subscriptions are not really standard REST, but they can be served by GraphQL, over WebSockets. Galaxy can just send out an event, hey, something happened, and whoever is interested can listen for those events. That is much better than polling. And there are other techniques too; it doesn't need to be WebSockets, it can also be server-sent events or long polling. All of these are more efficient if we can use asyncio.
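To make the push-instead-of-poll idea concrete, here is a toy sketch of a FastAPI WebSocket endpoint. The route path and the event source (an in-process queue) are assumptions for illustration, not Galaxy's actual design:

```python
# Toy sketch only: a server-push channel that could replace periodic polling.
# The route path and the event queue are hypothetical, not Galaxy's real ones.
import asyncio

from fastapi import FastAPI, WebSocket

app = FastAPI()
history_events: "asyncio.Queue[dict]" = asyncio.Queue()  # filled elsewhere by the backend

@app.websocket("/api/histories/updates")
async def history_updates(websocket: WebSocket):
    await websocket.accept()
    while True:
        # The server pushes an event when something actually changed,
        # instead of the client asking every few seconds.
        event = await history_events.get()
        await websocket.send_json(event)
```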
And then there's another thing that I think limits Galaxy's adoption as a workflow engine for genomics informatics, which is that our API documentation is, I'm going to say, not great. I mean, it could be better, right? It's relatively unstructured. Everybody is using docstrings; we have docstrings on our API routes. But a lot of arguments are repeated across routes, so they're not listed in each of the docstrings. I think generating API documentation from docstrings is inherently limited, because nobody is going to copy and paste all those arguments around. And then, as a user, you read that API documentation and it's basically not super useful. Our stance so far has been: well, look at the BioBlend docs. They are well maintained, but they don't cover all of the API. Or: go read the code, which is not a great answer. People should be able to build Galaxy clients without needing to read the Galaxy source code.

There's also not really any type information about what the things returned by the API actually are. A common case is that a dataset is returned. But what is this dataset? Is it the dataset that corresponds to a dataset on disk, or is it a history dataset association? For us developers, this might be something we've learned; we've looked at the database schema. But for somebody consuming the API, it's just super confusing. And there's no linking between the objects that are returned.

All of this can be addressed with OpenAPI, and OpenAPI support is built into FastAPI, so we get really nice looking documentation. This is actually from a Galaxy instance that already has some of the migrated routes, so it's already using FastAPI. If you go to /docs on that instance, you get the individual routes; you can expand them, click them, try them, change parameters. Here, it's a bit small, but there's an Execute button and you get the result back here. You can see the models: fully annotated routes that declare what they return. This is what I was talking about before, and it tells you what these things are and how they relate to each other. And there are a ton of other advantages to OpenAPI. One is that there are translators: you can take the OpenAPI specification and automatically build skeletons of client libraries. In the future, this could become the base for BioBlend, or for a Go client, a Java client, whatever people want, because there's a huge number of languages in which clients can be generated from this spec.

So the question now is: why FastAPI? There are other ASGI frameworks. (ASGI is the async equivalent of WSGI, the previous standard we've been using.) One reason is that the documentation is really excellent. If you go to the FastAPI homepage (I've just taken screenshots here from the left-hand panel of the documentation), it's great. You can read it, it's interesting, the examples are well highlighted, and we can follow it quite closely. That is something we couldn't possibly do for ourselves in that much depth; we don't have the resources for it.
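As a toy illustration of where those /docs pages come from (the route, model, and text below are made up, not real Galaxy endpoints): any annotated route lands in the generated OpenAPI specification, which the interactive docs render and client generators can consume.

```python
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="Galaxy (sketch)")

class JobSummary(BaseModel):
    id: str
    state: str

@app.get("/api/jobs/{job_id}", response_model=JobSummary, summary="Show job state")
def show_job(job_id: str) -> JobSummary:
    """This longer description also lands on the /docs page for the route."""
    return JobSummary(id=job_id, state="ok")

# app.openapi() returns the full specification as a dict; /docs renders it
# interactively, and tools such as openapi-generator can build client
# skeletons in many languages from the same spec.
print(list(app.openapi()["paths"]))  # ['/api/jobs/{job_id}']
```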
And for onboarding new developers who want to write API endpoints, improve something, or add new functionality, I think it's super important that there is good documentation. And it's extensive: there are things that are common, things that are not so common, a full section on security, on WebSockets, on GraphQL. And throughout, they give you the source code and you can just start it up in a small server. I think that's really the main reason why FastAPI is a good idea.

So how do we do the migration? The Galaxy API does a lot of things, right? We couldn't just replace everything at once. Maybe it would be feasible, but it doesn't seem like a worthwhile project. Instead, we go route by route. The most important thing is that we can do this step by step. I'm showing here the core of how we start the FastAPI app; there's a sketch of the same pattern below. We pass in the old WSGI app, we create a FastAPI instance, and we add some middleware, among them a WSGI middleware that is part of FastAPI, so FastAPI can serve a WSGI app. Then we walk the package where the Galaxy API controllers live, and whenever a module has an object called `router` (these are instances of a class called APIRouter, and calling them `router` is the convention), we include it in the app. Routes registered on the FastAPI app take precedence, so they override the corresponding routes served through the WSGI app. And that's it; that's the foundation. This means we can really go step by step and add new controllers. And we don't need to migrate everything; we can work on the routes we want to modernize, where we want to take advantage of async capabilities, or where we need WebSockets or GraphQL or other things that don't really fit within a WSGI framework.

This is already in Galaxy. It will be part of 21.01, but if you want to work on FastAPI, look at the dev branch; of course, everything should go to the dev branch anyway, and it's going to be more up to date. We've updated the API design guidelines with some explanation and some philosophical aspects, so if you want to start hacking on FastAPI, have a look at them. And Nicola recently added regular building of the documentation, so that is also up to date if you go here.

So how do we migrate a route? One core thing is that we want to remove as much logic as possible from the API controllers and instead move that logic into smaller functions that live in the managers. We need to keep the old routes around, right? People are currently still starting their Galaxy servers with WSGI. But we can modernize routes while keeping the functionality the same, by pushing everything out of the API controllers and into the managers. Here's an example where some custom logic has been moved out of the controller, giving a much nicer, smaller function, and here's the new equivalent. This is basically how you build FastAPI routes.
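Here is a minimal sketch of that bootstrapping pattern, with assumed module names rather than Galaxy's exact layout:

```python
# Minimal sketch, not Galaxy's actual code: mount the legacy WSGI app under
# a FastAPI app and include every `router` found in the controllers package.
from importlib import import_module
from pkgutil import iter_modules

from fastapi import FastAPI
from fastapi.middleware.wsgi import WSGIMiddleware

import myproject.api as api_package  # assumed package holding the controllers


def initialize_fast_app(wsgi_app) -> FastAPI:
    app = FastAPI()
    # Walk the controllers package; any module defining an APIRouter
    # instance named `router` gets included on the FastAPI app.
    for module_info in iter_modules(api_package.__path__):
        module = import_module(f"{api_package.__name__}.{module_info.name}")
        router = getattr(module, "router", None)
        if router is not None:
            app.include_router(router)
    # Mounted last, the WSGI app only sees requests that no FastAPI route
    # matched, so migrated routes take precedence over legacy ones.
    app.mount("/", WSGIMiddleware(wsgi_app))
    return app
```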
The only way we differ from what you typically see in the documentation is that we use class-based views, and the advantage there is that common dependencies can be stored as class variables. Then you add your FastAPI configuration: a decorator coming from FastAPI, and you annotate what is being returned. That's one way to do it; this is another way, and actually we could remove that particular bit, so maybe that wasn't the best example. But that's all there is. We're currently migrating to FastAPI, but this sort of thing is generally useful: the function is really small and it returns a standard model. What's being returned here are Pydantic models, which offer type validation and coercion; we'll see more about that later.

Another core thing is that a lot of the functionality we want is based on having Pydantic fields and models. Models are basically collections of fields, or of other models, so models can be nested, of course: your history can have datasets, which can have metadata, and so on. Here is an example of a field. This is probably the most commonly used field in the Galaxy API, the database ID, which is a hash of the numeric primary key of the object in the database. And we can attach constraints to it: we can say it's a string, that it can only have a hexadecimal representation, and that its length needs to be a multiple of 16 characters. Anything that doesn't fit will be rejected by the API, and you get a reasonable error message saying why it didn't work. You can try that out in the documentation I showed earlier: just put in anything invalid, and the API will tell you it's invalid. We have a lot of routes in Galaxy where, today, you put in something invalid and you either get the wrong result or an internal server error, which we see a lot with bots and other things automatically hitting our APIs.

And on the right side here is a model. They're really easy to construct: you list each field and say what type it is, and then you can add extra metadata, and that metadata will be available in your documentation. This becomes even more interesting when we have models that link to other models, so that consumers of the API are not completely lost about what these things are. (There's a sketch of a field and model along these lines below.)

So if you want to get started, we have a couple of different ways you can currently start Galaxy under FastAPI. I kind of like this variant here, running the FastAPI script with Python and giving it the path to the config (that last part is not actually a flag, that's my comment). It's great for development, because you can just stick it into your IDE and start Galaxy in your development environment. There are more standard ways to start Galaxy, too. Uvicorn is the server recommended by FastAPI, so you see it in all the examples; this is what you would replace the example in the FastAPI docs with, and this is how you would start the Galaxy app. That's important, because this is also going to be the way we set Galaxy up in production. Of course, at a higher level, this will be managed either by the run.sh script that we currently have, or by something else. And then have a look at an example PR; here I've picked one by David Lopez from the Freiburg team. He's new, and he's done an excellent job starting to convert different routes to FastAPI-based routes.
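Pulling the pieces above together, a hedged sketch of a class-based view returning a Pydantic model with a constrained ID field. Every name here is illustrative, not Galaxy's actual implementation, which differs in detail:

```python
# Illustrative sketch only. Requires the fastapi-utils package for `cbv`.
from fastapi import APIRouter, Depends
from fastapi_utils.cbv import cbv
from pydantic import BaseModel, Field, constr

router = APIRouter()

# Hexadecimal and a multiple of 16 characters long, per the constraints
# described above; the real Galaxy field is defined differently.
EncodedDatabaseIdField = constr(regex=r"^([0-9a-f]{16})+$")

class DatasetSummary(BaseModel):
    id: EncodedDatabaseIdField = Field(..., title="ID", description="Encoded database id.")
    name: str = Field(..., title="Name", description="Name of the dataset.")

class DatasetsService:
    """Stand-in for a manager holding the business logic."""
    def show(self, dataset_id: str) -> DatasetSummary:
        return DatasetSummary(id=dataset_id, name="example.fastq")

@cbv(router)
class FastAPIDatasets:
    # A common dependency declared once as a class variable, instead of
    # being repeated in every route signature.
    service: DatasetsService = Depends(DatasetsService)

    @router.get("/api/datasets/{dataset_id}", response_model=DatasetSummary)
    def show(self, dataset_id: EncodedDatabaseIdField) -> DatasetSummary:
        return self.service.show(dataset_id)
```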
That PR is a great example. Yeah. John, do you want to talk about this?

I can, yeah. "Want to" is a strong word. So I threw a couple of slides together 30 minutes before the meeting started, just about typing. Well, maybe we should stop here, because I guess we're going to get into backend architecture stuff. Are there any questions about FastAPI? I guess these are supposed to be fairly interactive. Does anyone have any questions about the controller level and above, or the server components, or API stuff?

Sorry, so this means you're basically one or the other, effectively, in terms of APIs, right? You're either old or new.

Yeah, so if you start with uWSGI, or continue to start with uWSGI or Paste or other WSGI servers, then you continue to serve the old API, but we try to have the old API and the new API share as much code as possible. We have an intermediate layer that takes the Pydantic models and pushes them out over the old API. So they behave similarly, but you're not getting the new ASGI features there.

Yeah, but the other part of that is that all of the, quote unquote, new APIs have two implementations, right? We made the implementations as small as possible, but there are two sets of controllers for each of the new APIs.

Yes. But then when you do run under FastAPI, you still have the old ones, but you only serve the new ones, because those take precedence.

Oh, okay, I see. So effectively you can bring it in as you implement things. I'm just wondering how this works with the web stuff.

So when you start up Galaxy in one of these ways, the UI doesn't care how it contacts the API. It's just the backend that decides how it wants to answer. And the specifications are the same between the old and the new, currently.

Oh, okay.

Yeah, and that's actually another reason why we think it's a good idea to document things more clearly: with OpenAPI you have standard ways to deprecate routes and arguments. Once we have that in place, we can also more aggressively change some things that are not great. Like the kind of thing I was showing you, where we have `q=update_time` and then you need to look at the `qv` value that corresponds to it by position, which is not very easy to build against in whatever client you're writing, except maybe Backbone, for which it was written. But probably not even there.

So I had, sorry, I had a question, Marius. Uvicorn is communicating with FastAPI through ASGI, is that correct?

Yeah.

And then FastAPI wraps the old WSGI app with the middleware and serves it, and we can do async as well. Is that correct?

Well, not in the same route. A route is either sync or async. And currently almost all of our FastAPI routes are still sync, because we're using SQLAlchemy, and we're using SQLAlchemy's ORM, and that's not yet compatible with asyncio; that's going to come in the 1.4 release. Once that is out, we can use async mode in the async controllers.

So does that entail another rewrite, or just inserting async/await all over the place?

It's going to involve putting async and await wherever there is blocking code. So when you fetch models from the database, you have to use await.
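For a sense of what that could look like, a hedged sketch of an async route built on SQLAlchemy 1.4's asyncio support, using core rather than the ORM. The table definition, connection URL, and route are stand-ins:

```python
# Hedged sketch: an async route awaiting a database query via SQLAlchemy
# 1.4's asyncio extension. Table definition and connection URL are made up.
from fastapi import APIRouter
from sqlalchemy import Column, Integer, MetaData, String, Table, select
from sqlalchemy.ext.asyncio import create_async_engine

router = APIRouter()

metadata = MetaData()
dataset = Table(
    "dataset", metadata,
    Column("id", Integer, primary_key=True),
    Column("name", String),
)
engine = create_async_engine("postgresql+asyncpg://galaxy@localhost/galaxy")

@router.get("/api/datasets/{dataset_id}/name")
async def dataset_name(dataset_id: int) -> dict:
    async with engine.connect() as conn:
        # The blocking fetch becomes an awaited coroutine, so the event
        # loop can serve other requests while the query runs.
        result = await conn.execute(
            select(dataset.c.name).where(dataset.c.id == dataset_id)
        )
        return {"name": result.scalar_one()}
```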
But the Pydantic models can also be populated with things coming from the database. So if those are complete, they replace the serializers we currently have, which go through the ORM, and they get around that problem. They should also be more performant, because there's no more lazy loading, and we only fetch the attributes we're actually going to use.

Thanks.

I mean, that's in theory how it should work, but we don't have an implementation yet.

So I have a question; it goes a little bit in the same direction. The client routes will be replaced? Will they be the same routes, or will there be new routes?

They will be the same routes, currently, until we can reasonably say bye-bye to uWSGI and everybody should be using an ASGI driver. When that's the case, of course, we can start adding new routes, or new mandatory routes. There's also the possibility that we have the client check what the server can do, and then use the new cool thing if it's there, with the old thing as the fallback if it's not. So for instance, if we add history update subscriptions, we could check: is that possible with this server? And if not, we use the old polling mechanism. But after the migration period, we would remove that.

And another question: would it make sense to transform the models to Pydantic first and then work on the API, or is it better to do both successively, step by step?

Both make sense; I don't think you have to decide. There are many places where we can use Pydantic models that don't necessarily pass over the API.

Yeah, I also mean the database models we have right now. We have this `__init__.py` in the model package, and I think everything is in there. I was wondering if we should, or could, replace that file first, basically, and have all those database models there. Maybe that's just a technical question of how to do it; I was just curious.

No, I mean, it's an interesting question, because we also got into situations where we wanted to serialize a few things from the database without having access to the database. That's, for instance, how the new metadata generation works, and we wrote some custom to_dict methods that just take the database model and turn it into JSON. I think the right, or at least a better, way to do this today, if we have Pydantic models, is to use those. And I think we will continue to need the database models, with the Pydantic layer sitting on top, because there are different requirements on the Pydantic models. For instance, you may want to serialize just a subset of fields, where you don't want to include the relationships. Then you can build models that only include the parts of what you need to return, and compose them. So if we can replace the serializers we currently have, which just take the database model and request one attribute after the other (and if an attribute is a lazy-load attribute, that triggers a lazy load for each one), I think that would be a big advantage. And the other thing is that we can use those models for getting things out of the database and into the database, and validate things beforehand as well.

Am I understanding that there's a way to connect SQLAlchemy to Pydantic directly, bypassing our models?

Yeah, I mean, you can get anything that can be serialized as JSON... sorry, can you ask your question again?
Well, it sounds like, I mean, I generally agree we probably need to preserve the model layer, what I would call, like Sam did, the model `__init__.py` layer. But is there a way in Pydantic to say: grab this out of SQLAlchemy and serialize it directly? Is there a way to bypass what we're doing inside our models, for performance or something? This is kind of what we're talking about with the GraphQL layer, right, Marius?

Yeah. I mean, Pydantic has this ORM mode, so it just takes attributes from the instance, and my understanding is that it doesn't need to bypass the ORM. Actually, I'm sure that you can bypass the ORM, because in the async example in the FastAPI docs, that's how it's done: they still use the core of SQLAlchemy, and they use the model definitions, but they don't use the ORM. Does that make sense?

Yeah, that makes a lot of sense. That's cool.

So again, I really like the FastAPI docs because they talk about these things, right? They're not books; they're not super long and not super detailed, but they're enough. Somewhere between inspiration and instruction. I think they have the right length for looking into these things.

Yeah, I've been hacking on this, and to Sam's original question, should we be working on the backend first: I don't necessarily think we should be replacing the models first, but I do think there's a lot of value in taking all of our controller code and making the layer that says "here's the thing we're sending off to the user" Pydantic. So: replacing all the serialization code and all the code that's in controllers, pushing that down into the models, pushing the views into Pydantic, doing the input validation in Pydantic. I'm getting nervous that we've converted a lot of routes, we're getting a lot of duplicated code, and we're still a release or two away from ripping out uWSGI. So if it were just up to me, I think we'd be putting a bit more effort into getting that backend ready, making those controllers really as thin as possible and ready to just swap over. But that's just my bias. I'm obviously not eagerly waiting on the OpenAPI docs; those will be amazing, but I don't use them, right? I guess that's really the benefit we're getting in the meantime from having the two layers, the two different implementations: we've got those nice docs in the meantime. So that's a playground to experiment with, but I do worry that we're going to get to a point where we've got too much duplicated code. As thin as we're trying to make it, it's still a lot of duplicated code. That's my sort of alternative answer.

So, you also mentioned that we can only start really ripping out serializers once we make the switch. But the way we do it, we should be able to do that today as well, right?

Oh yeah, you can return Pydantic models from our legacy framework. I added that; I thought that was a really cool idea. We've been doing that forever. So you can take all the existing controllers and basically make the backend look the way it will look for FastAPI.
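To make the ORM-mode point concrete, a minimal sketch with a stand-in SQLAlchemy model (not Galaxy's): only the fields declared on the Pydantic view are read off the instance, so undeclared lazy-loading relationships are never touched.

```python
# Minimal sketch of Pydantic's ORM mode replacing a hand-written serializer.
from pydantic import BaseModel
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class History(Base):
    """Stand-in database model, not Galaxy's real History."""
    __tablename__ = "history"
    id = Column(Integer, primary_key=True)
    name = Column(String)

class HistorySummary(BaseModel):
    """The Pydantic 'view': only these fields are pulled off the instance."""
    id: int
    name: str

    class Config:
        orm_mode = True  # allow HistorySummary.from_orm(orm_object)

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
with Session(engine) as session:
    session.add(History(id=1, name="My analysis"))
    session.commit()
    print(HistorySummary.from_orm(session.get(History, 1)))  # id=1 name='My analysis'
```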
So I think that's a good approach: start at the layer just below the controller. We handle `payload` and keyword arguments very ugly in the API. I've written dozens of API endpoints, and I still don't quite understand when we're getting keywords, when we're getting payload, and when payload is a keyword. It's really ugly, and FastAPI is much better documented for all of that. But if we could start at that layer just below and get all that stuff ready... I think there are some examples now where we can see: if we get the controllers to look like this, they'll work in either framework just fine.

I completely agree, and we should definitely pay a lot of attention to this during development. I think the nice thing about also doing the FastAPI routes is that we can get outside contributions this way, because as a developer you actually produce something you can see. You can say: hey, this thing now appeared because I went in and documented the things that need to go there. That's my reason for why we should already be adding FastAPI routes. But I agree with you; the two aren't exclusive.

Yeah, that's a fair point. I don't disagree that it's very nice, and it is great to get that visceral "hey, look, the documentation for Galaxy now works". That's awesome.

Yeah. I wrote a bunch of models for the refactoring API, but I didn't change the top end. They do make the Sphinx docs a little better; it's still not good, right? And there is a plugin to integrate Pydantic into Sphinx, but it doesn't work very well.

The other thing is that when we add new routes, we can already do things the proper way. For instance, I've done some work on adding this workflow invocation detail component, so you can now see the workflow invocation details. What's missing is filtering invocations: you want to list and order invocations, so you need pagination, search, and all these things. These are all parameters that haven't existed previously, so that's also an opportunity to add new things in the way we really want to do them (see the sketch after this exchange for the shape of that). And the same goes for what I was saying about adding GraphQL and WebSockets: we can start adding those now. That doesn't need to wait at all.

And I guess one conclusion here is that we should push for getting rid of uWSGI as early as we can, right?

I mean, it's not fair, because I've said this for years, but yes, me too. And it's also not fair because I think our admins are not on the call, nor is Nate. The perfect venue for determining that.
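For the shape of those typed parameters, a hypothetical sketch; the names and defaults are invented for illustration, not the actual Galaxy route:

```python
# Hypothetical sketch of typed, self-documenting pagination/filter
# parameters for an invocations listing; not the actual Galaxy endpoint.
from typing import List, Optional

from fastapi import APIRouter, Query

router = APIRouter()

@router.get("/api/invocations")
def index_invocations(
    limit: int = Query(20, ge=1, le=500, description="Maximum number of invocations to return."),
    offset: int = Query(0, ge=0, description="Number of invocations to skip."),
    workflow_id: Optional[str] = Query(None, description="Filter by encoded workflow id."),
) -> List[dict]:
    # Each parameter is typed, validated, and documented in /docs for free,
    # instead of arriving in an opaque `payload`/`kwd` dictionary.
    return []  # stubbed result
```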
Can you talk a little bit about the plan for the message queue and asynchronous tasks? I'm trying to help with the effort, but I still don't see: does it require Uvicorn, how does this get us closer to having workers, or is it a sort of parallel effort?

I think, and correct me if I'm wrong, because you've looked at this longer than I have, one of the things is that we traditionally didn't really have a good way to start workers, because we still wanted to have that run.sh experience, where you just run run.sh and Galaxy is functional and working. And so I think one part of the solution is to decide on something that can manage these processes for us; that's the main thing, actually. I have a PR on Galaxy that adds a really stupid way of starting a Celery worker within the Galaxy process, which is of course not a good idea, because we shouldn't get into the business of handling processes. But if we can find something, ideally Python-based so we can pip install it, that can handle the basic requirements of starting processes in a coordinated way, I think that gets us pretty far.

The other, let's call it potential blocker, is that we didn't use to have a dependency on a real AMQP server: RabbitMQ, or SQS, or whatever Celery can use. So that's another challenge. But we've always required, for instance, PostgreSQL to set up a production Galaxy instance, so maybe we just have to say: we now also need RabbitMQ. Sorry, go ahead, John.

Oh, no, I just had another question. I was going to say, that just about sums it up. The crux of it is that we had no way to start a real process manager, so that's step one. And then, once we can start a Celery worker, or a number of Celery workers, there's real value in actually decomposing stuff; until then, not nearly as much. We don't have to be stuck on RabbitMQ, though, is what I was going to add. We could look into other transports that we want to use, whether that's ZeroMQ, since that's what Circus uses, or any number of things.

I want to add here that I think ZeroMQ is fantastic. It's super easy to set up, super easy to control. And I have a WebSocket subscription endpoint that sort of works (that's why there's no PR yet), where you can open a WebSocket in your console and see when Galaxy finishes datasets. That's just adding ZeroMQ messaging wherever we also write things to the database, and it works across all processes, because it's ZeroMQ. I think the problem with ZeroMQ was securing it. And that's also the problem with Circus, because they don't really give you a ready-to-go recipe for securing ZeroMQ. That's probably not a problem for 99.9% of instances, where Galaxy just runs on a single machine, but usegalaxy.org runs on multiple VMs, so we need to figure out how we want to secure this.

Does answering that question limit us in terms of how we run Galaxy? Like, are we going to want something that runs Galaxy and is also a process manager, or do you think Uvicorn could also work?

I mean, I was looking around, and I didn't really have the impression that people use Uvicorn to start Celery workers; whenever you look for that, people are using Supervisor. But Supervisor seems quite centered on the approach where you have it installed system-wide, and I don't know why we haven't gone further down the Supervisor route in the past. So maybe I should add Supervisor here.

Yeah, can we just run Supervisor in user space and ship it with Galaxy?

Yeah, that would be another option. I don't know how people like the cloud team would feel about this, or how the admins feel, given that they like systemd. That would be the advantage of Chaperone, which has a systemd interface and understands systemd signals and those things. We should just start a big collection of options and start adding data to a big issue, I think.
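For reference, the Celery side itself is small; a hypothetical sketch assuming a RabbitMQ broker, with task and module names that are not Galaxy's:

```python
# Hypothetical sketch: long-running work moved out of the request cycle.
from celery import Celery

celery_app = Celery("galaxy_tasks", broker="amqp://guest@localhost//")

@celery_app.task
def purge_datasets(dataset_ids):
    """Runs in a worker process; the API route only enqueues the task."""
    for dataset_id in dataset_ids:
        ...  # delete one dataset

# In an API route: result = purge_datasets.delay(ids); return result.id so
# the client can poll for status. A process manager (Supervisor, Circus,
# Chaperone, ...) would start workers with: celery -A galaxy_tasks worker
```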
Yeah, and I mean, these things are not hard to set up. There's an issue on the Galaxy repo (I want to link it here) where Nate specced out what we should be able to do, and I don't suspect it will be a hard job to try out what that would look like with Circus and Chaperone and Supervisor. I get the idea that with Circus it will probably be possible to stuff the configuration into galaxy.yml, which might be nice, but might also be a bad idea, I don't know. But just having a single config file might be nice for our users.

Yeah, shove it all in pyproject.toml.

So, at least for the Kubernetes case, I guess it would be preferable to have Kubernetes as the supervisor, without a separate process, right? So there is that. And I was just wondering: is it an option to have something simple, where if you do run.sh you just use an in-process Celery worker that you kind of start manually, and for a more production-grade setup you have to start the workers separately anyway?

Yeah, that's kind of the route I went, except I started the worker during the Galaxy app startup. But that would be another option, I think. Although the cloud setup doesn't necessarily need to use this, right? If we integrate this with run.sh, or whatever the successor to run.sh is, that's just a very light wrapper and coordination for starting and stopping processes. So yes, Kubernetes could bypass that and go directly to the code that Circus or Chaperone or Supervisor would otherwise manage.

It does seem, I'm trying to think about it, like we lost some time and some development effort, maybe even a significant amount, investing in uWSGI messaging. And I'm trying to think whether there are lessons we learned there. Maybe one of them, and I don't know, is that we spent too much time trying to get run.sh to be too production-y. So that might be something to keep in mind going forward, but that might be the wrong lesson. The lesson might just be that uWSGI is not a good piece of software.

I think we also put our eggs in one basket when that wasn't really necessary, right? For this in-process communication, we went with uWSGI's messaging protocol, when we could have just as easily gone with ZeroMQ, for instance.

I think that's a good point, though, John's point before: there's no reason run.sh has to be anything but a functional development setup. It doesn't need to be production-y at all.

Yeah. Hey, Marius, can I ask a question real quick?

Sure.

You said you had what sounded like an experimental branch where you were able to open a WebSocket and sit there and watch datasets finishing. Could you point me to that? I'd love to take a look and see how it all fits together.

I'll do it after the call. I mean, I'm not sure I put things in the right places; it's probably all in horrible places, as an experiment.

Excellent, thank you. I'd just like to see how the parts connect, you know what I mean?

Sure, yeah.
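Roughly what the ZeroMQ piece of such an experiment could look like; the addresses and message shape here are assumptions, and note that this is exactly the unsecured PUB/SUB setup whose security was flagged as the open problem above:

```python
# Toy pyzmq sketch of cross-process event fan-out; not the actual branch.
import zmq

ctx = zmq.Context.instance()

# Publisher side: called wherever Galaxy also writes the change to the DB.
pub = ctx.socket(zmq.PUB)
pub.bind("tcp://127.0.0.1:5555")

def publish_dataset_event(dataset_id: str, state: str) -> None:
    pub.send_json({"topic": "dataset", "id": dataset_id, "state": state})

# Subscriber side, e.g. the process serving a WebSocket endpoint.
sub = ctx.socket(zmq.SUB)
sub.connect("tcp://127.0.0.1:5555")
sub.setsockopt_string(zmq.SUBSCRIBE, "")  # no topic filtering

def next_event() -> dict:
    return sub.recv_json()
```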
So another branch of mine is probably very similar to Dannon's branch for GraphQL. I think this sort of thing goes together: we manage the subscriptions with GraphQL, because GraphQL has a language for doing subscriptions, and there isn't really a standard for that in a regular HTTP API. I mean, not that you couldn't do it, right?

You don't want to replicate q and qv in GraphQL.

Exactly. That's exactly what I meant not to do, because there is a standard way of doing it in GraphQL.

Is there anyone who would like to start working on this? Are there questions on how to get started?

I do have questions on how to get started, but I got here late, and I have a feeling that if I rewind this video, my answers will be there.

Yeah, there's also this slide here. If somebody's recording, I'll just watch it again.

Yeah, it's being recorded. I will forget to publish it for two weeks and then give it to Dave. Sometime in March, you'll be able to see the video.

Well, sometime in March, you'll have your PR.

Yeah. And of course, if you have concerns about this, let us know. If you don't like the approach, or you think we should do something differently, now's the time. We're still trying to figure things out.

I'd just love to get an experimental subscription that replaces that polling that I do; I'd love to try and generate one of those. I do not like polling either. I noticed that one of the slides that was up when I first came in said we do a lot of inefficient polling. And yeah, we really do, and I would love to eliminate that.

So, if there are no other questions: I guess I should say we need more people to present. Not everything needs to fill an entire hour, so if you have small projects, get in touch. Do you want to present, do you have something we should be working on, or do you want to make a case for or against something? Even if it's just five minutes, that's totally fine.

I mean, there are also people who have requested to present, and we've had several full sessions; it's kind of a boom-bust cycle. Some weeks we've got nothing, and then several in a row. So apologies if you're hearing that and you're thinking, I wanted to present: we're going to get to you. But we also need more.

It's Janet. Sorry, if we've got a few minutes, and since we've got John here, do you mind if I ask some questions about the format of the Galaxy workflow files?

You've got six minutes. Go ahead.

Okay. So I'm dealing with some bugs on the IRIDA side. IRIDA parses the Galaxy .ga format, converts it into its own format, and then provides an interface to Galaxy. And things seem to have changed somewhere around Galaxy 18-point-something. I'm busy fixing things up at the moment, which is forcing me to learn Clojure. But where are things going with the .ga format? Because I don't want to fix everything up and then have the format change again. What can we look forward to in the future?

I would say that Galaxy is going to continue to support the .ga format for the foreseeable future. We can now also consume YAML files, if you want to use the gxformat2 workflow format, and there's a schema available to see what that looks like, to some degree. We use that extensively for testing, et cetera, and Planemo supports it. But reproducibility is our thing, right? So we're going to continue to support the .ga format indefinitely.
And apologies if something in there broke; we try not to break that in big ways, or really at all. Sometimes there are things we assume are non-functional that turn out to be functional, depending on how people are consuming them: just because something's not used when Galaxy reads the file back in doesn't mean someone wasn't reading it in a certain way. But I would not worry about .ga files disappearing; we'll continue to be able to consume them. Even internally, the way we implement gxformat2, or CWL also, is to convert them into something that looks like the native models in the database. We take those external formats in, we tweak them to make them basically extended .ga files, and then we consume the .ga file. That's a very hand-wavy description; it's not what really happens, but it's an approximation of what happens. So I would feel comfortable continuing to support the .ga files and fixing whatever that issue is. We would like to have better formats for reading, consuming, and writing these things, but the miserable experience of dealing with those .ga files is going to continue to work. So apologies for how ugly they are, but it's going to continue to work. Is that sufficient?

And is there any documentation about what's meant to be in there? I presume not.

No.

Any way to validate one, anything like that?

Planemo lint will validate it, sort of. It will catch big, glaring problems, but that's about it. It won't go through and validate tool state and the like. So if you've got a tool ID and its contents don't match what the tool is expecting, it can't catch that. The only thing that can catch that is loading it into a running Galaxy with those tools loaded.

Okay.

I guess we probably need to end on time. So thanks, everyone, for coming. See you in two weeks. Bye.

Bye, everyone. Thanks, Marius. Thank you. Thanks.