Should I start? Go ahead. So, a little bit of history first. This browser development environment continues the thing I presented at the roundtable in October, I think. It started as a CoFest project between me and Mohamed Safadi, who was our summer intern at Hopkins last summer, and it continued as my pet project, mostly because I've been using it for my own ease of development. Then around January or February, there was Goldsroom and the folks at PSU who were working on GPU support on Kubernetes, and they asked what the easiest way was to change things in Galaxy on Kubernetes and test them. So I formalized some of the commands from my notes into a few scripts and gave those to her, to be able to create a Galaxy branch, change code in there, and deploy it on Kubernetes without having to worry about linking anything. Then this week I took those scripts and merged them with the ChatOps from last fall, to have the more modern, extensible version I'm going to show today. What it is, basically: deploying Galaxy on Kubernetes is the base, plus a few tricks to apply modified code without rebuilding the image. The goal is to allow developers to test and share live instances with dev code, and also to open the door for development. I'm going to start with a live demo in a demo organization that mimics the galaxyproject setup. I've opened a PR; the PR adds a debug statement in the Kubernetes runner. So, assuming this is a PR I want to test, I say /deploy. One thing new since October is that the bot will respond: when it sees the command, and when it deploys it, it reacts to the comment.
So that's just the signal that it saw the command and dispatched the workflow — the deploy workflow. If you look in Actions, you can see this dispatch command that reads the comment and dispatches the actual workflow. One thing to note here: there's a list of commands that are accepted, and a scope for who's allowed to run them. Right now the scope is that anyone with write access to this repository can dispatch commands. This can be customized, so there can be different flavors of commands — one for committers, one for the public — for example, a time limit of ten minutes on public instances, and two hours or a day for developers, things like that.

Anyway, after this is done, the actual deploy command is dispatched, and this is what actually deploys Galaxy on Kubernetes. The first comment appears when this workflow actually starts running. The point of the separate comment is that there might be a queue — there is a queue — with other tests running, and this workflow has to wait after being dispatched. I didn't want that to be misleading, with the user just waiting there, so I added a small comment when the actual preview workflow starts, so you know that about a minute, a minute and a half, after that comment, Galaxy will be up. That should happen soon. Any questions so far while we're waiting? I guess I could answer them and actually show what's happening.

I got a question: is there a list of available commands? I will go through that afterwards. Is there a way to make this a dropdown, or some way where you don't have to go to a lookup table? I could add a help command that would just print everything — that would probably be the way to do it. I don't know about a dropdown.
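The dispatch step described above — reading a slash command out of a PR comment and checking the commenter's scope — can be sketched roughly like this. The command names, argument syntax, and permission policy below are illustrative assumptions, not the actual workflow's implementation:

```python
# Hypothetical sketch of the comment-dispatch step. The real setup uses a
# GitHub Actions workflow; the core logic is parsing "/command key=value"
# out of a comment body and gating on the commenter's association.

ALLOWED = {
    # command -> author associations allowed to run it (assumed policy)
    "deploy": {"OWNER", "MEMBER", "COLLABORATOR"},
    "tear-down": {"OWNER", "MEMBER", "COLLABORATOR"},
}

def parse_command(comment_body: str):
    """Return (command, args) if the first line is a slash command."""
    line = comment_body.strip().splitlines()[0]
    if not line.startswith("/"):
        return None, {}
    parts = line[1:].split()
    command = parts[0]
    args = {}
    for token in parts[1:]:
        # "key=value" tokens become arguments, bare tokens become flags
        key, _, value = token.partition("=")
        args[key] = value if value else True
    return command, args

def may_dispatch(command: str, author_association: str) -> bool:
    """Check whether this commenter is in scope for this command."""
    return author_association in ALLOWED.get(command, set())
```

Different command tables per audience (committers vs. public, as mentioned above) would just be different `ALLOWED` maps.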
My hope is that if people actually start using it, it will become kind of a habit. Something about the deploy command: you can add arbitrary settings. So if I do — I think I called it xargs — `/deploy xargs=cvmfs.enabled=false`, or true or whatever, it will pass that through to the `helm upgrade` command, so you can customize things like that. One thing I want to add is presets: `/deploy oidc`, `/deploy cvmfs` — just the most common things. So if you want to test OIDC, it will automatically create a client with Keycloak and set up the OIDC config to communicate with it; we can do the same for SAML and so on, just to have testable presets. I don't know how to make a full-on dropdown — a dropdown is kind of difficult given the GitHub chat restrictions, I guess — but some indication of what's available, yes.

You can see here that after it deployed, it posts what the Helm install prints at the end. One new thing I added is that it also gists the logs. One thing that was always common is that it's useful to have the logs, especially if there's a problem at startup or you're debugging something, so as an automatic step it puts all the logs from the system into a gist: you'll have the job handler log, the web handler log, the workflow handler log — it's all there. And on top of all the handlers, I also grab the events — the actual Kubernetes events — so if you want to see timing, like how long it took to pull the image, it's all there: when all the pods started and stopped, when images were pulled, all the Kubernetes events.

So that's deploying. Now, assuming I want to test — back to this PR. This PR adds a debug statement in the Kubernetes runner, so I want to test that it actually works.
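The xargs-to-Helm forwarding mentioned above might look something like this; the release name, chart reference, and value keys are made up for illustration, and the real scripts may assemble the command differently:

```python
# Illustrative sketch: turn the chat command's comma-separated key=value
# settings into `helm upgrade --install ... --set key=value` arguments.
# Release and chart names here are placeholders, not the real ones.

def build_helm_command(release: str, chart: str, extra_args: str = "") -> list:
    cmd = ["helm", "upgrade", "--install", release, chart]
    for setting in extra_args.split(","):
        setting = setting.strip()
        if setting:
            cmd += ["--set", setting]
    return cmd

# /deploy xargs=cvmfs.enabled=false would then roughly translate to:
#   helm upgrade --install <release> <chart> --set cvmfs.enabled=false
```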
I go into the Galaxy that was just deployed, upload whatever data, and start a job — it's under text manipulation somewhere. So I have two jobs in here, ready to run. Again, any configs can be changed — one could point the job conf somewhere else if you want to test something different — but anyway, I started two jobs. Now I want to see as quickly as possible whether that worked. I do `/tail-logs job`, which tails the logs of the job handler. The two commands I added this week, since October, are tail-logs and gist-logs: tail-logs gives the last 100 lines or so. You can say `/tail-logs all` or `/gist-logs all` and it will do all of them, or you can say job, web, events, or workflow to get a specific one. tail-logs replies with a comment containing the tail; gist-logs replies with a link to the gist.

There we go — it replied with the job handler log, and you can see the "debugging testing" line that I printed in this PR. And that's it, basically. If I want more, I can do gist-logs to get the whole thing.

If you want to change code, you can. For example, I can come here and say: instead of printing this once, I want to print it three times. I update it, commit the changes, come back here, say /deploy again, and it will update the code to the new code. That's how you iterate: change things, rerun the job, check the logs, and do everything in GitHub, basically taking Kubernetes for granted, not having to worry about setting any of that up, and just having live Galaxy instances. So this replied with a gist-logs link from here. But yeah, that's basically it, and the final command is /tear-down, which just kills it. Any questions? Yes, I have a few questions.
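The tail-logs behavior shown above — last ~100 lines posted as a comment, everything else gisted — reduces to a small amount of logic. In the real setup the text would come from `kubectl logs`; here it is passed in directly so the sketch stays self-contained, and the names are illustrative:

```python
# Sketch of the tail-logs idea: take the last N lines of a handler's log
# and format them as a fenced GitHub comment body. The log text would come
# from `kubectl logs <pod>` in practice; here it is injected as a string.

TAIL_LINES = 100  # the current hard-coded tail length mentioned in the talk

def tail(log_text: str, n: int = TAIL_LINES) -> str:
    """Return the last n lines of a log."""
    return "\n".join(log_text.splitlines()[-n:])

def comment_body(handler: str, log_text: str) -> str:
    """Format a tail as a fenced block suitable for a PR comment reply."""
    return (
        f"Last {TAIL_LINES} lines of the {handler} handler log:\n"
        f"```\n{tail(log_text)}\n```"
    )
```

gist-logs would skip the truncation and upload the full text instead.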
Is the Kubernetes cluster that this gets deployed to configurable? It is. So, say I have a local Kubernetes cluster — can I deploy to that? In theory, yes — as long as GitHub can communicate with it. These are all the configuration values I needed to set this up in any repository: this token is just a token for the account that should reply to the comments; the kubeconfig is whatever kubeconfig points to a Kubernetes cluster; and this is just the domain you want the instance to be at. As long as you have a valid kubeconfig, it can point to any cluster. What's currently being used is a Jetstream cluster running CloudMan, which allows it to auto-scale. So the potential plan, I guess, would be to leave a tiny head node always up — it doesn't cost that much, it's a tiny node — and when people start using it, it will auto-scale: if there are ten people using it, it will scale up three nodes for that time, and scale them down when people are gone. So at least for common usage, we can probably have a Jetstream cluster always running for everyone to share; but if you want to do your own heavy-duty development, you can also put up your own cluster, put a separate kubeconfig in your repository, make a PR to yourself, and run this on your own cluster as well.

And if you don't want to deal with ChatOps but want the same functionality directly on the command line — if you want to keep coding in VS Code or wherever you're coding, but be able to deploy on Kubernetes — that's also the reason for extracting all of these scripts: to be able to do that.
There's the script for helm-update; symlink-branch, which symlinks Galaxy into this setup; and update-links, which updates the symlinks between the Galaxy repository and Galaxy Helm. Galaxy Helm needs the files in its own directory, so what it does is symlink the Galaxy repository into Galaxy Helm to be able to deploy the new code. All of this is automated, so with three bash scripts — just running update-links, and then with a branch and namespace — it will set up the values file and everything for you, to do the same as from ChatOps but from the command line. Obviously I need to document and clean these up. So the plan is also to be able to do it from the command line: the GitHub thing is a convenience layer, but the underlying part — being able to develop on Kubernetes — is not tied to GitHub. It's basically a layer on top, and the scripts themselves can be used from the command line.

Okay, so how do you tie this to a branch of Galaxy that you're doing development on? Say I created a fork, I'm working on a bug fix, and I want that deployed. You'd git clone Galaxy Helm as well, because you need that too — I haven't made the installation one step yet — and then basically just do `bash symlink-my-branch <path to your Galaxy directory>`, and it will set things up automatically. This repository isn't very smart: it expects the links to be in the branches directory, and it expects a certain structure. That's what the symlink command is for — creating that structure by symlinking things. So you clone this repository, run the symlink script pointing to wherever you have your Galaxy, then run update-links and helm-update, and you can deploy your current branch of local dev code onto Kubernetes.

Okay, and finally — I'll be quick — is there a README so I can repeat this on my own?
No, but I should make one — I do have the commands I run myself, as examples — and there will be one by GCC.

Oh, one more thing: going back to this deploy, you can see it says "has been upgraded," not "installed," because it already existed. So if I want to check that the new code is there, I can rerun a job and check that it prints three times — `/tail-logs job` again and it will have it three times. I'm not going to wait; I'll come back to it in a second.

Yeah, so that's basically it. One small comment on what I did to make this work. One problem is that right now, whenever you push something, all the tests run and they clog up the queue. Because the response to a command is itself a workflow — it runs in about 15 seconds, but it will wait in the queue if everything else is running — I took all the tests except linting and put them behind a /test command, so the tests don't run automatically on the commit; you say /test to run them. This might not be desirable; it's just to keep the queue from clogging up here. It's a bit less of a problem on galaxyproject, because we're not really making PRs from the main repository — it's always forks — so this can run on the Galaxy repository's queue while all the tests run in the fork's Actions space, because they run on push. So we can disable testing on PRs and only have testing on push: the push happens on the contributor's repository, we keep the galaxyproject queue for this, and we still have the option to run tests on our side with /test — this repo is an example of how to do it. The point is just that the queue must not be clogged up: if you're waiting ten minutes for the command to even be read, it's less attractive, less desirable, harder to use.
Yeah, that was the comment — and here it is, printing "testing" three times, so it updated the code, and when I run a new job you can see the new code. That's kind of it. Any other comments?

I mean, from the perspective of a pull request reviewer, I think it's fine to wait — maybe it's been a day since the last commit and I want to try out the code in the PR. It doesn't seem like a huge problem. If we can fix it like you're saying, that's great, but I don't think it needs to be a blocking issue for getting this onto the main branch, because I think there's a lot of value even with the wait time.

Yeah — and that was going to be my next point: how to get this onto galaxyproject/galaxy. Basically, all we have to do is, one — and I'm assuming it's okay, but I'd like an actual okay — set up a cluster on Jetstream that will always be running, and decide how many nodes it can have. Other than that, it's just setting up those secrets in the Galaxy repository. I can squash these commits, but there are about 11 commits that are all in GitHub workflows; it's just adding three workflow files, one script, and the secrets. That's it to get it working on galaxyproject, I think. One thing is it would only work for committers at the beginning, because right now it requires write access to the repository. I would feel decently comfortable enabling it soon-ish for committers; for an actual public-facing thing where anyone could use it, I'd give it a little more time, to clean up the presets and the tear-down and so on. But it should be fairly easy — just one PR to get it onto galaxyproject/galaxy, once we set up a cluster that's always running.
Honestly — and I guess I have this in a slide — I could also make a separate repository for deploying the cluster itself, with ChatOps. That way, if somebody just wants to make their own cluster, instead of having to go in and run the Ansible playbook themselves, they could just ask the bot for it — like what Kyvan asked about setting up your own cluster. Sorry, I'm babbling, but yes, it's possible.

You mentioned needing permission to get a standing cluster on Jetstream — do we have permission to do that? All right. Awesome. Actually, Keith brought up the TACC/IU divide this week: there's the IU Jetstream and the TACC Jetstream, and our allocation on the TACC Jetstream has been pretty empty. I was thinking we could allocate maybe 200 gigs of data and a few nodes there before we start using it all up, and it can just stay as a standing cluster. I can do that by the end of the week if you want.

In terms of tear-down: does the cluster clean up if someone forgets to run the tear-down command, or does it just keep running forever? Currently, yes, it keeps running; however, I can add timers to tear down, say, an hour after the last commit or something. And I can make it tear down automatically when you close the issue or the PR. Right now it does just keep running. The cluster itself is always running anyway, so if it's just one or two Galaxies that stay up, for example overnight, those won't actually make the cluster auto-scale; they'll just run on the head node. So in terms of cost, something small like that won't matter. It would start being a problem at 20 or 30 of them staying up for a long time — that would start accruing node cost, or credits, anyway. But yeah, we can add some timer.
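The timed tear-down being discussed is essentially a TTL check that a scheduled (cron-style) workflow could run periodically; the two-hour TTL here is an arbitrary placeholder, not a decided value:

```python
# Sketch of an auto-tear-down policy: tear an instance down once enough
# time has passed since the last activity (last commit or comment). A
# scheduled workflow would run this check; the TTL is a placeholder.
from datetime import datetime, timedelta, timezone

DEFAULT_TTL = timedelta(hours=2)

def should_tear_down(last_activity: datetime, now: datetime = None,
                     ttl: timedelta = DEFAULT_TTL) -> bool:
    """True once the instance has been idle longer than its TTL."""
    now = now or datetime.now(timezone.utc)
    return now - last_activity >= ttl
```

A /keep command, as floated later in the discussion, would simply reset `last_activity` to the current time.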
Or we can add something like: every day at 4 a.m., clean up — I guess it's hard with time zones to pick a time when nobody's doing anything — but anyway, we can add timed cleanup. So that's something that's not there yet, and it's in the slides as something that needs to be added. Okay — that comment you made about closing it when the PR closes makes a lot of sense. Yeah, and that's very easy: it's just changing the `on:` trigger — not just the preview dispatch, but also on issue close — so it tears down on closed PRs. A super easy change.

Wonderful. So what are the tools that Galaxy has when it's deployed on this Kubernetes cluster? Is it similar to main, or is it bare-bones? We can do presets — I'm just going to jump ahead over here. I can do presets for deploys. Right now, the default one you're seeing deploys the cloud CVMFS repository, not the main one — that is, the one with the tools tested on Kubernetes, the default for Galaxy Helm. However, we can just switch out the repository — we were using main up until a month or so ago — so I could add `/deploy main` and it will deploy the tools CVMFS from main.

Another thing: by default, this gets deployed as a single admin user. That might be a problem, and in general this doesn't have authentication. Theoretically, especially if it's just for committers, we could all have GitHub identities on the cluster, and I could link this with Keycloak so that Galaxy is only available to people with specific credentials. But for now, yes, you're also an admin, so in theory if you want to test a specific thing, you can just install the tool and test it. If it can be deployed with main tools, that would be excellent.
Yep, I can add that as a preset — let me put it here so I don't forget.

Is it a security issue? Is everyone who logs into the Galaxy instance an admin? Well, you don't even have to log in — it's single-user, so it's logged in for everyone as an admin. Are we okay with that? I mean, that's equivalent to being root, right? But it's probably not a problem. Yeah — there's that bug where OIDC login doesn't work right now. I can fix that, and before we make it public I can add require_login and have it go through Keycloak, with Keycloak offering sign-in with GitHub, and make a list of all the committer emails and add accounts or something — so you have to be logged in, and the only people who can log in are the people who should have access. That shouldn't be hard to do; probably doable by GCC.

I mean, maybe it's not a problem, right? You wouldn't have the capabilities that the Galaxy user has on the system, and maybe they can't really do anything interesting anyway. The bigger question is if they take over the whole cluster — it's a Jetstream cluster running on a pretty empty dev allocation. So I don't think there's potential for huge leaks or huge problems, but I do think that if somebody really wants to, they could use this to get free infrastructure, basically. I don't know if that's a concern, or if we want to just try it first, see if it is a problem, and deal with it later. It's not hard to add a small layer of protection at least, so I can do that. I guess the thing I would care about is the credentials we're using to deploy it, but it sounds like it's a small dev thing, so maybe it's not worth worrying about.
Yeah, and there are no credentials to Jetstream or anything like that — it's just the kubeconfig. So if that leaks in any way, we can delete the cluster altogether, deploy a new cluster, and it's invalidated. It should be fairly easy to reset if it gets compromised. Cool. Anything else?

I think it's fine to deploy as it is. If you do decide to require login, maybe just require login for the admin user. I can make a special preset password for the admin user that we just pass around among committers and Galaxy developers; all users would have access to the instance, but to be admin you'd have to log in as that special user. That is a simpler solution.

If I'm submitting a pull request, couldn't I just modify the galaxy.yml to add myself as an admin user and get in as an admin that way? Technically, yes. But only committers can launch the cluster, right? Yeah — if you do the deploy command, you can use the extra args to set anything, so you could set the single-user admin user, but like John said, only committers can dispatch these commands.

Let me go back to this slide for a second — potential targets for this. These are the general groups I thought about. Newcomers, especially for Paper Cuts and things like that: when there's a one-line change or a typo-type thing, people could just go into GitHub, edit the file, make whatever change they want, test it, and say "this has been manually tested," and someone can review it, instead of them having to do a local setup. For reviewers: one, to not have to fork and
check out every branch to test locally, but also to be able to share with each other. Even for reproducing bugs: you could open a PR with just a small change to demonstrate a bug, make the bug happen, have the logs for it, and share it with the developer in the PR itself. Then whoever is going to fix it can go in there, make the changes, try them, give the instance back to the person who created the issue, and ask "does this look fixed?" — sharing that instead of a harder back-and-forth.

For UI developers especially, I thought being able to share a UI piece before it's done might be helpful: there are often design decisions a developer might not actually care about, and giving it to someone who actually uses Galaxy to test is useful.

For the Kubernetes projects: basically, if somebody wants to make a change on that side, it's now a decently hard setup unless you know the entire stack. This would allow people to make changes — especially as we're talking about things like Pulsar on Kubernetes — without having to worry about the infrastructure: take Kubernetes for granted and just target the part they're working on.

For Jetstream, it's an excuse for us to use our allocation more, and for the dev allocation to be shared more broadly instead of used by just a few of us. And for the Kubernetes team, we'll get a lot of guinea pigs; since it uses Galaxy Helm, it's also an intrinsic test of the Kubernetes and Galaxy Helm stack, so that's a plus for having more testing there.

Yeah — I just want to go through these briefly. Okay, quickly: there's the xargs, where you can pass arbitrary things to the helm install command.
I'll talk about users and public instances in a moment. I added main, as Kevin asked, for main tools — any other presets that immediately come to mind as useful? For the record, this is just setting special values so they can be set with one word, so it's super easy to add — just wish away. It doesn't have to be now; message me if you think of a useful preset.

gist-logs, tail-logs: I want to find a better name — "gist logs" is awkward and "tail logs" has two L's, and I don't like either. I'm thinking peek-log, maybe. Anyway: all, events, web, job, workflow — you can choose to get specific logs. Stuff that's easy to add: being able to set the number of lines — right now it's hard-coded to 100. Because it's a comment, I don't want to leave it open-ended, so I'm thinking 200 max, and beyond that you have to gist it. I can also add a grep, which might be helpful too — I guess I'd apply the grep before the tail — to be able to grep for specific parts and have them in a comment, instead of gisting everything and then grepping or searching yourself. Would that be useful? I answered that myself, because it was useful for me when I was working on something. Any strong objections to using BusyBox grep? I know GNU grep has a lot more things — the Perl regexes are popular — but BusyBox grep is there by default, so I was just going to use BusyBox grep instead of GNU grep unless someone tells me not to.

Tear-down: there are currently no options — you say tear down and it deletes it. If there are options people would want, I'm thinking you could say "tear down in 30 seconds" or whatever, "tear down in an hour," basically setting yourself a timer so you don't forget. Then: automatic tear-down on PR close, which is fairly easy; a timed tear-down since the last commit; and maybe introducing something like a keep command to reset the timer.
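The grep-before-tail idea above could reduce to something like the following, with Python's `re` standing in for BusyBox grep and the 200-line cap taken from the talk; function and constant names are illustrative:

```python
# Sketch of the proposed grep-then-tail flow: filter the log lines with a
# pattern first, then cap the reply at a maximum number of lines so the
# comment stays readable. Python's `re` stands in for BusyBox grep here.
import re

MAX_LINES = 200  # the suggested hard cap for comment replies

def grep_tail(log_text: str, pattern: str, n: int = MAX_LINES) -> str:
    """Return the last n log lines matching the pattern."""
    matched = [line for line in log_text.splitlines()
               if re.search(pattern, line)]
    return "\n".join(matched[-n:])
```

Anything larger than the cap would fall back to the gist-logs path.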
That's possible; it's a little harder — mostly figuring out how often to check, it probably has to be a cron job, things like that — but it shouldn't be too hard.

Some of the immediate things: client building requires yarn at least, and maybe a few other things, and the minimal image deletes those. So I was thinking there are two ways to do it: either take the minimal image as a base and extend it by adding a few things — sudo, yarn, a few useful tools, especially for client building but also for development in general — or use the first stage of the minimal image, which has all those things. I actually don't know if there's a way to just extract that outright, but there might be, and we could take the layer before everything is removed; it should have everything from the Ansible playbook. So it should be fairly easy to get to, and then I'll enable rebuilding the client on the fly.

Another thing, which is actually not here: I want to add /deploy rebuild, which would actually rebuild the image entirely and deploy the new image, as opposed to the code-modifying approach that slaps the changed code on top. That would be for actually building the image for a final test or something like that.

I've been using the scripts on the command line, mostly doing these things manually; I'm going to try to start using the ChatOps myself as well, just to give it more testing. Luke, I know we talked yesterday about you also using it for developing some things. If anyone else is at all interested in using this early, I can either give you access to the organization I already made, or help you set it up on your fork. If anybody wants to use this before it's live, just tell me and I'll set it up for you.
Oh, and the scripts already work at the Galaxy Helm level as well: on the Galaxy side it takes Galaxy Helm for granted and deploys your changes to Galaxy, and on the Galaxy Helm side it takes Galaxy for granted and deploys changes to Galaxy Helm — so you can test both live. It already works; I just have to add the command dispatches and such to the Galaxy Helm repository as well.

Then there's a possible adjacent project: theoretically, everything done here could be done not just for dev instances but also for something like workshop instances. Instead of a PR, we could give a few people the right to open an issue, and in the issue comment /deploy with whatever endpoint and whatever requirements they want, and have it deploy an instance of Galaxy on Kubernetes — basically a way to ask for a transient instance of Galaxy that is, quote-unquote, production-ready. In an issue or something like that, the bot would reply with the link, and when you're done you tear it down from there. If this is of interest, it should be fairly easy — it's easier than running with modified code; it's the same thing but with the standard release image. And it can use the same scripts, so it's not extra maintenance; they're maintained together.

Oh, and it sounds like we want to try to get this merged before the GCC hackathon and advertise it at the hackathon, especially for newcomers — maybe; I don't know. Maybe it's too ambitious to get it public by GCC, since that's a month away. But the tutorials and everything already written about the whole dev setup
are still using pdb and local setups. Oh, that was it — I'm done. Any questions, comments, anything else? Sorry I went over time. That's it for me. That's fantastic work, thanks so much, and thanks for presenting. Marius, do you want to talk about Gitpod?

So, I jumped in when Alex announced that he would talk about this, because I think it's an alternative — I mean, a complementary — approach. So, Gitpod: as the name suggests, they're probably running some Kubernetes in the background, but it's a ready-made service, and we've enabled it on the main repo. We have a Gitpod config file in the repo itself that sets up how an image is built, and then every branch and every pull request gets a prebuilt image, and from there you can launch VS Code. I'm going to show that real quick. Let's see.

All right, so you can see what I'm seeing on GitHub. If you've looked at the PRs before, you may have seen this thing here — that's on every PR. And Gitpod itself has this little extension that adds a Gitpod button to your GitHub user interface. When you click it, you log in with something — I'm going to go with GitHub. What that's doing is creating a workspace, and it's going to pull the container image that's configured in here, this .gitpod.yml file. It does a bunch of things that prepare a Galaxy development environment. We have some VS Code templates that are ready to go; they're stored as settings.gitpod.json, not as settings.json. If you've used VS Code before, you know you can put project-specific settings in these files, but we don't want to clutter yours, so we don't check them in under that path. So basically this runs when the container comes up: we install some tools and the development requirements, create the database, and build the client. All of this happens when the container is built.
So that's not happening when you run it — it happens when you create the PR and the image is being built. We expose port 8080 by default. And that's it.

Okay, so then — this has started, it's up. You can see it's currently still fetching packages for the client build, but we can already start; let me make it a bit bigger. Now, I've started it from a branch that I had been working on and then forgotten about, and I see there are some failing tests. So I'm going to show how you can use this to debug your tests. The tests were failing in both the legacy and the new API tests, so I'm pretty sure that's due to the code and not some random fluke. We can go to the test results at the bottom and see which test is actually failing. There we are. Yeah, let's pick one of those. This is a VS Code browser environment, so it works just like your local VS Code — you can search for the test. Okay, here's the test. I was supposed to pre-populate this, because while it's starting it's not quite done. So there we are. Once that is done, you actually have everything set up to run tests and to run a dev Galaxy instance. But the development requirements are not pre-installed with the build, so that takes a minute or two.

Yeah, so this is pretty cool for reviewing PRs, because you can also just use it as your regular environment — if you want to see some context around a change, you want to look at methods, and you don't want to scroll through GitHub, that's pretty easy here. So the point of this PR is to enable changing metadata for jobs that failed — where you already have jobs queued up and all that failed is the metadata. If that happens you're kind of stuck, because we normally don't want to allow the change, but it's safe to do in this particular situation. So, yeah, I mean this is kind of more comfortable.
Looking through code that way beats the GitHub interface, I think. But the real power comes when the setup is finally done: debugging in VS Code. So there's the .vscode folder with the two template files — one for the launch configurations and one for the project settings. In the settings we set up all the testing modules, and the launch configurations populate what's available here: you can run just the unit tests, you can run tool tests within Galaxy, and there's a launch configuration to run actual Galaxy in the debugger, where you can stop it at breakpoints — and the same for tests.

And it's finally done — it's currently discovering tests. You can go to — yeah. Previously all these things were unavailable because nothing was installed, but now this spinner is rolling, which means it's discovering tests. All right, cool. So here are the tests, and we already found the test we wanted to debug. There we go. We wanted to debug this one here — that's in the Galaxy API tests, tools upload: test_upload and test_validate, they were both failing. If we click on the little debug icon there, it starts the pytest session, all running through VS Code itself. And within the debugger you can find the code that we've been changing — this one here — and put a breakpoint in there. Breakpoints work like in most IDEs: you just set the breakpoint, execution stops there, and you can change things and inspect things. Also, the test framework opens ports — Galaxy starts up a test server, and you can open those ports and actually see them in the browser. So let's do that.
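Before opening it, a quick aside: the launch-configuration templates mentioned above might look roughly like this. This is a hypothetical sketch of a `.vscode/launch.json` — the configuration names, the entry point, and the debugger type are assumptions for illustration, not Galaxy's actual templates.

```jsonc
{
  // Hypothetical sketch; names and paths are illustrative.
  "version": "0.2.0",
  "configurations": [
    {
      // run actual Galaxy under the debugger, stopping at breakpoints
      "name": "Galaxy (debug)",
      "type": "python",              // "debugpy" in newer VS Code releases
      "request": "launch",
      "program": "${workspaceFolder}/run.py",   // assumed entry point
      "console": "integratedTerminal",
      "justMyCode": false
    },
    {
      // run a single pytest file under the debugger (the debug icon
      // next to a discovered test does the equivalent per-test)
      "name": "pytest: current file",
      "type": "python",
      "request": "launch",
      "module": "pytest",
      "args": ["${file}"],
      "console": "integratedTerminal"
    }
  ]
}
```

Because these are checked in as templates (e.g. `launch.gitpod.json`) and copied into place on workspace start, every fresh workspace gets them without overriding a developer's local VS Code settings.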
Here's the URL — it takes a moment, especially if you're sitting at a breakpoint and want to continue, because that kind of blocks Galaxy. Yeah, so that's where we are: the breakpoint has been hit, and ok-to-edit-metadata should be false because that file is in use. Right, that's context for the bug, but that's how you can debug individual tests. You can also start an entire configuration — here you have the choice between the test frameworks, debugging just the unit tests or the tool tests. What doesn't work is Selenium, because you need a local browser or some X11/VNC magic. But yeah, that's how you can do it.

All right, it found that port 8080 is active, so again we can open that — it takes a few more seconds. There we are: your Galaxy instance is available. This is a really bare instance — exactly the same as if you had just cloned Galaxy and started it up, so nothing is set up. This is also really cool if you're working on multiple branches, especially if you know they have different requirements and dependencies: you can just switch your browser tab, go somewhere else, and work on another instance without having to install the dependencies or build the client and so on.

And this is running in the debugger, so you can also intercept calls as usual. Actually, let's create a job and intercept that one. We can run a tool — it also comes preloaded with the test tools that let you fail certain things: the job environment tools, the job properties test tool, tools that just fail in specific ways. If you set this flag here, the job will fail — but the point is just that I wanted to hit the breakpoint here. Right, so that's the request that came in: the job properties tool, the version and so on.
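At a breakpoint like this, the debug console is a live Python interpreter against the paused frame, so the intercepted request payload can be edited before continuing. A minimal sketch — the payload keys here are invented for illustration, not Galaxy's actual request schema:

```python
# Illustrative payload -- not the real Galaxy job request shape.
payload = {"tool_id": "job_properties", "inputs": {"failbool": True}}

# At a breakpoint you would type these lines into the pdb / VS Code debug
# console; here they just run inline to show the effect:
payload["tool_id"] = "upload1"           # swap which tool the request targets
payload["inputs"]["failbool"] = False    # clear the flag that forces a failure

print(payload["tool_id"])  # -> upload1
```

After editing, you resume execution and the handler proceeds with the modified payload, which is what makes this useful for quick what-if experiments.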
So, yeah — if something fails and you have a traceback, you can try to find where it is, go in there, and play around with things. From the debug console you can also change things in the payload — this is a Python interpreter, so you could, for instance, change the tool ID to upload1 or whatever you want, just as an example, and then continue. And that's really great for quickly iterating on what you're developing. I hope you've seen how that's useful, and we can stop here — it's not my exclusive development environment, but for quickly reviewing PRs it's great.

Can you show real quick what saving a file and making a commit looks like? Can you do that in that environment?

You can. Let's see — let's do some nonsense like this, which is nonsense. And then you have the — I mean, I don't really use the interface, but you have the changes here. Oh, I guess that's how you stage what I'm doing. Okay, that's fine. I have created branches and stuff from here.

Does it have your GitHub credentials? It doesn't, right — like if you commit, there's nobody to push as?

When you start pushing things, it asks whether it can fetch your credentials. Cool. Even within this VS Code I just use the command line, so I'm not really sure how the interface works for it.

Stupid question — are there limitations, stuff that you would like to do but cannot?

Yeah. I'm using the Vim key bindings, and you cannot change the key bindings in the browser version. Selenium, as I said — because you need to control the browser, that's not possible. I mean, you can run the Selenium tests headless, but I like to see what's going on. Beyond that, I haven't seen much that you can't do. What's also a little bit annoying is that after an hour the workspace goes away.
So if you run the poetry update with our dependencies — I thought I was being smart: just run it in the cloud, it should be faster than my local machine. It is, but it still takes more than an hour. So then, unless you come back to the tab and refresh, it just goes away. It does keep the changes — yeah, it keeps the changes even if you haven't committed, and it keeps the workspace, but the workspace has to start back up. Your currently running processes will be terminated, but your changes will be there.

So, yeah — I can only recommend it. Also, the config there is a good kickstart for your local VS Code; I have a few more things in my local VS Code that I should add back. But that's how you can get started with debugging and launching tests in there. Also, writing tests is really cool: if you write an API test but you don't exactly remember what the responses look like, you just put a breakpoint there, use the interactive console to write your test, and when you're done and it works, you copy what you did back into the code — and you know the test is going to work.

Looks like that's it — awesome, amazing stuff. Thanks for showing that off. I'll stop the recording.
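The interactive test-writing workflow described above can be sketched like this. Everything in it is invented for illustration — the helper name and response shape are not Galaxy's real API or test framework; the point is just where the `breakpoint()` would go while you explore the response.

```python
def run_tool(payload: dict) -> dict:
    """Stand-in for the API call a real test would make through the
    framework's test client.  The response shape is made up."""
    return {"jobs": [{"state": "ok", "tool_id": payload["tool_id"]}]}


def test_tool_run():
    payload = {"tool_id": "upload1", "inputs": {}}
    response = run_tool(payload)
    # While writing the test, you would uncomment the next line, poke at
    # `response` in the interactive debug console, then copy the
    # assertions you tried back into the code here:
    # breakpoint()
    assert response["jobs"][0]["state"] == "ok"


test_tool_run()
```

Because the assertions were first executed live against a real response, you know the test passes the moment you paste them back in.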