All right, I've started recording. Go for it.

Okay. Sorry, before you begin, I just want to say that we are prepping something. I was telling you about the Global Galaxy Steering Committee and the governance model: we're going to start pushing some documents toward the community, so we will need comments. For example, we formed the steering committee, and Enis is part of that committee along with Dan and Jeremy here, I believe. You'll see more soon. On that note: Enis.

I have to switch some tabs, so I may have to re-share the screen because of the way the screen size works in presentation mode. I'll talk about Custos specifically, but I'll actually start a little broader than that: how can we think about improving Galaxy server security across the community, and then some methods for handling user accounts, groups, and secrets within Galaxy. This is part of the Custos project, which was funded by the NSF about 18 months ago in partnership with Marlon Pierce from Indiana University and Jim Basney from NCSA. I want to talk about this both to collect feedback and to inform everybody about what has happened, as a sort of midpoint of this project, and to see where things can go and what the interest is at the broader level. The four topics I'd like to cover are: first, the recommendations for securing a Galaxy installation that we've developed over the past six months in collaboration with Trusted CI; second, a completed implementation of delegating user authentication, since rather than implementing everything in Galaxy, the trend over the last decade has been to offload functionality, authentication being one example of such an approach; third, some future plans from the project standpoint, with ideas on secret and group management within Galaxy; and fourth, an outcome that came up because I gave this as a lab talk at Hopkins.
About 10 days ago an idea emerged there: maybe this could benefit from a new working group formed around these topics. Again, interrupt at any point you like. To start off: there is the security repository in the Galaxy project that deals with technical issues as they emerge at the implementation level. But there is often a lot of additional security around the technical implementation, at the procedural and policy level, that can help secure a Galaxy server: things like who has access to a given server, what data is loaded, and what logs a process keeps so that a potential incident can be reconstructed. That is the focus here; it's not about the technical implementation of improving security. The motivators have emerged over the past couple of years through active projects that a number of Galaxy members have, one being the AnVIL project and the other being Galaxy for cancer. The use cases involve working with human genetic data, meaning protected data sets that public servers are not really capable of addressing, and secondly large data sets that are very difficult to move onto a public server. The implication, and what I'd like to present and talk about here, is this: after these projects have hit their stride and a lot of the implementation work is done, what can Galaxy actually do, and what can the community expand this into with these implementations? The part that has been done on the policy side of things has mostly happened over the last six months. In July of last year we did a security audit through an external firm that took a look at the CloudMan application and the Galaxy Kubernetes deployment. They provided a report at the implementation level.
It tells us what can be improved to make this a more robust implementation; there is still some outstanding work left on that, though a couple of things have been addressed. The reason we chose the Kubernetes deployment of Galaxy is that it represents the most unified form of deploying Galaxy, with the least dependency on local infrastructure. A handmade local implementation is, let's say, 99% custom; an Ansible-based one is some lesser percentage specific to a location; Kubernetes raises that to the next level, where more and more of the deployment is codified. So this seemed like a good target for improving the implementation-level capabilities to adhere to better security standards. Following that one-off audit, we engaged with Trusted CI, through the Science Gateways Community Institute, to lay the groundwork for improving the security posture of Galaxy. This is a combination of a different look at the implementation and recommendations for how a given site can use that implementation at their location to implement the remainder of the controls, and there's a report at this URL that summarizes what happened during those six months. What's currently ongoing, as part of this talk today, is dissemination with the community, so everybody is aware this took place and what the outcomes of the engagement have been. Looking forward, this is certainly far from done. As part of the AnVIL project, we've heard stories for the last year or more from the Dockstore team: they are doing this, but doing it officially, and it's been a year and it'll be another six months at the very least. The team at the Broad reportedly spent at least two years doing the same. So if we want to do this as a project, it is a multi-year engagement we would need to commit to.
For more detail, this is a short summary of a presentation I did for the Galaxy admin training; there's an hour-long video at that link that dives deeper into the outcomes of this collaboration and engagement. In short, what you can expect from that video, if you choose to watch it, is a look at how Galaxy can become more aligned with some of these compliance certifications. The way this is structured, in the United States at least, is that the National Institute of Standards and Technology (NIST) has a catalog, SP 800-53, that lists a number of controls that enhance the security posture of a given deployment of a given software service. As a project you implement those controls, or explain and document why not; a lot of this process comes down to documentation more so than implementation, although there are of course implementation-level implications. So you have a process where you categorize everything that makes up the service, select which of these controls you want to implement, implement them, assess the results, and authorize. Authorization is where official compliance comes into the picture: a certified assessment of the controls you have implemented, performed by an external auditor; the process takes about six months and they work with you. At the end you may get something like federal (FedRAMP) compliance, at low, moderate, or high levels depending on how many controls you've implemented from the 800-53 catalog. From these security foundations we can also talk about HIPAA, as a compliance requirement for private, protected data sets, which again comes out of implementing these controls in a specific deployment of the service. So again, this is a multi-year process, assuming we want to undertake it.
But there are certainly lessons learned from doing this that can be applied regardless of whether we trend toward a compliance-based Galaxy installation or not. A very practical outcome of the engagement with Trusted CI was the Galaxy System Security Plan document, or SSP, which lists a set of controls and recommendations on how to improve the security posture of a Galaxy installation. It includes a catalog of components and a set of deployment architecture diagrams. I've linked the document here at bit.ly/gxySSP, and it's available for anybody to take a look at and see the outcomes. There are a number of controls stemming from that NIST SP 800-53 document; we went with the low classification when choosing which controls to consider. If you scroll down through the document a bit (it's a sizable one), it creates a catalog of all the components that make up a given deployment, in this case Galaxy on Kubernetes, plus architecture diagrams and flow diagrams of how processes interact, all in service of creating recommendations; I can find some now. A little lower are the controls recommended by Trusted CI. For example, the access control policies and procedures are a site-dependent solution, specifying who has SSH access, who has Galaxy admin access, and so on; many of the controls are in fact site-dependent. However, some were classified as things that can be implemented at the deployment level. I won't go through all of these, but that was the outcome. At this point some of them remain undone; they are highlighted, I think, lower in the document as things that should be improved at the deployment level.
Okay. That, unless there are questions, wraps up the first agenda item of the talk: how to provide a more secure Galaxy installation and what it would take for Galaxy as a whole to trend toward a compliance-based installation. So I'll shift to the more specific outcomes of the Custos project. Custos has three aims. One is management of user accounts and identities through an external service. Two is management of secrets needed by science gateways; secrets can include tokens, SSH keys, and cloud access keys. The third aim is controlled access to digital objects across instances of science gateways. Let me put these aims in the context of Galaxy and say what they really mean. The first one, external authentication, means we can delegate user login: rather than having local usernames and passwords stored in the Galaxy server, users can log in with existing credentials, whether social identities such as Google or institutional identities such as their employing institution, and this is built natively into Galaxy so it provides a good user experience without having to implement all of this in Galaxy itself. The second is storing secrets: Amazon, Google, or Jetstream keys, for example, or SSH keys to access a cloud, so these can be accessed without being stored in a local database. The third is group management: giving users the ability to self-organize into groups. If you have a team you're working on a paper with, or a lab, and you want to share a given history with five people, you don't have to select five people individually; you can simply say "I want to share this with my Hopkins teammates" and everyone who has been added to the Hopkins teammates group is selected. So, before we go into the details, here's the high level.
The Custos ecosystem is depicted here. The idea is that before too long there will be a hosted Custos service, available for the world to use. This is currently still in a testing phase, so not actually available at that URL yet, but it's coming. The software that runs it is derived from the Apache Airavata Custos repository on GitHub; it reuses Keycloak for identity and user management and Vault for secrets management, and it's going to be proposed as an Apache project as it matures a little. The service itself is deployed via a Helm chart running on CloudMan, one of our tools, and it's hosted at the Indiana University data center, in the enterprise portion of the data center that is intended to run services such as payroll: things that have high quality-of-service assurances from the data center operators. For the authentication itself, which as I said is externalized to identity providers that already exist, it's integrated with CILogon. On top of that we have science gateways, Galaxy and Airavata-based gateways being the flagship ones. We implement wedges in these dedicated gateways that allow simple integration with Custos for a variety of deployments, but it's not limited to these projects: other science gateways can integrate with Custos either in a more lightweight fashion or in a more integrated one, as in the case of Galaxy. For example, authentication can flow via CILogon's UI, or it can be embedded into the gateway's UI, as has been done in Galaxy, to provide a nicer experience for users. Then at usegalaxy.org, sorry, at usecustos.org, as I said, there will be a portal where people can interact with Custos. This is predominantly intended for admins as a one-off activity, not something you go to regularly. As I said, however, usecustos.org is still not available; it's still in development.
So there's a test server available at testdrive.usecustos.org where you can log in and see what the portal looks like at the moment. The actions envisioned for this portal fall into three categories. One is the ability to register new client applications; that's the only feature currently available in this test period. What that gives you is the ability to register a new application, meaning you get the secret and the token that identify this application with the identity providers brokered through Custos. This action is to be performed once by an admin: you get those secrets, put them into Galaxy's config files, and from there on all users gain access to the enabled capability of logging in with external credentials. The intention is for this to be a self-serve process, but because of restrictions from the InCommon and Trusted CI groups, there has to be a manual approval process before you actually get the token. The more future-facing components of this portal will be the ability to manage secrets. Natively, as I'll show in the context of Galaxy's envisioned plans, this should be linked from the science gateway itself, but as a user you can still go in and see an aggregate view of all the secrets you might have stored across different science gateways. This will be centrally managed for multiple apps, so in case we have multiple Galaxy servers or different apps, it would in principle be possible for a user to access those secrets even though they were stored once. And lastly, the idea is that we offload sensitive content from being stored in the application itself. The third category is the ability to manage users and groups.
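As a sketch of what "put them into the config files" could look like: assuming the portal's one-time registration hands an admin a client ID and secret, a deployment step might render those into an OIDC backend configuration for Galaxy. The element names below loosely mirror Galaxy's oidc_backends_config.xml convention, but the exact schema, provider name, and callback path are assumptions to check against the Galaxy docs.

```python
# Sketch: render an OIDC backend config fragment for Galaxy from the
# client credentials obtained by registering an application in the
# Custos portal. Schema and names are illustrative, not authoritative.
import xml.etree.ElementTree as ET

def render_oidc_backend(client_id: str, client_secret: str, redirect_uri: str) -> str:
    root = ET.Element("OIDC")
    provider = ET.SubElement(root, "provider", name="custos")  # provider name assumed
    ET.SubElement(provider, "client_id").text = client_id
    ET.SubElement(provider, "client_secret").text = client_secret
    ET.SubElement(provider, "redirect_uri").text = redirect_uri
    return ET.tostring(root, encoding="unicode")

xml_text = render_oidc_backend(
    "my-registered-client-id",      # issued once via the portal's admin approval flow
    "my-registered-client-secret",
    "https://galaxy.example.org/authnz/custos/callback",  # hypothetical callback URL
)
```

After this one-time step, every user of the server can log in with external credentials without any per-user setup, which is the self-serve property described above.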
So again, as I mentioned: if you want to create a group, such as a lab or a classroom, and treat it as a single entity, capabilities for doing that will be added to the portal in due time, along with the ability to share secrets; I'll tie this back to Galaxy shortly. Now let me talk about what has happened over the last year or so, namely the features that have been enabled in Galaxy and where somebody can start consuming them. These were, of course, built into the core framework of Galaxy. The idea is that they provide a more native experience for users: instead of being redirected to CILogon, for example, to log in with an external identity, you can choose the external identity provider right from the Galaxy UI; I'll show that. Secondly, it can be deployed with only configuration changes, that is, fewer changes to the gateway itself; I'll show an example of that as well. From the project management perspective, we may get better insight into users' affiliations if they choose to sign in with their institution, as opposed to a social identity such as Google, where we don't know where users are actually coming from; that's something useful for the funding agencies. The benefits I feel we gain by integrating with Custos are, first, that we don't have to store local usernames and passwords. Personally, I don't use sites that require me to log in with a username and password; I want my Google login and to never have to remember another password. With this, we can do that in Galaxy. Secondly, we can link multiple identities to the same Galaxy account, whether it's your Google account or your Hopkins account; you can choose to log in with either one. This may have implications down the line as those services gain additional capabilities linked to one institution: for example, if Hopkins were to expand the capacity of one of the Galaxy public servers for its users.
You would be able to do that by having that linkage available. And then, and this is something I have a question about for everybody here, you might be able to carry the same identity across the usegalaxy.* servers, which I think, from a user's perspective, would be a very nice step in the right direction. Secrets management, as I said, is the ability to store sensitive content in Custos and retrieve it from an application, from Galaxy. Things like dbGaP or SRA tokens, or cloud store usernames and passwords: instead of disabling those tools on a server, or having to store the sensitive data in the Galaxy database, we fetch it from a remote location. Similarly, the cloud credentials from CloudLaunch could be fetched instead of being stored locally. Ultimately, and this is the point I made in the first part of the presentation, we can improve the security posture of usegalaxy servers by not having to store a lot of the sensitive content and login information, as we presumably migrate toward a more compliance-based installation. So, external authentication. This is a demo of what it looks like in Galaxy. This is a local installation, so I can log in here with my local username and password, as is available in Galaxy at the moment. But I don't have any external identities linked yet, so if I log out and try to log in with the chooser: this is what I was saying about it being integrated with Galaxy, providing a native experience where you can choose from, I think, over 3000 identity providers. If I choose Johns Hopkins from the list and click sign in, I'm presented with the login at the institution, of course, and then I'm warned here by Galaxy that I may be creating a duplicate account.
We had a bit of discussion over the last week, so we'll modify this; the key is that users need to differentiate between creating a new account and linking an existing username-and-password account with their external identity. In this case, because I already have an account, I don't want to create a new one, so I go back to the login and log in with my username and password. From there I can link that external identity: I select a login, go through the OAuth flow, and now I continue to be logged in with my username-and-password identity. But if I log out, I can go back and, instead of logging in with the username and password, log in through the IdP. It just logs me in again, maintaining the original username that was linked. So that's the external authentication part. It's available; we haven't rolled it out to Main yet because of some recent suggestions we've received, but that's hopefully coming. The other thing this allows is filtering the login institutions. Say a given Galaxy deployment does not want to allow every institution from that list of 3000 to log in, but wants institution-based logins only. Previously this was enabled through the email domain allowlist file, where you had to list domains and subdomains; for Hopkins there were at least three subdomains that needed to be explicitly listed. Because of the way the InCommon IdPs work, there is a unique ID that corresponds to a given IdP, so we can now specify a list of IdPs rather than enumerating all of the subdomains that might exist. The full set of identifiers is available for building the list of which identity providers should be included, and at that point the list of 3000 is filtered to whatever is in the list; in this case it would be just Hopkins.
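The filtering step described above is simple to picture in code: each IdP carries a unique entity ID, and the deployment's allowlist names the IDs it wants to expose. The data shapes and entity ID strings below are illustrative, not Galaxy's actual file format.

```python
# Sketch: filter the full CILogon identity-provider list (~3000 entries)
# down to the providers a deployment wants to offer in its login chooser.
# Entity IDs and record shape are illustrative examples.
ALL_IDPS = [
    {"entity_id": "urn:mace:incommon:johnshopkins.edu", "name": "Johns Hopkins"},
    {"entity_id": "urn:mace:incommon:iu.edu", "name": "Indiana University"},
    {"entity_id": "http://google.com/accounts/o8/id", "name": "Google"},
]

def filter_idps(idps, allowlist):
    """Keep only providers whose unique entity ID appears in the allowlist."""
    allowed = set(allowlist)
    return [idp for idp in idps if idp["entity_id"] in allowed]

# A Hopkins-only deployment lists one entity ID instead of every subdomain:
visible = filter_idps(ALL_IDPS, ["urn:mace:incommon:johnshopkins.edu"])
```

The advantage over the email-domain allowlist is exactly what the talk notes: one stable identifier per institution, rather than chasing every subdomain an institution's email addresses might use.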
So we get to the last part of the presentation, which is features planned, or at least discussed a little, for the remaining period of the Custos project, and hopefully beyond, if people find interest in this. Of the three services Custos provides, one is identity management; we've seen the implementation of that. The second is the secrets service. It allows us to store tokens, passwords, access keys, and SSH keys that can then be accessed by Galaxy. For example, a Dropbox token, instead of being stored locally per user and accessed through the upload data plugin, could be stored in Custos and retrieved from there. What would this look like at a high level? At the moment the Dropbox token is stored in the local database and retrieved when data is being uploaded. In this case it would instead be stored via the API and retrieved via the API from the Custos secrets service, which behind the scenes stores it in Vault; and users, if they choose, can manage it through the portal, say if they have multiple secrets and want to do aggregate or bulk additions. Something that might be possible, or at the policy level would be possible, is for a user to store this token once: say I'm on usegalaxy.org and I want to link my Dropbox account to my account; I do so once through the API, and the secret gets stored in Custos. The next time the user goes to one of the other usegalaxy servers, if they use the same identity through an IdP on both servers, they can be recognized as one and the same user on Custos, and so retrieve that same secret from a different server. They could likewise be authenticated from other applications, such as CloudLaunch.
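The "store once, retrieve anywhere" flow just described can be sketched with an in-memory stand-in for the secrets service. In the real system the store and retrieve calls would be Custos REST API requests backed by Vault; the class and method names here are hypothetical.

```python
# Sketch: a mock of the Custos secrets flow. The key idea is that the
# secret is keyed by the user's IdP identity, not by the server that
# stored it, so any server resolving the same identity can fetch it.
class MockSecretService:
    def __init__(self):
        self._vault = {}  # (user_identity, secret_name) -> value; Vault in reality

    def store(self, user_identity: str, name: str, value: str) -> None:
        self._vault[(user_identity, name)] = value

    def retrieve(self, user_identity: str, name: str) -> str:
        return self._vault[(user_identity, name)]

custos = MockSecretService()

# The user links a Dropbox token once, while on usegalaxy.org:
custos.store("user@jhu.edu", "dropbox_token", "tok-123")

# Later, a different usegalaxy server authenticates the same IdP identity
# and fetches the same secret without the user re-entering it:
token_on_other_server = custos.retrieve("user@jhu.edu", "dropbox_token")
```

The design choice worth noting is that the gateway never persists the secret itself; it only holds a reference resolvable through the user's authenticated identity.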
However, for this to work at the Custos/Keycloak implementation level, all of these servers would need to be made part of the same realm, in the Keycloak terminology. It would basically require coordination and agreement across the usegalaxy servers to say that they are different tenants within the same realm and can see each other's users. We would slowly start getting away from each Galaxy server being a completely independent island and start aggregating that information across multiple usegalaxy.* servers. I think this needs to be talked about, but are there any gut reactions about whether this would be a desirable feature from both the users' and the admins' perspective, or not? GDPR maybe becomes a bigger concern in this case for servers that are not in Europe but have to adhere to some of those rules. So anyway, any comments?

Actually, I have a question on the token. We're not supposed to use that token, right? That's a developer setting; normally you would need to do the OAuth flow and get a regular authorization from Dropbox. How do you see Custos fitting into that? Could it help us there? It's much simpler for the user if we implement the OAuth flow.

So, I've done the Dropbox thing and it didn't work for me, but I know it has worked for a lot of people. I agree that the OAuth flow would be simpler.

We'd need to refresh the token, right?

Yeah. I don't have an answer for you; something to look into, I guess. But yeah, that would be better. I don't know the implementation bits, though. Okay, so even if we get away from the Dropbox token, there are other examples of keys that could be stored that don't have an OAuth flow implemented. Plain username and password must be one of the most requested features; for tools that need to perform authenticated actions, that would be really good.
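The refresh concern raised above is the main extra complexity an OAuth flow brings compared with a pasted developer token. A minimal sketch of the usual handling, using generic OAuth2 field names (the threshold, field names, and refresh call are conventions, not a specific Dropbox or Custos API):

```python
# Sketch: keep a usable OAuth2 access token by refreshing shortly before
# it expires. `refresh_fn` stands in for a POST to the provider's token
# endpoint; everything here is a generic OAuth2 pattern, not a real API.
import time

def get_valid_access_token(creds: dict, refresh_fn, skew: int = 60) -> str:
    """Return an access token, refreshing if it expires within `skew` seconds."""
    if creds["expires_at"] - time.time() < skew:
        new = refresh_fn(creds["refresh_token"])
        creds.update(
            access_token=new["access_token"],
            expires_at=time.time() + new["expires_in"],
        )
    return creds["access_token"]

# Usage with a stubbed refresh call (already-expired credentials):
creds = {"access_token": "old", "refresh_token": "r1", "expires_at": 0}
token = get_valid_access_token(
    creds, lambda refresh_token: {"access_token": "new", "expires_in": 3600}
)
```

Storing the refresh token is itself a secret-management problem, which is where a service like Custos could plausibly fit: the gateway asks the secrets service for a valid token rather than handling refresh state locally.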
I don't know what time it is, actually. So the next, and last, aim of the Custos project is the ability to provide group and user management, allowing users to self-organize into groups or teams. Imagine having a classroom or a lab that you can share artifacts with. For a training workshop or a classroom setting, a teacher can provide a sample history, and instead of having to select 30 or 200 students every time they want to share a history, they can instead say: here's a group, and I share with the group. Group leads can then monitor what has happened across a given group; I have a more specific example of that. You can also permit access to secrets: if you have a lab and you, as the PI, have a dbGaP authorization, you can delegate it to your lab members, postdocs, whoever makes up the lab. Instead of giving them the username and password to plug in themselves, you can simply allow them access to the secret as members of the lab group. And lastly, you can imagine a group overview dashboard to see what's going on across multiple people, if you're a PI or someone at an institution who needs to track what's going on within a Galaxy. Here is a potential view of what this might look like in terms of groups in Galaxy: the ability to create these groups as an individual user of Galaxy. You create a group by adding members, and once the group is created you can manage secrets, saying that this group has access to these secrets. From there on, members don't need a view into what the secret is; they simply get access to whatever it unlocks.
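The sharing model described above, naming a group once rather than enumerating members every time, can be sketched in a few lines. The data model is illustrative only and not Galaxy's actual schema.

```python
# Sketch: share a history with a named group instead of selecting each
# member individually. Group membership is resolved at share time here;
# a real implementation might instead resolve it at access-check time,
# so later additions to the group gain access automatically.
groups = {
    "hopkins-teammates": {"alice", "bob", "carol", "dan", "eve"},
}

shared_histories: dict[str, set] = {}  # history_id -> users with access

def share_with_group(history_id: str, group_name: str) -> None:
    shared_histories.setdefault(history_id, set()).update(groups[group_name])

# One action replaces five individual share operations:
share_with_group("hist-42", "hopkins-teammates")
```

The resolve-at-share-time versus resolve-at-access-time distinction matters for the classroom case: with the latter, a student who joins the group after the teacher shares still sees the sample history.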
The second thing goes toward the group view, sort of the inverse of sharing something with a group: being able to consume something from a group. In the way we have the multi-history view, you can imagine that instead of all those histories being your own, they could come from multiple users in an aggregate view. Again, a classroom setting is a great example: you can hand out homework and retrieve feedback that way, instead of having to import a bunch of histories and browse and click around. This one, I admit, is probably a very large amount of work that would certainly require buy-in from a broader group of people than the effort on this project can sustain, but I see a lot of use cases for it, so maybe there's interest. So far, these are the people who have been involved with implementing what's been implemented, planning the agenda for what's coming, and working on the Trusted CI engagement. The last thing I want to talk about is the idea of a new working group that emerged at the Hopkins weekly lab meetings. The idea would be to explore recommendations and solutions for improving the Galaxy server security posture: pushing further on the engagement that took place with Trusted CI, the Galaxy SSP document, and compliance, based on projects like the ITCR, namely Galaxy for cancer, where there are going to be stricter requirements. What can we do at the deployment level to make sure this is adequately secure? Secondly, coordinate development of OAuth/OIDC authentication approaches. I know we have two parallel implementations, one based on python-social-auth (PSA) and this other one based on Keycloak. I have a very strong opinion that we should implement less in Galaxy and delegate more: instead of implementing this ourselves, wherever we can, we should delegate to an external provider of services.
Keycloak being an example; but at least coordinating that across the project would be nice. And lastly, working toward adding support for these groups and, more generally, role-based access control (RBAC) in Galaxy over the years to come, so that some of these group management capabilities become possible. I've added a link down here to the proposed working group to tackle some of these, and a periodic meeting time that would keep those interested working toward the same goal. So that's what I have, in case there's feedback, comments, or ideas.

Well, I guess the major thing here is that we'd need to implement the role-based access control in a way that ties into our database. Whatever we do, I don't think we can just use an external service right away; we'd always have to have an adapter. I don't know, maybe the roles that we already have can do this, so that the external provider would just do the linking of the roles and groups we have.

It could, right? We talked about this, I don't know, two or three months ago: there could be a pluggable implementation, with a local plugin that stores everything in the database, maybe with limited capability or just a fallback, and then you plug in an external provider that provides the full implementation.

Yeah, this is an architectural question. I think it will be interesting to see what we can do, because not having a real RBAC setup, or having something that is really complicated, also limits us in implementing GraphQL, for example, where typically you authorize against RBAC. That is all much, much simpler than what Galaxy does: you typically just check, can the user do this action? For us it's a whole hierarchy that is encoded in the relationships in the database.
So it sounds super challenging, but it's definitely worth thinking about.

To ask the question: Marius, what you just described, the existing database with a bunch of fields and columns in the actual tables, which have owners or group IDs or whatever, for our current system: is that what you're describing?

Yeah, it's a mixed thing. Everything has an owner, mostly, I think, and then we have roles.

It's welded to the data tables? They aren't separate tables that are joined or something, so it's not going to be easy to pull them out?

They are separate tables; we have this hybrid thing.

Yeah, it's pretty intense. It's a very custom implementation of RBAC, I guess, is what Marius is getting at. It'd be a lot easier if it were a bunch of joined tables that we could swap out. So the adapter idea might make a lot of sense: we could build an interface such that we could point it at ours or at an external one. Trying to sync stuff is just going to be a mess.

Yeah, I was also thinking that we should have some adapters, so we can use our own database, or some external one if the user is a kind of external user. It would be interesting to see if anyone is already working on that in other projects. I think someone in Norway was doing some external role management connected to Galaxy; maybe it doesn't have anything to do with this.

And do you have any ideas about how we could balance this? I love the idea of the new working group, and I love the idea of it having tasks. But it seems like we're at capacity, and now we'd add a new group and a bunch of new tasks. I don't know; frankly, this is a project I'd love to work on, it sounds really cool and really valuable.
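The adapter idea floated in this exchange can be sketched as a common interface with two backends: one over Galaxy's own tables, one calling out to an external provider such as Keycloak. All class and method names are hypothetical, and the "tables" are mocked as a dict.

```python
# Sketch: a pluggable group/role provider interface, as discussed above.
# A Galaxy-internal backend and an external (Keycloak/Custos-style)
# backend satisfy the same contract, so callers don't care which is used.
from abc import ABC, abstractmethod

class GroupProvider(ABC):
    @abstractmethod
    def groups_for(self, user_id: str) -> set:
        """Return the set of group names the user belongs to."""

class LocalGroupProvider(GroupProvider):
    """Backed by the existing Galaxy association tables (mocked as a dict)."""
    def __init__(self, table: dict):
        self._table = table

    def groups_for(self, user_id: str) -> set:
        return set(self._table.get(user_id, ()))

class ExternalGroupProvider(GroupProvider):
    """Would call out to an external identity service; stubbed for illustration."""
    def __init__(self, client):
        self._client = client

    def groups_for(self, user_id: str) -> set:
        return set(self._client.fetch_groups(user_id))

# A deployment picks one backend; the rest of the code sees only the interface.
provider: GroupProvider = LocalGroupProvider({"u1": ["lab-a"]})
```

This matches the "point it at ours or an external one" framing: no syncing between stores, just a swap of which backend answers membership queries.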
Yeah, resource capacity is something we're certainly short on, so I don't really have an answer for that. The hope is that we continue to grow, and if this is important and of interest, maybe instead of three or five things on the to-do lists of a couple of other groups, those drop to two or four things and a couple of things get added to this group. It's a balancing act, absolutely. If we come up with more people, that would be the answer; we're making strides there, but it's not growing as fast as the ideas are.

With Custos, there are people at Indiana who are working on a lot of this, so it can certainly be done in coordination with them; they're very eager, and they would be thrilled to see more of it adopted. So there's potential for actual effort there, and certainly for guidance and for compromises on what capabilities are needed as the implementation happens on their side as well. Maybe that's a helping step in the right direction. Okay, should we move on to the next topic?

Maybe just a question first: do you know how Custos relates to the GA4GH Passport and the Life Science AAI and those projects? It doesn't; these are, unfortunately, parallel efforts. The Passport is funded by the NIH through the RAS mechanism, while this is funded by the NSF. Jim Basney, who is the most qualified person here, became aware of the Passport about four months ago or so, and it does look interesting. He has what I'd call a very similar, if not parallel, implementation called SciTokens that functionally enables a very similar capability, but unfortunately those efforts were not known to each other until fairly recently.
Alex, do you want to take over and talk a little bit about Interactive Tools on Kubernetes?

Sure. I mostly just wanted to bring up that I got them to work on Kubernetes with a Service and an Ingress, without the reverse proxy. The only change needed in the managers was removing the encoded job ID, either replacing it with the plain job ID or removing it altogether. I thought that was okay because the token is the UUID that gets generated for each Interactive Tool, so the URL would still be unique, but I wanted to make sure that assumption is true. I don't know if there are objections to doing it without the reverse proxy, or if somebody thinks it's better to keep the reverse proxy, and how that would work behind the Leonardo proxy. At least we can start the conversation now; we can also break off at a later time with whoever knows Interactive Tools and cares about them.

I think that sounds awesome; I don't think we're tied to the proxy, so that's cool. Alex, we were showing that HiGlass works. Is that somehow transferable to Main, or any other instances? As long as there is a Kubernetes cluster that is exposed, it works anywhere: it works on GKE, and it works on the clusters that we deploy. I know there are some limitations with the cluster that Main is using. In theory, I could boot up a cluster on our Jetstream allocation for Main and we could use that, and make it work relatively fast, but whether the cluster Main is currently using has limitations is a question for Nate. He added a comment in the chat saying that it should work. Nate, do you have any thoughts about removing the reverse proxy? Nate has no microphone.
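The setup described above, a per-entry-point Service and Ingress created at job launch, might look roughly like the following sketch, which builds the two objects as dicts in the shapes the Kubernetes API expects. The names, labels, and host pattern are illustrative assumptions, not the actual implementation:

```python
def entry_point_manifests(job_id: str, host: str, container_port: int = 80):
    """Sketch of a per-entry-point Service and Ingress. A real runner would
    template these and submit them via the Kubernetes API; all names and
    label keys here are illustrative only."""
    name = f"interactivetool-{job_id}"
    selector = {"app.kubernetes.io/instance": name}

    service = {
        "apiVersion": "v1",
        "kind": "Service",
        "metadata": {"name": name},
        "spec": {
            "selector": selector,  # matches the pod running the tool
            "ports": [{"port": 80, "targetPort": container_port}],
        },
    }
    ingress = {
        "apiVersion": "networking.k8s.io/v1",
        "kind": "Ingress",
        "metadata": {"name": name},
        "spec": {
            "rules": [{
                "host": host,  # unique subdomain under the wildcard DNS record
                "http": {"paths": [{
                    "path": "/",
                    "pathType": "Prefix",
                    "backend": {"service": {"name": name,
                                            "port": {"number": 80}}},
                }]},
            }],
        },
    }
    return service, ingress
```

With a wildcard DNS record pointing at the Ingress controller, each launched tool becomes reachable at its unique `host` with no Galaxy-side proxy in the path.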
Okay, well, I would be careful about removing that other uniquely identifying part, from a security perspective. I don't have a complaint against removing the proxy, that's fine, but the token alone is not guaranteed to be unique; it's the combination of the encoded item ID and the token that gives you the security.

Okay, would it be possible to encode the job ID differently? Right now it was done using trans.security, so it needed to happen somewhere trans exists, which comes from the web handler, I'm guessing, but the way I did it, everything is happening in the runner. If we can salt it or encode it more generically without using trans.security, then I can put it back. We should be able to; you have access to app.security in the runner. Okay, we can fix that; I can put it back with that then.

So if anyone would like to meet for a longer conversation: I'm meeting next Friday with Rob to talk about how this will work with the Leonardo proxy and what they need from us and what we need from them. Ideally before that we'd have even a brief conversation with the people who designed Interactive Tools originally, just to make sure that I'm not doing everything wrong and it only appears to be working. So anytime before next Friday, or anytime next week really. Okay, not tomorrow. Yeah, not tomorrow; next Friday.

I got out my laptop, so now I have a microphone; I wasn't expecting to talk. What was your question about removing the proxy? Yeah, can we delegate to Kubernetes to handle the proxying into the Interactive Tool? Well, is this about the integrated proxy when Galaxy is running in Kubernetes itself, because then you certainly don't need the two-level proxy, or was it using Ingress as the proxy? Basically, the way I did it, I'm assuming there's an Ingress controller in the cluster, because for the clusters that we use, that's how we're exposing Galaxy. Right.
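The uniqueness point above can be made concrete. Below is a sketch of composing the hostname from both a random token and a deterministic salted encoding of the job ID; the hostname pattern, secret, and helper names are assumptions standing in for `trans.security` / `app.security`, not Galaxy's exact scheme:

```python
import hashlib
import hmac
import uuid

SECRET = b"server-secret"  # stand-in for a server-side id secret; illustrative

def encode_job_id(job_id: int) -> str:
    """Generic salted, deterministic encoding of the job ID (a stand-in for
    an app.security-style encode_id usable from the runner)."""
    return hmac.new(SECRET, str(job_id).encode(), hashlib.sha256).hexdigest()[:16]

def entry_point_host(job_id: int, domain: str = "interactivetool.example.org") -> str:
    """Unique per-entry-point hostname from *both* parts: the random token
    is unpredictable but not guaranteed collision-free, while the encoded
    job ID is unique by construction; together they give both properties."""
    token = uuid.uuid4().hex
    return f"{token}-{encode_job_id(job_id)}.{domain}"
```

Because the encoding depends only on the job ID and a server secret, it can run in the job runner without the web handler's `trans` object, which is the fix agreed on above.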
So then I removed everything else: I put the configured entry points in the job runner, get the token from the database, form the URL, and when the job is launched, the job runner also launches a Service and an Ingress that exposes that Interactive Tool at the unique URL. I don't know what the technical terms are for this setup versus a normal proxy.

So instead of running the proxy and having the traffic go through it, you launch a service in Kubernetes that then does the proxying. Yep. I don't think there's any reason why that would not be a good solution for Kubernetes.

Do you think that's feasible for Main as well? No, because Main doesn't run inside Kubernetes. Well, but the Interactive Tools are running on Kubernetes. Yes. So couldn't you just expose an Ingress controller there and do it the same way? You're going to have to map the host name, but maybe you could. The way it's done with the DNS is just a wildcard pointing at the Ingress controller: one wildcard DNS record toward the IP of the Ingress controller, and the subdomains just work. That could work then, yes, I think. I'll take a look at the implementation and see if I can think of any reason why it wouldn't.

All right, well, awesome everyone. Thanks for the good meeting. Alex, should we schedule that meeting for next week right now? I mean, if there's a time that everybody has free, we can; if not, I can send out a WhenIsGood or something. I guess just raise your hand, say something in the chat, or Slack me if you want to be in that meeting. Or I can just send it on the public channel and everybody can see it. All right, I'm going to stop recording. Thanks all.