So today, securing user-to-server access in Kubernetes. Hi, I'm Maisem, I'm an engineer at Tailscale. And I'm Maya, the head of product at Tailscale. Today we're here to talk about a specific problem that you might have: allowing users to access internal services that you have running in a Kubernetes cluster. So we're talking about the network security of connecting users to services in Kubernetes.

We'll cover first what kinds of traffic you might need to protect in Kubernetes and why you might have that kind of traffic going to your cluster. Then we'll look specifically at users accessing internal services in your clusters; that's the focus of this talk. We'll look at the security properties you likely want for protecting access to these internal services, go over the various options that exist in and out of Kubernetes for protecting this traffic, and then compare those options, analyzing which ones give you the security properties you actually need. This talk is meant for network administrators and security teams using Kubernetes, with some understanding of how Kubernetes works.

So let's jump in. There are many kinds of traffic that might be going to your Kubernetes cluster. First, there's the traffic that you need to get Kubernetes working: the traffic between the components of Kubernetes, like from the kube-apiserver in the control plane to the kubelet on your worker nodes, or pod-to-pod communication. Hilariously, I gave a talk on exactly this topic four years ago now at KubeCon, and I've forgotten all of it, but if you want, you can watch the talk. The traffic that's managed by Kubernetes mostly has authentication, integrity, and encryption. Again, watch that talk if you're interested.

The second kind of traffic is traffic from one service to another, such as from a front-end application to a database. This can be intra-cluster or inter-cluster. It might be coming from another Kubernetes cluster if the two applications are in different trust domains, hosted on different platforms, or just managed by different teams, or if you're using a microservice architecture and the full app runs across multiple clusters. This service-to-service traffic is often controlled, monitored, and secured using a service mesh.

The third kind of traffic is traffic from a user to the Kubernetes control plane, for when a member of your DevOps or infra team is accessing the control plane to manage cluster configuration. You can hit the IP of the kube-apiserver directly and authenticate with kubectl, but many folks secure this connection with something like a bastion.

And lastly, traffic from a user to a service that you're running on Kubernetes. If this is a web app, this could be an end user of your application who's trying to buy a dog toy on your website; if it's an internal app, it's an employee of your company trying to run, say, a check-processing job in a bank branch. If it's a public app, you might use a load balancer to manage direct access to that service.

Thinking about those four kinds of traffic and the typical solutions we just mentioned: security folks worry a lot about how to secure access from a user to the Kubernetes control plane, with good reason, mostly because accessing the control plane would give you full control over the cluster.
However, we're not here to talk about that, because there's so much guidance on that topic already. What we are talking about is that last item: a user accessing internal applications that you're running on Kubernetes, because there's a lot less guidance on how to do that, and how to do it well. An internal app that you're running on your Kubernetes cluster is different. Depending on your circumstances, some of the other options listed here, like a service mesh, a bastion, or a load balancer, might apply to how you protect traffic from users to internal services.

So you're asking, do I really need guidance on this topic? Probably. The Kubernetes docs unhelpfully tell you that your service may or may not be public, and also may or may not be authenticated; that's straight out of the docs. There are so many potential options here, and it's on you to implement a solution.

All right, so what are some examples of these internal apps that users are trying to access? There are many internal services that might be in your cluster. When you're running a service in Kubernetes, there are a lot of other things that you run alongside your actual service that you need to maintain and be able to access: anything in kube-system, plus databases, monitoring, logging and tracing, and other tools. Those should only be accessible to your infra team. If you run the Kubernetes dashboard, the web UI, that's another monitoring service that's only meant for your infra team. You might also have other internal applications for your organization that you manage and host on Kubernetes, just like the web apps for your customers: maybe an internal metrics dashboard, or a wiki of some kind for looking up the employees in your company.

Do you really need to secure access to these applications? Well, probably. They might include sensitive information that could be used to leak your organization's data, like customer or financial records, and at the very least, they probably shouldn't be public. And these services shouldn't necessarily be accessible by everyone at your organization, so you're actually going to need some way of authenticating or restricting access even internally.

So let's look at some criteria, and these are the criteria we came up with. When you're hosting an internal application, there are several security properties that you're going to want the service to have. First, visibility. This is hopefully self-evident, but if you're hosting an internal application, it should actually be internal: you want something that's not publicly accessible and is restricted to a set of individuals who are part of your organization. Authentication: this ensures that you know and verify the source and destination of the traffic. That is, you verify the user and the service they're connecting to. Authorization: stricter than just visibility, authorization ensures not only that your service is internal, but that only the right individuals in your organization have access to it. You could also build authorization into each of your applications rather than into a central access solution, but that's putting a lot of work on each of your application teams. Encryption: by encrypting your traffic, you're preventing just anyone from reading it. Encryption ensures that only the authorized parties can actually read the traffic, so that any unauthorized party that intercepts it can't read its contents.
Load balancing: if you're running multiple instances of a service, you want to load balance between them so that you're not overwhelming the service on any one node. This falls into the availability part of security. Traffic filtering: your service might not allow all traffic to access it. You can filter traffic for unusually high request rates, say in response to a DDoS attack, or to rate limit one user who might be degrading another user's ability to use the service. Internal applications might also filter traffic based on other criteria, like location. Maybe you don't have any employees in Australia, and so it's weird to see traffic coming from there. And lastly, auditability: you want to monitor and log information about the traffic going to your service so that you can ensure it's acting as expected. You also need these logs in case you need to review access as part of a security incident. There are other criteria we haven't mentioned here that aren't purely security-focused but that you're going to want for users connecting to internal services, namely latency and availability. But like I said, we're going to focus on the security properties. So let's keep these criteria in mind when evaluating solutions for accessing internal apps.

All right, so what options do we have? The first couple of options are Kubernetes constructs: what Kubernetes provides out of the box with ClusterIP, and then on top of ClusterIP, a Kubernetes LoadBalancer service, which allows you to balance traffic hitting a service across multiple instances of that service, where the traffic can originate from outside or inside the cluster; Kubernetes Ingress, which allows you to route web traffic from outside the cluster to services running inside it; and Kubernetes NetworkPolicy, which allows you to restrict access to a given service.

The next set of options we're going to consider are some of those we mentioned earlier that are typically used for other kinds of traffic in Kubernetes: a service mesh, which routes traffic between two services, which could be in the same or in different clusters, and a bastion, which forces traffic to a sensitive resource through an audited and controlled entry point, like restricting access to the Kubernetes control plane.

And lastly, we're going to throw into the mix some more generic options for protecting traffic between users and services at layer three, where the underlying network is unencrypted or untrusted. The first is IPsec, which is a protocol for encrypting communications between endpoints, and then WireGuard, which is a more modern tunneling protocol for end-to-end encrypted connections. Those can be used standalone or as part of VPN-type solutions.

So I gave you two lists: a list of all the criteria and a list of all the options. You know what we're going to do: we're going to compare them. It's a very exciting talk. All right. Thank you, Maya. So let's dig in. As Maya mentioned, the basic building block for services on Kubernetes is a ClusterIP service, which provides a stable IP and DNS name for accessing pods within a cluster. The virtual IP is shared among the pods: all traffic routed to that IP will be routed by the cluster to any of the replicas. If pods are scaled up or scaled down, they get added to or removed from the service destinations, and the traffic keeps working fine.
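To make that concrete, here's a minimal sketch of a ClusterIP service manifest. The app name and port are placeholders borrowed from the Glances demo later in this talk:

```yaml
# Minimal ClusterIP service sketch; names and ports are placeholders.
# Traffic sent to the service's stable virtual IP is spread across
# whichever pod replicas currently match the selector.
apiVersion: v1
kind: Service
metadata:
  name: glances
spec:
  type: ClusterIP          # the default; reachable only from inside the cluster
  selector:
    app: glances           # routes to any ready pod carrying this label
  ports:
    - port: 5000           # stable port on the virtual IP
      targetPort: 5000     # container port on each pod
```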
You shouldn't see hiccups at that point. However, the ClusterIP service is internal to the cluster only. It is not something you can hit from outside the cluster without some additional steps. The other thing is that ClusterIP does not provide any encryption. All it really does is deliver the packets your application would get normally, just from different sources: different parts of the cluster, different nodes, different pods. It also doesn't do any authentication or authorization out of the box; you need to add authentication and authorization at your application layer.

Unlike ClusterIP services, LoadBalancer services are publicly visible. These services basically take a ClusterIP service, which is pointing at multiple replicas of your pods, and give it a public IP. The problem is that the IP is public, which means anyone on the internet can reach it. So it puts even more on the application layer to add authentication and authorization and to do traffic filtering. A lot of SaaS products actually do something similar, not necessarily in Kubernetes: they have IP allowlisting, so you can only access the service from some well-trusted IPs.

The third building block is Kubernetes Ingress, which is a collection of routing rules that specify how traffic is routed to services within a cluster. Imagine you have a web page running your toy app, or your internal banking system, or a CI system. You could expose all of those as different services with different load balancers, and they'd all get different IPs, but instead you can use Kubernetes Ingress, which gives you a way to use a single public IP and route multiple domains behind it. It also allows you to route traffic based on paths, not just host names. Some ingress controllers like NGINX and Traefik provide authentication and authorization using OAuth and JWTs. And the really nice thing they provide is TLS encryption: any traffic hitting that service will automatically get an HTTPS cert, so no one on the public internet can intercept your traffic and see what's going on. No one can man-in-the-middle it.

The fourth building block is Kubernetes NetworkPolicy. For a set of pods, or a namespace, or a set of services, it can restrict what traffic goes in and out. You could say that a namespace can only talk within itself, or that only the front-end pods can talk to the back-end pods and only the back-end pods can talk to the database pods, and restrict access that way. You can use NetworkPolicy in addition to Kubernetes LoadBalancers or Kubernetes Ingress to further restrict who can access a service. But really, all this provides is a way to secure service-to-service communication. It doesn't help much with user-to-service communication, because users are going to be reaching you over the public internet. For them, you'd need either IP allowlisting, like I mentioned earlier, or something fancier: authentication and authorization based on some part of the user's identity.
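As a rough idea of what that looks like, here's a hedged NetworkPolicy sketch for the front-end-to-back-end example; the labels and namespace are hypothetical, and you need a CNI plugin that actually enforces NetworkPolicy:

```yaml
# Hypothetical policy: back-end pods accept traffic only from front-end pods.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-allow-frontend
  namespace: prod
spec:
  podSelector:
    matchLabels:
      app: backend            # the pods this policy protects
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend   # only these pods may connect
      ports:
        - protocol: TCP
          port: 8080
```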
The other really interesting development that happened in this space was the service mesh. Typically in Kubernetes, when pods are communicating with each other, the traffic flowing between them is plain text; it is not encrypted. And when you're sending traffic to someone in your cluster, there's no way to guarantee who that traffic is coming from: you could spoof the traffic, spoof the IPs, do a lot of funky things to get around the restrictions. What service meshes provide is a sidecar proxy that runs next to basically all of your pods. That sidecar proxy intercepts all of the traffic, and when it communicates with other services it forms an mTLS connection, a mutual TLS connection, making sure that the communication between the services is strongly authenticated. When you're talking to a service, you're guaranteed that it really is that service you're talking to, and you're also guaranteed that the service calling you is who it claims to be. That covers meshes.

So the next option we have is a bastion. A bastion host is a server running an application, like a proxy or a load balancer, that serves as the entry point to an internal service. Traditionally, a bastion is the point of entry to your network; it's the hole in your firewall. In a traditional network model, because it was a single point of entry to everything in your network, and everything inside the network wasn't encrypted or authenticated (this was before the zero trust trend), the bastion was particularly strongly protected. Once you got past the bastion, you were just in the network. A bastion is meant to be a single bottleneck for anything that needs to flow into an application, and by forcing all the traffic through a bastion you have a single place to enforce authentication and authorization, do any filtering you want, and log access. So it's your point of entry, yes, but it's also your point of policy enforcement. In practice, a bastion is often just OpenSSH set up on a host: you SSH into the host, and from there you can reach the resource that you're actually trying to access.

In terms of visibility, the bastion has to be publicly accessible, which means that others at minimum know it's there, even if they can't actually access it. You can authenticate users to the host with SSH usernames and passwords, keys, or certs, and use that for authorization decisions; although you might also be running a bastion with a single user that everybody logs in as, and then you don't get per-user authorization. OpenSSH is encrypted, so you get encryption. Bastions don't have any notion of the services that you're connecting to; they just say, you're on the network now, you're in. So they don't try to load balance or do any kind of sophisticated traffic filtering once you're in. And if you're using OpenSSH, it lets you log messages about what the SSH server is doing. There are obviously more sophisticated bastion offerings on the market now, with better logging, simpler managed authorization, and some traffic management like I mentioned, and there are also hosted proxies provided by cloud providers.
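For the OpenSSH flavor of bastion described above, here's a minimal sketch of what the access pattern looks like from the user's side; the host names are hypothetical:

```
# One-off: hop through the bastion to reach an internal host (OpenSSH's -J flag).
ssh -J bastion.example.com alice@db.internal.example.com

# Or persistently, in ~/.ssh/config:
Host *.internal.example.com
    ProxyJump bastion.example.com   # every connection is funneled through the bastion
```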
So if you have something running in a cloud, that might be the better solution for you: having it managed on your behalf. All right, so those are the options we're considering that are used for other kinds of access in Kubernetes. Now let's talk about the generic connection protection options.

IPsec is a layer three protocol for encrypting traffic between two endpoints. It's used for transferring data, or as part of a traditional VPN, to encrypt traffic while it's on an untrusted network. IPsec can encrypt the whole packet, in tunnel mode, or just the payload, in transport mode, which lets you inspect the headers for any more complicated routing you might want to do in your network. To use IPsec, you need the source and destination hosts, the ones you're connecting from and to, to perform a key exchange to establish a tunnel over which traffic is then sent. For users connecting to an internal service, that means in practice that each user's device needs to be able to initiate an IPsec connection, for example by installing a client on each user's device. And that is the most common implementation: not using IPsec in a standalone, hacky way, but as the protocol underneath a VPN. IPsec VPNs are a common way for users to access internal services no matter where they're running, not just on Kubernetes, but they can equally be used to access services running on a Kubernetes cluster.

So if we look at IPsec and IPsec VPNs for this use case: in terms of visibility, IPsec doesn't care where the endpoints are, as long as they're reachable. You can have a service on only private IPs, but you'll need your VPN to have something like NAT traversal to actually make it reachable. IPsec provides authentication and encryption of IP packets. It doesn't provide authorization natively, but if you're using an IPsec VPN, that's probably exactly what the VPN is adding: authorization. IPsec doesn't have any built-in traffic management, no notion of load balancing or traffic filtering. Some IPsec VPNs use concentrators, which are kind of like a bastion: you put all your traffic through there to get access to other stuff on your network, funneling everything through that central point. In that case, a VPN concentrator might allow you to do some load balancing or traffic filtering. And lastly, auditability. Again, since IPsec is just a protocol, you don't get any of this for free, but with an IPsec-based VPN you can expect to get network logs of which users are accessing which services. In some cases you can also introspect the actual traffic, the packets, so you'd know not only metadata about a connection, like Alice is connecting to the HR system, but specifics, like which SSH commands are being run by a specific user on a specific box.

And the last option we're going to consider is WireGuard. If you're not familiar with it, WireGuard is a layer three tunneling protocol that lets two peers privately establish an end-to-end encrypted connection. WireGuard uses public keys rather than public IP addresses to identify peers, so as peers move, connections persist. The only thing you actually need to configure is which peers you want to communicate with.
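To give a sense of how little there is to configure, here's a minimal sketch of a WireGuard config file in the wg-quick format; the keys, addresses, and endpoint are placeholders:

```ini
[Interface]
PrivateKey = <this device's private key>
Address = 10.0.0.2/32                      # this peer's tunnel IP

[Peer]
PublicKey = <the other peer's public key>  # peers are identified by key, not IP
AllowedIPs = 10.0.0.1/32                   # tunnel IPs routed to this peer
Endpoint = vpn.example.com:51820           # may change; the tunnel survives roaming
```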
Like IPsec, you need the source and destination hosts, the ones you're connecting to and from, to perform a key handshake to establish a tunnel, which is then used for the traffic being sent. So again, for users connecting to a service, each user's device will need to have WireGuard installed. However, compared to IPsec, WireGuard is explicitly designed to optimize for security, performance, and ease of use. Because WireGuard is opinionated and uses modern cryptography, there's very little to configure. The only thing you really configure is, like I said, the set of peers you're going to connect to; you're not picking encryption protocols, any of that kind of stuff.

So, looking at WireGuard for connecting to internal services: WireGuard, like IPsec, lets you connect to hosts anywhere as long as they're reachable, and can keep a connection alive even if an IP address changes, as I mentioned. A user could initiate a connection to their database from their laptop at home, bring the laptop to a coffee shop and finish their work, and the connection will continue with no hiccups. You can also connect to private IPs, as with IPsec, if the VPN you're using offers something like NAT traversal. WireGuard has built-in authentication and encryption: it uses the ChaCha20 stream cipher for encryption and Poly1305 for authentication. Authorization is managed based on the configured list of peers, so if a device has a peer's public key, then they can communicate, but there's nothing beyond that built in. Again, as with IPsec, VPNs built on top of WireGuard add authorization; that's what you're using a VPN for. And like IPsec, WireGuard doesn't have any built-in traffic management for load balancing or traffic filtering. Again, if you're using a VPN in that concentrator model, that funnel model, you might be able to use it to do load balancing. And like IPsec, there's a theme here, there's no built-in logging or monitoring for auditability of a WireGuard connection, but you can expect a WireGuard-based VPN to have those kinds of logs.

Right, so let's do something a bit more fun: a demo. Yeah, so since I work at Tailscale, we're going to demo using Tailscale to reach a Kubernetes service internally. Tailscale is a WireGuard-based mesh VPN. Mesh means that traffic doesn't go through a concentrator; connections are directly peer-to-peer, so you get better latency and there's no single point of failure. Because it's based on WireGuard, all of your connections are always end-to-end encrypted, from any device to any other device. So in this demo, we're going to set up a service in Kubernetes. Then we're going to set up something that we're just about to release; we're dropping some new stuff soon. We're going to install the Tailscale operator, and using the operator, we're going to expose the service to our tailnet. Let's see what that does.

Okay. I'm sorry, this is going to be annoying because I have to switch displays. Can you see the same thing now? Okay, cool. So I have this minikube cluster that I created yesterday; it's about 17 hours old. You can see that it has nothing running on it.
There's only the kube-system namespace, which has pods running in it. It's a blank cluster; there's really nothing here. So the first thing I'm going to do is create this service called Glances, and that will get me some pods. We're going to hope and pray that the pods come up quickly, and they did. Okay. So then we're going to expose that service. Let me show you what this is: it's literally just a simple deployment, nothing fancy, just running Glances in web mode. And I can show you the Glances service; again, standard, nothing interesting. I'm going to apply this now, and we can see that, as we were talking about earlier, we have a ClusterIP that we can now hit. Well, if we were in the cluster, we could hit it. If I do this, oops, nothing happens. But what I can do is port-forward. I have a port-forward command somewhere; yeah, there you go. Then I can go to localhost:5000, and I get this fancy thing that works. But obviously this is running through my local machine, and that's not useful.

So what we're going to do now is create the operator. This is the Tailscale operator that we're just releasing. You can see that things are coming up, and as the Tailscale operator starts, it joins my cluster. Sorry, it joins my tailnet. You can see that I have Tailscale running here and the operator is here. It's not that interesting yet; we have some plans for how to make it more interesting. But anyway, back to this. Now I have the Kubernetes operator running. I'm going to diff this and this, to show you that all I'm doing is basically setting a load balancer class. This is minikube running on my laptop, and I'm saying: make it a LoadBalancer instead of a ClusterIP, and use the tailscale load balancer class. I also tell it to use the "demo" hostname. So let's do that. You'll see that the Tailscale operator has spun up a new pod and given me a URL that I can hit. So I'm going to hit this URL on my machine, and this one is not going to work, but this might, hopefully, maybe, if I'm lucky. Oh, it does work. No port forwards, no nothing. I hope it isn't always this slow, but I guess it's the Wi-Fi.

While we wait for this to do stuff, we're going to use a new feature. This is not part of the operator that we're about to drop; this is coming soon, it's in a branch somewhere that I'm working on. It's fresh code, literally hours old. So here's the other diff, the Glances funnel config against this one. The only difference is another annotation that I add. As you can see, it uses the new feature in Tailscale called Funnel, which allows you to have a public IP for this service on my machine, my laptop, sitting here in this booth. All right. Okay, it loaded, yay. Let's see. Yeah, it seems to be working, okay. So now I can go to this. If I do that, it starts to get a certificate. Wait, what? It's doing stuff. It's getting an HTTPS cert and doing DNS; yeah, it sets DNS locally. And now what I can do is disconnect from Tailscale. I'm no longer connected to Tailscale, and if luck is with me in this demo, this should still work. And maybe not. Maybe. We're in limbo. We're hoping that it works.
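For reference, here's roughly what the manifest change in that diff amounts to. This is a sketch based on the demo; the exact annotation names may differ in the released operator:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: glances
  annotations:
    tailscale.com/hostname: demo   # hostname to expose on the tailnet
spec:
  type: LoadBalancer
  loadBalancerClass: tailscale     # hand the service to the Tailscale operator
  selector:
    app: glances
  ports:
    - port: 5000
      targetPort: 5000
```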
Maybe curl will have better luck. Yeah, so as you can see, it did do stuff: it has a public IP, it got the certificate, and I'm sorry that the page isn't loading nicely. But you can all go to this URL right now, on your phones, and it should work. You can hit his laptop. You can hit my laptop; the service is running on this cluster. Obviously you would not expose an internal service like this, but you could use Tailscale to expose a service running basically anywhere in the world, on Kubernetes or not, and access it from anywhere in the world, without doing any port forwarding. I'm on public Wi-Fi at a conference, and it works. I can't kill the service yet, so people can keep hitting it. Yeah, I'm trying not to.

So again, to recap the demo: we spun up a service of LoadBalancer type on Kubernetes, added it to your Tailscale network, which then made it accessible to anybody else on your Tailscale network, with authentication, encryption, all that goodness, just like you'd want for an internal service. And if you wanted to make it publicly accessible, say you were trying to share a link with a partner, or something in CI/CD that you were developing, that type of thing, you can easily get DNS and HTTPS and go from there.

While we figure out how the monitors work, because that's actually harder than what Maisem just did. Apparently so, okay. You also typed really fast. Okay, where was my presentation? I think I lost the slides. Give me a sec. There we go, we have something. Live demos are fun. I love that the live demo was easier than fixing the slides.

All right, so we did a demo; again, thanks for the demo, Maisem. So we've covered all the options, and none of them provides all the properties we were looking for, which is not shocking. Nonetheless, let's look back at the options we considered and how they stack up. And I'm sorry that the text is so tiny; there are too many options and too many criteria.

For the Kubernetes constructs: combining ClusterIP services with a LoadBalancer gets you load balancing, and that's it. It gives you inbound connectivity from outside the cluster to services running inside it. Combining ClusterIP services with Kubernetes Ingress gives you encryption and load balancing: you can route traffic into a cluster with TLS that terminates at the ingress, which may or may not be inside the cluster. With Kubernetes NetworkPolicy, you can restrict which services inside a cluster can communicate with which other services, including restricting a service to only accept traffic from the ingress.

Then, for the set of options that are typically used for other kinds of traffic in Kubernetes: a service mesh does everything, or it can in some cases, depending on what you're trying to do. So why shouldn't you use a service mesh for the user-to-internal-service use case? Well, you can; you would just need to install the service mesh agent on all user machines, which ends up looking a lot like a VPN, right?
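Going back to the ClusterIP-plus-Ingress combination for a second, here's a hedged sketch of an Ingress that terminates TLS and routes by host name; the host, secret, and service names are hypothetical:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: internal-apps
spec:
  tls:
    - hosts:
        - wiki.internal.example.com
      secretName: wiki-tls             # cert and key; TLS terminates at the ingress
  rules:
    - host: wiki.internal.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: wiki             # a ClusterIP service behind the ingress
                port:
                  number: 80
```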
A bastion is a single point of entry to your network. It typically provides authentication, authorization, encryption, and auditability, but it sits on the public web, and it might have some traffic management capabilities if that's something you need.

Then, for the protocols we talked about that are typically used as part of a VPN: IPsec gives you authentication and encryption, and an IPsec-based VPN will typically add authorization and auditability, and may have some traffic management options. Similarly, WireGuard can be used anywhere you're routing traffic, but on its own only gives you authentication and encryption, with authorization based on public keys. A WireGuard-based VPN will typically add authorization and auditability, and might have some traffic filtering. The main reason to favor WireGuard over IPsec in this scenario is simpler configuration, along with better connection persistence as the user moves between IPs. WireGuard-based VPNs with a coordination server, like Tailscale, can also use NAT traversal to let you route traffic to private IPs.

So the option you should choose for your application doesn't depend only on security properties. As I mentioned earlier, you should also think about latency and availability; usability is a huge factor in deciding what to do for your internal users, as is ease of management of whatever solution you pick. But there isn't a clear winner: every solution has its trade-offs. I know it's a very unsatisfying conclusion to this talk, but here we are. Nobody wins, but also nobody loses, which is just how that works out.

All right, if you want to learn more about some of the content we covered today, check out these links. There's community documentation, a link to the operator that Maisem demoed, and a link to these slides. I'll leave this up, and then I'll put up the next slide, which is a QR code for feedback.

Great, a question. Yeah, so Tailscale has its own ACL system, so you can restrict access; you can say Maya can reach my laptop, or I can reach her laptop, but no one else can. And the authentication is based on your identity provider. So it'll be maya@tailscale.com, or whatever her email is, that's able to reach my laptop. Not your browser, your machine: there's a Tailscale client that you need to install on your machine, and you log in to that, which authenticates you using your IdP and credentials.

Not yet, not really. So you can write access controls based on users; groups, including groups from your identity provider; IP addresses; and tags, which are kind of like a service account, that type of thing. So you might say something like, Maya can access finance machines, or people on the dev team can access the production network. You can also restrict based on port and protocol.

Not that I know of. I don't know about that, but I know WireGuard's padding is very well defined and very small. If the underlying encryption ever breaks, they have a way of changing a version number, but beyond that every single bit is used; there's basically no overhead. I don't know how IPsec handles that. Yeah, neither. Sorry.
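As a rough sketch of the ACL examples just mentioned, Tailscale-style access rules look something like this; the group, tag, and network values are hypothetical:

```jsonc
{
  "acls": [
    // Maya can reach machines tagged as finance servers, on any port
    {"action": "accept", "src": ["maya@tailscale.com"], "dst": ["tag:finance:*"]},
    // the dev team can reach the production network, but only over HTTPS
    {"action": "accept", "src": ["group:dev"], "dst": ["10.1.0.0/16:443"]}
  ]
}
```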
So Tailscale has a feature called Tailscale SSH, which does basically that. If you can install Tailscale on your nodes, you can restrict who can access your nodes and as what user; it runs its own SSH server on the machine, so you get much more fine-grained access control. There's also a mode called check mode, where you can say, for example, that Maya needs to re-authenticate every four hours in order to reach the servers. You could also just run plain SSH traffic over either IPsec or WireGuard, and that should work fine, but then you have to manage the client and the usernames and passwords or keys or certs, whatever it happens to be. There's no reason that wouldn't work for both IPsec- and WireGuard-based solutions. Any more questions? Cool. If you have any feedback, there's a QR code.
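A hedged sketch of the check mode rule described above, following Tailscale's SSH ACL format as I understand it; the names, tag, and check period are hypothetical:

```jsonc
{
  "ssh": [
    {
      "action": "check",              // require interactive re-authentication
      "checkPeriod": "4h",            // re-auth every four hours, as in the example
      "src": ["maya@tailscale.com"],
      "dst": ["tag:prod-servers"],    // tagged nodes running Tailscale SSH
      "users": ["ubuntu"]             // OS users that may be assumed
    }
  ]
}
```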