All right. We should be good to go, friends. How's everyone doing today? I'm glad to be here. This is my first Open Source Summit, so I'm very stoked. My name is Lee, and I'll be calling Nigel up here later. What we'll be talking about today is a little bit about login.

Now, I'm from Colorado. I like parkour. And if you have met me in the community, it might be because I contribute to Kubernetes and to Flux. I'm a developer advocate with the Tanzu team, and I've been a dev advocate for about three years. But before that, I used to write a lot of software, and I used to write software that did operations stuff. So if there are any platform engineers or infrastructure people here, I share your pain and all of your experiences.

So let's talk about solving some problems, right? This talk is about how sharing kubeconfigs and sharing credentials is not a really mindful way to do operations and to work with each other. Identity is a really important concept when it comes to doing shared computing to solve business problems or whatever your organizational mission is. And so we don't want to be in a situation where we're adopting new technology, we're doing cool stuff, we're innovating on how we iterate on our software, we have totally new things going on, we're doing traffic shifting and playing with all this cool stuff on fancy infrastructure that's self-healing and doesn't page us in the middle of the night when a server stops for some reason, and then, while adopting this new technology, we repeat our old mistakes. So I want to just kind of be a beacon for the community. I think that in the open-source space right now in cloud native, we are building some really great solutions with some very open interfaces that have different tradeoffs, and I want to show you some cool solutions. So yeah, my name's Lee. I'll call up Nigel later.

The first thing that I want to ask before we talk about Kubernetes and OpenID Connect and all of these things: can I just get a little bit of a survey? Who's a Kubernetes user here? Pretty much the whole room, yeah. Some people are maybe not participatory, or they just like Docker Swarm better. And how many people have integrated with some sort of authentication system before? Not authorization, which is kind of a different thing; that's about access. Authentication is about login, about identity.

So when I rushed in here late and I opened up my computer, the first thing I had to do is what? I had to touch my fingertip to this fingerprint reader, and then the computer authenticates me. It's really important that it knows my identity, because I like to write in my diary on this computer. It's very private information. And while I trust people with certain things, there are some things where I'm still trying to figure out who I am. I've got private entries in there, and that's kind of just for me. So even in a one-computer, single-person situation, authentication and identity are super important. Then you can imagine, we've all been in that situation where we're starting to collaborate with our coworkers and we're like, oh man, I need access to that thing. We're doing cloud native software. That means that we have a lot of computers. It's almost too many computers to even be thinking about.
We can only remember in our little brains, I've got maybe this pool of servers over here, and there's this whole complex system, but I'm going to just think about that as one thing because I'm trying to work on this completely other thing. Distributed systems. Oh, and I should probably ping Michelle over there, see if I could get 30 minutes on her calendar, because Jonathan and the CEO were asking for something and I just wasn't sure if it was going to be realistic to implement. And oh snap, I forgot, I told Larry that I was going to review his pull request after lunch, and I didn't do that. So we're doing all this context shifting and we're working with people, and at the core of it, our systems need to know who we are. We need to learn and collaborate with each other, because the problems that we're solving together as we're digitizing the whole world are problems that are bigger than any one person is capable of bringing the whole context to solve. We are collaborators, not just engineers. We build systems for people. And so we've got to log into things.

So we just had one computer, and now we're talking about many computers. What about Kubernetes? What is Kubernetes? Kubernetes is a distributed computer, but it's not a computer with an identity system, right? It has the ability to authorize things. It's got a whole API about access, about API groups and verbs and who can do what. But it actually doesn't know who you are. It ships out of the box like that by design, because there are a lot of different interpretations of identity. And identity is the one place where, in the cloud native world, we need to be thinking about integrating with all sorts of systems, because we don't want to be repeating the previous problems, duplicating credentials, and getting into this really nasty key distribution problem.

So I want to talk a little bit about Pinniped. I had a slide over here that I actually accidentally deleted, and it talks about what comes out of the box inside of Kubernetes. You've got service account tokens, which a lot of people abuse to do all sorts of stuff. And then, deeper in the infrastructure, it also has the ability to use a certificate authority to authenticate and authorize people. With that certificate authority you can mint mutual TLS client certificates. But both of these things, I want to point out, as somebody who knows Kubernetes deeply, are not meant to be this yellow box up in the top left, the identity provider, right? Kubernetes, from an identity perspective, has some very lightweight options that are specifically engineered for bootstrapping Kubernetes components, running workloads, and giving workloads identities. Those things are not for people and identities that are outside the cluster; they're very scoped to the cluster. There's no such thing as certificate revocation inside of Kubernetes. Service accounts are not just some anonymous ID; they're actually intended to be mounted into pods automatically, so there's a bunch of attack surface there. They might be a good way to get started if you have very few clusters.
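Just to make that concrete, here's a minimal sketch of the kind of kubeconfig that ends up getting passed around when you lean on the built-in cert auth; the cluster name, server address, and base64 blobs are placeholders, not from the demo:

    # A typical client-certificate kubeconfig (placeholder values throughout).
    # Whoever holds this file IS this identity until the cert expires, and
    # Kubernetes has no certificate revocation, so it can't be taken back.
    apiVersion: v1
    kind: Config
    clusters:
    - name: demo-cluster
      cluster:
        server: https://demo-cluster.example.com:6443
        certificate-authority-data: LS0t...   # public CA bundle, fine to share
    users:
    - name: demo-admin
      user:
        client-certificate-data: LS0t...      # long-lived secret material
        client-key-data: LS0t...              # private key embedded in the file
    contexts:
    - name: demo-admin@demo-cluster
      context:
        cluster: demo-cluster
        user: demo-admin
    current-context: demo-admin@demo-cluster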
But if you're using certificate auth, and you're using service account tokens, and you're handing out kubeconfigs with secrets inside of them to people, then as your cluster list grows and as people come and leave, you cannot revoke these tokens or these certificates, which have fixed expiry dates, and you're going to be in a kind of suboptimal situation. You're going to have to bolt some stuff on top so that you can build these identity provider features.

So, talking about Pinniped: Pinniped is an open source project that has two primary components that we can think about. Any time I spin up a whole multi-Kubernetes system, several distributed computers, in order to solve my business problems across the world, I'm going to be thinking: how can I integrate those computers with my identity provider? A lot of clouds have ways to do this. A lot of clouds are very willing to be your identity provider, because if all of your access is glued to the identities from the company that you purchase infrastructure from, then you will never leave that company. It's called lock-in. But when we want to work across multiple computers, we're going to need this concept of identity, so that, for example, if your manager is interested in knowing what time you pushed out an update and doesn't want to ping you for it, we can expose the very valuable information from all of these different APIs inside of Kubernetes, contextually, based off of access for people with specific identities and memberships in groups.

With Pinniped, we can do that by deploying two components to a management cluster and then a single component to our federated clusters. And these lines over here are not network connectivity; they're just referential. In the management cluster, the Pinniped Supervisor does need to talk to the identity provider from a networking perspective, but from the federated cluster's perspective, the Concierge does not need to talk to the Supervisor. So why do we have all these components, and why are we deploying them everywhere? We have this fleet of distributed computers, and we want to achieve a couple of things. We want integrity for our tokens, so that they are not bleeding across different boundaries in the federation domain, and we want easy access that's tied to the identity of the thing in the top left, the yellow-box identity provider.

I'm going to show you where you can go and get this piece of software already. I'm going to need another Chrome window for that. Go ahead, and here I'm picking my identity to just browse. Oh, I already have that, so I need to do this. Cool. I just go to pinniped.dev, and I need to be on the Wi-Fi or else everything is going to break. Sorry, friends, I could have been a little bit more prepared. That should work now. pinniped.dev. So this is an open source project, right? Pretty much everything that we're talking about at this conference is open source. Batteries-included Kubernetes authentication: simple, frictionless, seamless. Why are we talking about ease of installation, ease of integration? It's because, in order to integrate with Kubernetes, you need to influence the way that the API server interprets identities. That often means having control of the infrastructure of your cluster's API server. And that's great until you want to start deploying geo-distributed services. I want a footprint. I want to be able to serve my users in Bangalore.
I want to be able to serve my users in Seattle and then also have a presence in Germany. There's not necessarily one place or one cloud where I can go and purchase Kubernetes clusters, or maybe I don't have the personnel or the data centers in those places to serve those geographies. Then I maybe need to use a Kubernetes-as-a-service provider, and that company is going to host my control plane. They may not give me control. Even if they do give me control over how the infrastructure of the API server is configured, it may be a completely different interface with different rules, maybe different product features, than how I set up a bare metal cluster, how I set up a GKE cluster, an AKS cluster, an EKS cluster on Amazon. It's this really ugly problem. Now we have all this friction if we're deploying these Kubernetes clusters everywhere, and it's supposed to be open source software with open interfaces.

What's really cool about Pinniped is that you are able to integrate with any Kubernetes cluster without modifying the API server. This is a very unique feature for a Kubernetes authentication provider: it doesn't require all of the raw stuff underneath Kubernetes to be exposed to the user. I'm going to demonstrate what's cool about this. We have the Supervisor, we have the Concierge deployed to multiple clusters, and there doesn't need to be network connectivity between the federated clusters. Then we just install the Pinniped CLI on all of the endpoints and clients that actually want to talk to Kubernetes. That means that if you could ask IT to roll out kubectl and pinniped to your manager's laptop, your manager can check on your work all day. Or maybe you could use a UI like Octant or Lens. This is a really cool architecture, and it's secure and safe.

That means that when I exec kubectl, and I'll show you one of these kubeconfigs, I'm catting out a kubeconfig in a terminal, the cool thing is nothing in here is secret. This is a certificate authority bundle. We've got a couple of bits of information about how to reach the cluster, what contexts and things to use. Then there's this args and command section of the kubeconfig. The command is pinniped. It's calling login oidc, using the Concierge, asking for a JWT token, and making sure to validate TLS. Then it has a cluster audience right here: a unique identifier for the cluster that this kubeconfig is tied to. This is a completely safe credential. I can use this, and it will go and talk to the Concierge, the Concierge will go talk to the Supervisor centrally, it'll use an OpenID Connect login flow, and then I will get a token for my cluster.

I'm just going to remove the... Well, I'll just do it. For instance, in the way that I'm hooked up, let's connect to the management cluster. If I say pinniped whoami, right now I am using the admin kubeconfig, because I am the person who made this cluster from this machine, so I happen to have my hands on the administrator kubeconfig. This is the kind of credential that you would normally use to access a cluster, just right out of the box. Underneath, this is a Tanzu Community Edition cluster provisioned with Cluster API and kubeadm. You get this certificate, and it does mTLS auth with the API server. Now, I am this particular username, and I have these groups. This is a group that everybody gets, and this is a group that gives you a cluster role binding to the cluster-admin cluster role.
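That args-and-command section in the Pinniped kubeconfig, by the way, is just a standard Kubernetes exec credential plugin. Here's a trimmed sketch of roughly what the Pinniped CLI generates; the issuer, endpoint, and audience values are placeholders, and the exact flags can vary between Pinniped versions:

    # users section of a Pinniped-generated kubeconfig (placeholder values).
    # Nothing here is secret: kubectl execs the pinniped CLI, which runs the
    # OIDC login flow and hands a short-lived credential back to kubectl.
    users:
    - name: pinniped-user
      user:
        exec:
          apiVersion: client.authentication.k8s.io/v1beta1
          command: pinniped
          args:
          - login
          - oidc
          - --issuer=https://supervisor.example.com/demo-issuer   # Supervisor endpoint (assumed)
          - --request-audience=demo-cluster-audience              # unique identifier per cluster
          - --enable-concierge
          - --concierge-endpoint=https://demo-cluster.example.com:6443
          - --concierge-ca-bundle-data=LS0t...                    # public CA bundle, not a secret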
If I were to use that special kubeconfig that I just showed you and then I say pinniped whoami, you'll see something cool happen. Assuming my network connectivity is up, yeah, cool. It is going and authenticating with GitHub. Then it's telling me login succeeded and I'm allowed to close this tab. When I go back to the terminal, it says log in by visiting this link; it opened my browser for me automatically, and it says optionally paste the auth code, which I didn't need to do because, again, it opened the browser automatically. Then it tells me that it's accessing this cluster using the normal, unmodified API server, and that I'm this particular username, which is my GitHub email, and that I'm a member of this GitHub organization, this GitHub organization's kube-admin team, this GitHub organization's kube-viewers team, and a bunch of other stuff.

Now, what's really cool about this is that this is a Tanzu Community Edition, Cluster API, kubeadm-provisioned cluster. It's pretty much an upstream Kubernetes cluster deployed to some random Amazon EC2 nodes. It's not integrated deeply with the Amazon cloud APIs. It's not modifying Kubernetes in any way. I have the Pinniped Supervisor and Concierge deployed to this management cluster, and I'm able to talk to it in a way where it knows exactly who I am. For instance, if I use the admin kubeconfig to get the nodes, I can do that. But as soon as I use that Pinniped kubeconfig, it says that I'm not allowed to do things anymore. So now I have a new principal. I'm no longer administrator. I've dropped all privileges except for what's normally given to authenticated users, and this is something that a cluster management team, a team of people who have access to the cluster, can use to start allowing access to individual people from different groups. And the cool thing is that if I create a new cluster and I install only the Concierge to that cluster, then I'm able to use this exact same login flow with the exact same session token that's on my laptop, but I'm going to get unique credentials for that brand new cluster. So it's very powerful, and it's not using any technologies that are coupled to a particular cloud provider or a particular way of installing and managing Kubernetes components. It's completely decoupled.

So, Lee, how does it work then? Well, if I do a pinniped get kubeconfig, I'll just write the output to /dev/null, because I'm only interested in the debug output. Right here, this is talking to the Concierge inside of the AWS management cluster that I have. And the Concierge is telling me, oh, you have this credential issuer, and the Concierge is operating in token credential request API mode. That means that the Concierge did some digging inside of the Kubernetes cluster, and did some digging with how much access it had, and what's actually happening here is the Concierge is minting short-term X.509 certificates on my behalf, authorized by my OIDC claims. So I have these OIDC providers, I've got the Concierge and the Supervisor linked up, they're all talking to each other, and once Pinniped figures out who I am, the Concierge goes to Kubernetes on my behalf and mints some credentials for me that actually let me talk to the real API server. So there are no proxies or any impersonation involved. I'm getting a credential that gives me temporary access to the API.
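Coming back to the access-granting point for a second: once Kubernetes sees your real username and groups, granting access is plain RBAC. Here's a minimal sketch of the kind of binding a cluster management team might add; the group string is made up, since the exact group claims depend on how your Supervisor or upstream identity provider formats them:

    # Give read-only access to everyone in a hypothetical GitHub team group.
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: github-kube-viewers
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: view                        # built-in read-only ClusterRole
    subjects:
    - apiGroup: rbac.authorization.k8s.io
      kind: Group
      name: my-org:kube-viewers         # placeholder group claim from the IdP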
Now, if that token credential request mode is not available, and there are different circumstances, all kinds of surprises can happen when you're deploying different kinds of Kubernetes clusters, Pinniped has a fallback mode that you can also just enable explicitly, which is the impersonation proxy. So instead of going to the API server directly, you can use the Pinniped Concierge as an intermediary. You can make sure that you deploy it in a way that fits your security model, and then all of your API requests will instead proxy through the Concierge, as an optional deployment mode. That allows the Concierge to use impersonation instead of the token credential request API, which is sometimes an easier way to support things when you're deploying it.

So I made a kind cluster. It doesn't get more boring than this. These days it's really easy to get plain Kubernetes; there's no cloud provider integration or anything. Let's look into my control repo. In this control repo, I have a workload folder. Let me make this just a little bit bigger. That's probably too big. And there's a build of a Carvel project that I'm just using to manage my config. This auth infra is just a YAML file that you can get off the internet. I didn't write this. It's just the install of the Concierge, with the custom resource definitions and deployments and all the things in kube-public and kube-system that you need for a basic Concierge installation. No changed options or anything.

And then this is the configuration that I need to add to a federated cluster. There's nothing secret in here. I just tell the federated cluster; honestly, this could be a config map, it's just nicer as a custom resource. The JWTAuthenticator just tells Pinniped: where is the issuer, the Supervisor or whatever OIDC server I need, and what is the audience of this cluster that I should request tokens for? And that audience being unique per cluster is important, because those are the bits that ultimately produce cluster-unique tokens.

So, this very simple configuration. I'll use kapp just to make this a little bit easier, but you could do kubectl apply just as well. I think I've done this before, actually. Yeah. kapp deploy, with the workloads config. So I will just deploy the infrastructure. This is what a Concierge deployment looks like. There are some cluster-scoped resources: the CRDs, a couple of basic cluster role bindings so that Pinniped is able to pivot and do some secure things for you in order to manage identities on your behalf as an intermediary. And then there's just the infrastructure that's needed: service accounts, role bindings, and, you know, networking. So I'm just going to say, yep, this is all security-reviewed, and it's actually part of real products that you can pay for, but all of this stuff is open source. And kapp right here is just a little bit different from kubectl: it's waiting on all of the resources to actually tell me that things are deployed properly. So again, I'm deploying the Concierge to my kind cluster. And yeah, there's no special networking or any storage being used. It's just a very lightweight shim that we're putting in there, and it's got to download a container image. I haven't tried this on conference Wi-Fi; actually, this might be a bad idea. Cool, there we go, the Concierge is deployed. And then I will run a different command than the one I ran earlier.
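For reference, the JWTAuthenticator resource that gets applied in the next step looks roughly like this; the issuer, audience, and CA bundle are placeholders, and the apiVersion reflects the v1alpha1 API that the project shipped around the time of this talk, so check the docs for your version:

    # Concierge-side config: trust JWTs from this issuer, for this cluster only.
    apiVersion: authentication.concierge.pinniped.dev/v1alpha1
    kind: JWTAuthenticator
    metadata:
      name: supervisor-jwt-authenticator                      # hypothetical name
    spec:
      issuer: https://supervisor.example.com/demo-issuer      # Supervisor (or other OIDC) issuer
      audience: demo-cluster-audience                         # unique string per cluster
      tls:
        certificateAuthorityData: LS0t...                     # CA bundle for reaching the issuer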
So now that I have the Concierge deployed, I'm just going to add that JWTAuthenticator custom resource to the cluster. This is just at the cluster scope, adding a JWT authenticator. It's going to make this resource. There you go. So again, if I ask pinniped whoami: up to this point, I've been operating as the cluster admin, using an mTLS-backed X.509 cert to talk to the kind cluster. If I instead do a pinniped get kubeconfig and then overwrite whatever was there before... this is "could not find a healthy agent pod." Oh, it looks healthy enough to me. I've never seen this error before. There we go. I must not have waited long enough. Cool. So here again, we're seeing the Concierge operating in token credential request API mode, and I'm able to get a kubeconfig. And now the kind cluster that I just deployed is going to go check some OIDC claims, make sure that my session is updated, and then it knows exactly who I am.

So even from an individual laptop perspective, right, I logged into this laptop, I own the compute, so why is it so important that I'm logging into kind? Well, what you see in a lot of developer environments is that there will be some shared virtual machine, and somebody just really wants a quick way to get Kubernetes, so they run kind and expose a load balancer on all interfaces on the host using some port forwarding. This is a really convenient way to add some real identity to a development environment that's super quick and fast.

Right, so when we look at the promise that's here: anything that's OpenID Connect, anything that's about authentication, about TLS, and all of the other mess of things that you need. And then also, to deploy the Supervisor, you have to have a secure ingress stack, and it's got to be on a network that everybody can reach, which is probably the internet unless you have a VPN. If you have been down this road, you know this is not easy and frictionless. Right? This page, you're doubtful, you're looking at it, you're cynical: there's no way that this is true. But once you've got that management cluster set up, once you have the Supervisor hooked up to your identity provider, and I've just been saying OIDC, but we support a bunch of things in Pinniped, there's an LDAP connector in there, you can hook it up to Active Directory. When you've got that set up, now you have the ability to federate an authentication domain across any Kubernetes cluster, regardless of who is providing it, regardless of how you created it, how you manage the API server, or how long it's going to be around. It's easy enough and frictionless enough that, going back to the picture, if you make that investment and you can get the Pinniped CLI on all of the clients, then you can scale out federated authentication in a secure way. That's really nice, and it can tie into something like a GitHub identity, or your Gmail account, or Google Workspace, or something from Active Directory, which basically every enterprise in the Fortune 500 has. Now you have a frictionless way to tie those identities into Kubernetes without doing really gross things, like minting a bunch of certificates that duplicate identities from elsewhere and having to distribute those kubeconfigs in a secure way using Vault or a password manager or whatever.
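For completeness, the Supervisor side that makes all of this possible is also just a couple of custom resources, sketched here with placeholder hostnames and names; an LDAPIdentityProvider or ActiveDirectoryIdentityProvider can stand in for the OIDC one, and the exact fields may shift between Pinniped versions:

    # Supervisor-side config on the management cluster (placeholder values).
    # The FederationDomain issuer is what every cluster's JWTAuthenticator points at.
    apiVersion: config.supervisor.pinniped.dev/v1alpha1
    kind: FederationDomain
    metadata:
      name: demo-federation-domain
      namespace: pinniped-supervisor
    spec:
      issuer: https://supervisor.example.com/demo-issuer
    ---
    # The upstream identity provider being federated, OIDC in this sketch
    # (for example, a Dex instance fronting GitHub, as in this demo).
    apiVersion: idp.supervisor.pinniped.dev/v1alpha1
    kind: OIDCIdentityProvider
    metadata:
      name: upstream-oidc
      namespace: pinniped-supervisor
    spec:
      issuer: https://dex.example.com
      client:
        secretName: upstream-oidc-client     # Secret holding the OAuth client ID/secret
      claims:
        username: email
        groups: groups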
And again, that Pinniped kubeconfig has no secrets in it, and every developer can use it to bring their own identity, which means that you can just check it into a Git repo. It is beautiful. Really nice synergy with GitOps, and super fun to use. So that's sort of the meat of the demo. Some fun facts about this: if we actually log back into the management cluster, and I'm using Flux here, these are the components that were ultimately necessary for me to productionize this on a regular AWS Cluster API-provisioned cluster: the ability to deploy all of the TLS stack and the secure ingress necessary to have OIDC end to end. And inside of here as well, there is another component that you can find in the cloud native ecosystem called Dex, which does some similar things and has some overlapping goals with Pinniped. The only reason that Dex is in there is because it knows how to talk to GitHub's esoteric OAuth 2 implementation. So I just shimmed it in as an OIDC server that can talk to GitHub, because GitHub doesn't implement OIDC. So in the final picture, where we have the identity provider, in this particular demo there is Dex sitting in between GitHub and the Pinniped Supervisor. Pretty fun to hack on.

Before we get into any other details and questions, I want to invite Nigel to come up and talk a little bit about how this is really cool software that's not just something that we built for our product at VMware, but that we intended to build in an open, community way. Nigel's an Austin local here; if you're an Austin local, go hit him up. And yeah, you can find him on Twitter. You also work on Contour? I do, yeah, that's correct.

So hey everyone, my name's Nigel. I am a community manager at VMware. I manage the communities for the open source projects Contour and Pinniped, and we're here talking about Pinniped today. So I wanted to invite you all to get involved with us. Pinniped's community is growing, and we actively want you to be a part of it. This is an open source conference, and open source projects don't work without communities. Community is important to us because, for me personally, I think that community is a necessity of the human condition, and our technology is just an excuse for us to find a place to belong, to feel like we're contributing and affecting the outcome of these projects. Regardless of where you are in your Pinniped journey, we have our governance document in the repo that describes the different roles, from users to contributors to maintainers. Even if you're not actively using Pinniped and you're using some other solution, it's still helpful for us to hear your voice: let us know why you choose Pinniped or why you don't, tell us about your use cases, or come to us and get support and figure out how you can do all these cool things that Lee's been doing today with your own clusters. We are actively soliciting your input on how we can help you be successful with this project. My entire job at VMware is to create the conditions for a community to grow. I am here for you, and to make sure that everything you want to do, you're able to do with us.

So I wanted to tell you a little bit about how you can find us. The pinniped.dev website that Lee went to earlier has a community tab with all of this information, but I also wanted to put it up here for you to see. We have our community meetings on the first and third Thursdays of the month.
Community meetings are your opportunity to meet synchronously with our maintainers and with our other community members to learn about what's going on, and we invite you to join us every first and third Thursday at noon Eastern, 9am Pacific. We have a Google group for email communication; that's where our announcements and requests for comments go out. We have a few PRs open right now that we're actively looking for folks to comment on and tell us what they think, or your contributions. You can find all that stuff on the GitHub page, but we'll also send updates out on the Google group. You can find us in the Kubernetes Slack workspace in the #pinniped channel, and interact with us on Twitter at Project Pinniped. We're here for you. We want to hear you. So please come join us. Thank you.

Thanks, Nigel. Yeah, again, hit Nigel up, he's here. And this whole control repo that's actually responsible for setting up the infrastructure is fully reproducible. I have destroyed and recreated this cluster many times with older versions of the Tanzu CLI, making different AWS clusters, updating packages. You can read the history of this control repo. It's actually in a GitHub organization, and if I go to that org, Stealthytale... so I'm stealthybox on GitHub, and I made this organization to start playing with some stuff in Tailscale and figured, oh, this is a cool place for me to play with OAuth apps.

So in GitHub, if you want to set up an OAuth app, you just go to your organization or your personal settings. And then I think you go here and, no, sorry, it's under developer settings and then under OAuth apps. There are so many buttons. Yeah. So right here, this is the OAuth app that I made for this demo. You go in here, you click this create button, you put some info in here. This is where I configured the callback to Dex; it's a couple of fields. And if you know what to put in here, it's not that big of a deal, but sometimes, when you're setting up this kind of stuff, it's just a lot of research, right? Like, does this need a trailing slash or not, that kind of thing. And if you're really proud of yourself, you can upload your organizational logo here. But once you make your app, you just generate this client secret, so you have a client ID and secret combo. And then, in my case, the secrets management for my repository is just in my repository; it's encrypted with SOPS. I just have a values file for the management cluster here where we update that. So here's the encrypted client secret, right? With sops edit, you can see the decrypted version. My private key is not committed to the repo. Anyway, that's a bit about GitOps secrets management. I'm a GitOps nerd.

But yeah, the whole point is this control repo, you can find it there. Go take a look at the code. There's a bunch of good stuff in there. I think I'm using like three different package managers: there's plain YAML vendored into my repo using a Carvel tool, there's a ytt project in there, and then there's also Flux with a HelmRelease object and a bunch of kustomizations. It's kind of gnarly, but that's the kind of thing that you need to do when you actually want to get organized on Kubernetes without introducing a bunch of entropy.
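As an aside on that GitOps secrets flow before we wrap up the repo tour, the day-to-day SOPS commands look roughly like this, assuming a .sops.yaml in the repo already points at your age or PGP key; the file path here is made up:

    # Encrypt the values file in place before committing it.
    sops --encrypt --in-place config/values/management.yaml

    # Open it in your editor: sops decrypts, lets you edit, and re-encrypts on save.
    sops config/values/management.yaml

    # Decrypt to stdout, e.g. to feed a templating tool at deploy time.
    sops --decrypt config/values/management.yaml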
And this is the kind of control repository that could scale to managing many different pieces of infrastructure with multiple teams via pull request, using your identities, so that when things go wrong, you can take those identities to Kubernetes and have access to the things you need. So that's kind of the vision. And I think I have a couple of minutes for questions. Yeah. So we just have to repeat the questions for everyone online. Yeah, for sure. I'm standing in front of the speaker. Does anybody have any questions, either about Lee's amazing demo or about the community? Yes, please.

So, why this over kubelogin? Let me repeat the question. The question is: why would you use Pinniped over something like kubelogin? And kubelogin is, actually, is this the Rancher project or the open source project? One second, let me get a refresher on kubelogin, because I don't want to say the wrong information. There are a couple of different shapes of things. So this is an OIDC authenticator, and I believe you would need to modify the API server, would be my guess here. Yeah. So the reason that you would use Pinniped over this is because you don't have to modify the cluster. In this example they're using GKE; I don't have to go into the GKE control plane options and do specific GKE things to configure my identity provider. That's probably the biggest advantage of using something like Pinniped. Also, we support more than just OIDC. I don't know if the kubelogin project supports things like LDAP, per se. I know it's a very popular option, but you could use Pinniped in the same exact way and just configure an LDAP connector instead of OIDC. Great question.

Do we have anyone else? Everyone's good. You understood everything perfectly. But Lee's got all of his information up there, or had all of his information up there. If you have any questions, we're going to be around as well. Yeah. I'm always happy to chat. Again, I would love to connect with you and hear some stories about what you're doing. That's my Twitter, that's my GitHub. And thanks for coming and learning with us. Oh, sorry, one more question.

In this case, so the question is: this works really well with a Tanzu-style, Cluster API-inspired architecture, where you have a management cluster and workload clusters. And it does mirror that, but none of this needs to be Tanzu Kubernetes underneath. You don't need any Cluster API underneath. The clusters don't need to know about each other. I could deploy five different kind clusters on a bunch of different virtual machines in different clouds, and you just kind of pick one: that one is your Supervisor. You host an endpoint anywhere that all of your clients can reach, and then you don't have to do all of the other messy authentication glue to hook up with your identity provider on the rest of them. You just install the Concierge, point them all to the central place, and then you have auth that works. This does beg the question: what about failure domains and that kind of stuff? With authentication, that's always a really important concept, right? So that central Supervisor endpoint, the failure model of whatever that is, is quite important, and it might be one of the reasons why you choose a different solution, if you don't have the operational expertise or the services available to you to ensure the availability of that Supervisor endpoint.
Sorry, I don't quite understand. Oh, voting. Yeah, all of the leader election and resiliency and cross-AZ type stuff, you can just delegate to Kubernetes for that, so Pinniped doesn't have to do anything for it inside of the project. You can just host a really resilient management cluster that doesn't host workloads and doesn't have ingress traffic from the internet or anything like that. It's just for authentication services and whatever other central things. It's also a good place to put something like a secret store like Vault, you know, because this is authentication, right? It's secure infrastructure that you want nothing else touching. And that way you can also have a multi-AZ cluster; the costs are not going to be very high, and failover is going to be based off of the fact that that multi-AZ cluster is very resilient and backed by etcd. That's a great question. Thanks for the question.

Cool. Well, I'm super happy to talk, and I won't steal much more of your time. I think on the schedule right now is a coffee break, so see you out there. Thanks, everyone.