Hey, everyone. Today, I'll be talking to you about Pinniped, the unified framework for user authentication to Kubernetes clusters. I'm joined by Anjali, the PM for the project, and I'm one of the engineers on the Pinniped team. So let's imagine that you use Kubernetes, you've started to get used to using kubectl, and you're really enjoying it. Now it's time to start deploying it for real in your environment, where you happen to use Active Directory to provide identities for your users. That seems like a fairly common thing to want to do. So let's think about how you might go about it. You might start off with a Google search, and it appears that the first result for the most obvious search is a release blog post. So let's see where that takes us. Now, if you've used kubectl and Kubernetes for a while, you'd imagine it would be a series of kubectl apply commands. Instead, what you encounter is a very complex set of configuration using two components, Dex and Gangway, which is unexpected. There's a lot of YAML, a lot of instructions, and it's not really clear what you're getting yourself into. So maybe let's back up; maybe that was a bad search result. The second result appears to be some official Kubernetes documentation. Maybe that can help us find a way to use Active Directory with Kubernetes. We look through the official authentication documents for Kubernetes, and they refer to client certificates, tokens, and proxies, but nothing specific about using Active Directory. And here we start learning some of the problems: Kubernetes is very pluggable, and it does not have direct integration with Active Directory. You're kind of on your own there. This is beneficial if you want to build custom integrations, but not so fun if you just want to get some work done. So let's head back to the blog post and see what it entails. We have to deploy two components, Dex and Gangway, in concert.
We have to consider various CLI flags on the API server, learn some OAuth semantics, and maintain some very specific coordinated state between these components. And the reality is that these components were not built together. They're open source projects, important in their own right, and there's not necessarily anything wrong with this setup; it's just not curated, and it can be difficult to understand what's happening. It's really per single cluster, and it doesn't scale out. You can imagine a ton of extra work to add these components to every single cluster you use, and it doesn't necessarily lead to the most convenient or secure deployment. So Pinniped attempts to solve these types of problems by providing a much more Kubernetes-native approach to authentication via common providers such as OIDC and Active Directory, and it allows you to configure these at runtime. It is an open source project that you can use on any Kubernetes distribution. So you want to start installing Pinniped. The first steps are really easy: you just kubectl apply the manifests to install the two core components, the Pinniped Supervisor and the Pinniped Concierge. The Pinniped Supervisor is just a web server, so it will require you to configure ingress and TLS for it. Here's an example of how we've done it on a GKE cluster with cert-manager and Let's Encrypt. We start off by creating a Service of type LoadBalancer for the Supervisor. Next, we configure Google Cloud DNS to point to that Service. We install cert-manager, and then we request a certificate from cert-manager for the particular hostname that you intend to use. So we finally come to the configuration steps that are more Pinniped-specific. All the previous steps were just configuring the web server, and you've probably done that for other applications you may be using.
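The exposure steps just described might be sketched with two manifests. This is a minimal sketch, not the talk's exact configuration: the namespace, label, ports, ClusterIssuer name, and hostname are all illustrative placeholders, so check your own install manifest and cert-manager setup before applying anything like this.

```yaml
# Expose the Supervisor with a LoadBalancer Service.
# Namespace, selector label, and ports are assumptions for illustration.
apiVersion: v1
kind: Service
metadata:
  name: pinniped-supervisor-loadbalancer
  namespace: pinniped-supervisor
spec:
  type: LoadBalancer
  selector:
    app: pinniped-supervisor
  ports:
    - protocol: TCP
      port: 443
      targetPort: 8443
---
# Request a Let's Encrypt certificate from cert-manager for the hostname
# you intend to use as the issuer (hostname and issuer name are examples).
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: supervisor-tls-cert
  namespace: pinniped-supervisor
spec:
  secretName: supervisor-tls-cert
  issuerRef:
    name: letsencrypt-prod
    kind: ClusterIssuer
  dnsNames:
    - supervisor.mycompany.com
```

The DNS step sits between these two: once the LoadBalancer has an external IP, a Cloud DNS record for the chosen hostname points at it, which is what lets the certificate request succeed.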
So the first core step is to create a FederationDomain and configure it with the issuer URL from the previous steps. What a federation domain entails is that the issuer, for example supervisor.mycompany.com, represents the set of Kubernetes clusters that are going to trust this particular Pinniped Supervisor. Now let's go to the Active Directory configuration. It's as easy as you can imagine: you create an ActiveDirectoryIdentityProvider custom resource, point it at the hostname of your Active Directory server, and provide it with bind credentials. That's it. There are other configuration options, for example user search and group search with custom attributes, but the default configuration that we provide is well curated for most Active Directory deployments. OK, so the next step is to configure the Concierge. We want it to trust the Pinniped Supervisor as an identity issuer, for a specific cluster audience; in this case, it is the dev cluster, so this is likely going to be used by developers. Now you're ready to generate the kubeconfig and distribute it to your developers. Notice that there are no credentials in the kubeconfig, so it is safe to distribute to users. Your developers can take that kubeconfig and start accessing the cluster with kubectl commands. Here's an example of a developer running a kubectl get namespaces command on the CLI. They get prompted for a username and password because, of course, we've configured Active Directory, and once they are successfully logged in, they can see the namespaces on the cluster. Your developers may want to send more kubectl commands to the cluster, but they are not going to be prompted again for a username and password, because their credentials are cached.
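The three custom resources walked through above might look roughly like this. This is a sketch based on Pinniped's v1alpha1 APIs; the hostnames, resource names, secret names, and audience are illustrative placeholders rather than values from the talk.

```yaml
# Supervisor side: the federation domain (the issuer URL your clusters trust).
apiVersion: config.supervisor.pinniped.dev/v1alpha1
kind: FederationDomain
metadata:
  name: my-federation-domain
  namespace: pinniped-supervisor
spec:
  issuer: https://supervisor.mycompany.com
  tls:
    secretName: supervisor-tls-cert   # the cert-manager-issued certificate
---
# Supervisor side: point Pinniped at your Active Directory server.
apiVersion: idp.supervisor.pinniped.dev/v1alpha1
kind: ActiveDirectoryIdentityProvider
metadata:
  name: my-active-directory
  namespace: pinniped-supervisor
spec:
  host: ad.mycompany.com
  bind:
    secretName: active-directory-bind-account  # Secret holding bind credentials
---
# Concierge side (on the dev cluster): trust the Supervisor as the issuer,
# scoped to this cluster's unique audience.
apiVersion: authentication.concierge.pinniped.dev/v1alpha1
kind: JWTAuthenticator
metadata:
  name: supervisor-jwt-authenticator
spec:
  issuer: https://supervisor.mycompany.com
  audience: dev-cluster
```

With these in place, the credential-free kubeconfig mentioned above is generated with the Pinniped CLI's get kubeconfig command and handed to developers.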
So we provide some helpful commands, for example the pinniped whoami command, which helps you understand how you are logged into the cluster. In the previous example the user typed in a short username, but Kubernetes will display the full user principal name. The flexibility that we offer to the IT administrator is to configure the username however they want, for example using the sAMAccountName, userPrincipalName, or mail attribute. Also, by default, we give you all of the direct and nested groups. You can easily change and customize this based on your needs, and we provide ample documentation and examples to support this. OK, so you logged into one cluster with the Supervisor and Concierge. Now you may want to give your users access to another cluster. Well, it's as simple as installing the Concierge on the second cluster with just two kubectl commands and then configuring the JWT authenticator to trust the Supervisor, giving it a unique audience. We don't need to install the Supervisor or repeat any other configuration. Now you can generate a kubeconfig for the second cluster and pass it to your users and developers. Also, users don't get prompted a second time, as their credentials are safely cached and can be used across clusters as long as those clusters are part of the same federation domain. So to recap: Pinniped allows you to add and remove identity providers at runtime using standard Kubernetes custom resources. It supports multiple different types of identity providers. It allows easy login across many clusters with single sign-on support, and, of course, it's open source. Looking to the future, we're looking to add multiple-IDP support. If there's community interest, we would like to add Kerberos support for Active Directory, so you no longer even have to type in your password. Additional IDP types like GitHub and Google could also be implemented.
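The second-cluster step described earlier boils down to one small custom resource once the Concierge is installed there. Again a sketch with placeholder names; only the audience needs to differ from the first cluster's.

```yaml
# On the second cluster's Concierge: trust the same Supervisor,
# but with an audience unique to this cluster (placeholder value).
apiVersion: authentication.concierge.pinniped.dev/v1alpha1
kind: JWTAuthenticator
metadata:
  name: supervisor-jwt-authenticator
spec:
  issuer: https://supervisor.mycompany.com   # the same Supervisor as before
  audience: prod-cluster
```

After applying it, the Pinniped CLI's get kubeconfig command run against this cluster produces the credential-free kubeconfig to hand out, and because both clusters share the issuer, cached single sign-on credentials carry over.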
And we're looking at various hardening efforts, such as more frequent and automatic rotation of all signing keys. We're really looking to have community members provide input on our roadmap so we can prioritize the things that people need in their Kubernetes environments. We welcome your feedback, and we look forward to working with you. You can find us in the Kubernetes Slack as well as on GitHub. Everything here is Apache 2.0 licensed, just like the rest of the Kubernetes ecosystem. Thank you.