Okay, thank you for joining us. Welcome to today's CNCF live webinar, Easy Secure Kubernetes Authentication with Pinniped. I'm Libby Schultz and I'll be moderating your webinar today. I'd like to introduce our speakers, Matt Moyer and Margo Crawford, both software engineers at VMware.

A few housekeeping items before we get started. During the webinar you're not able to speak as an attendee, but there is a chat box on the right side of your screen for you to speak up and ask questions. Feel free to drop them there and we'll get to as many as we can. In addition, please also join the CNCF public Slack channel I posted in the chat, #cncf-online-programs, to continue your conversations later and to address any questions we might not get to during the webinar. This is an official webinar of the CNCF and as such is subject to the CNCF code of conduct. Please do not add anything to the chat or questions that would be in violation of that code of conduct, and please be respectful of all of your fellow participants and presenters. Please also note that the recording and slides will be posted later today to the online programs page at community.cncf.io. They are also available via your registration link, and the recording will be posted to the online programs YouTube playlist on the CNCF channel. With that, I will hand it over to Matt and Margo to kick off today's presentation.

Thanks, Libby. I'm going to share a pre-recorded version of part of our presentation here, and then we're going to have plenty of time for questions afterwards. One of the benefits is that while Margo and I are presenting in the video, we'll also be answering questions in chat, so feel free to drop questions as we go. We'll probably answer some inline and save some for the end to answer in person.

Good morning, good afternoon, good evening. My name is Matt Moyer, and I'm here to talk to you today about Pinniped. I'm here with Margo Crawford. We are engineers at VMware on the Pinniped team, and today we're going to talk about what the problem is with Kubernetes authentication as it stands, what we built in Pinniped, how it works, how you can use it to enable smooth authentication on your Kubernetes clusters, and then we'll have some time for questions.

There are really two gaps in the Kubernetes authentication user experience that we set out to solve with Pinniped. The first is that even though Kubernetes auth is very extensible and there are lots of options, they're mostly all configured with CLI flags on the Kubernetes API server. This means that at best, you'll have to restart the API server any time you want to change one of them. At worst, it means you won't even have access to those flags because they're managed by your cloud provider. The other gap is that even though Kubernetes has all these options, it doesn't come with any opinionated login flow. It doesn't come with an end-to-end way to take an external identity provider and give users a way to log into a cluster; it just gives you the tools to build one yourself. Pinniped takes the options that exist in Kubernetes and extends them into a dynamically reconfigurable, end-to-end, out-of-the-box login flow. It's the batteries-included Kubernetes auth experience.

What is Pinniped? Pinniped is an open-source project. We've been building it for a bit over a year now. It enables dynamic configuration of Kubernetes authentication. This means that you can install it onto any existing, running Kubernetes cluster, and then reconfigure it to add or remove different authenticators at runtime.
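For context, the static API-server configuration referred to above looks something like the following sketch. These kube-apiserver flags are real, but the values here are made up, and the exact flag set varies by Kubernetes version:

```sh
# Built-in OIDC auth is wired up with static kube-apiserver flags like these.
# Changing any of them means restarting the API server, and on a managed
# cluster you typically can't set them at all. (Values are illustrative.)
kube-apiserver \
  --oidc-issuer-url=https://idp.example.com \
  --oidc-client-id=kubernetes \
  --oidc-username-claim=email \
  --oidc-groups-claim=groups \
  --oidc-ca-file=/etc/kubernetes/oidc-ca.pem
```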
It provides a better login user experience for kubectl. You get a kubeconfig that doesn't have any hard-coded secrets. You can easily connect with OpenID Connect and LDAP identity providers, and you can have an experience that spans multiple clusters: you log in once in the morning to your OIDC identity provider, and for the rest of the day you're logged in transparently to all your Kubernetes clusters. Even if you have 10, or 100, or 1,000, everything just works throughout your day.

Pinniped is a VMware project, but it works with any Kubernetes cluster. We ship Pinniped as a core component of some of our commercial products, but it was important for our use cases that it work with any existing Kubernetes cluster a customer has. Even if your cluster is running on Google GKE, Azure AKS, Amazon EKS, or any other commercial distribution of Kubernetes, Pinniped should work great. As a cluster administrator, once you've installed Pinniped, you'll have kubeconfigs for your users that are super easy to use. All the Pinniped configuration happens via Kubernetes-native APIs, so if you have a declarative GitOps deployment pipeline, you can use it to manage Pinniped as well.

When we built Pinniped, we envisioned a common deployment scenario, something like this. On the left, you have an admin user who installs and operates some Kubernetes clusters. They have access to what we call an admin kubeconfig that they got when they created each cluster. This is typically a super-powerful kubeconfig with some hard-coded secret key, and it does not identify any individual user. Usually it encodes the system:masters group, so it even bypasses all RBAC on the cluster. It's dangerous to keep this around, but it's usually necessary to bootstrap the rest of the system and as a fail-safe. On the right, you have regular users who just want to log into the cluster and deploy applications; these might be developers, for example. In a typical organization, we expect that even the admin users don't use their admin-level access on a daily basis, just when they need to perform low-level operations on the cluster infrastructure.

In Pinniped, the admin user is responsible for installing and configuring Pinniped. They can set up logins via their enterprise identity provider, such as Active Directory or something like Okta, and then they can generate a kubeconfig for regular users to use. These Pinniped-based kubeconfigs are somewhat special: they don't contain any credentials for accessing the cluster, and they're not user-specific. Instead, they just describe how to connect to the cluster. All users can download and use the exact same kubeconfig file, but when they log in, they'll have their correct individual username and groups from the external identity provider. As an admin, this means it's easy to manage access to the cluster via Kubernetes RBAC role bindings.
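To make that concrete, here's a minimal sketch of the kind of role binding an admin might create. The group name platform-devs is a hypothetical group coming from the external identity provider:

```yaml
# Grant the "platform-devs" group from the external IDP (hypothetical name)
# edit access in the "dev" namespace. Pinniped presents IDP usernames and
# groups to Kubernetes, so ordinary RBAC objects like this control access.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: platform-devs-edit
  namespace: dev
subjects:
- kind: Group
  name: platform-devs
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: edit
  apiGroup: rbac.authorization.k8s.io
```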
Pinniped has a few different components that can be deployed independently of each other. First up, we have the concierge. This can be deployed on any cluster. It takes an OIDC token and translates it into something the Kubernetes cluster can process in one of two ways, depending on your cluster architecture. One is creating an X.509 certificate that is signed by, and therefore trusted by, the cluster. The other is forwarding requests via an impersonation proxy on behalf of the user.

Next up, we have the supervisor, which is typically deployed once on a very trusted cluster. It's an OIDC server that allows users to authenticate with an external OIDC or LDAP provider (and possibly other identity providers in the future), and it issues its own tokens, based on user information from the IDP, that can be used by the Kubernetes clusters.

The Pinniped CLI is used to generate the kubeconfig that users will use, and behind the scenes it's also working to make the login experience seamless when users run kubectl commands.

One thing we try to ensure with Pinniped is that tokens are not replayable. That is, if one cluster is compromised for any reason, users' tokens can't be stolen from that cluster and reused on any other cluster that also uses Pinniped. We do pass a token with user information to each cluster, but we make the tokens unique from each other by changing the audience via an RFC 8693 token exchange. This happens behind the scenes without the user having to log into each Kubernetes cluster independently, so users can still log in once per day to access all of their clusters while staying secure.

Now we'll take a look at this architecture diagram for a Pinniped deployment, in this case one where the Kubernetes control plane is accessible. This is usually the case for self-hosted clusters. On the user's first kubectl command, the Pinniped login is triggered via Kubernetes' exec credential plugin mechanism, and the CLI requests a federated login via the supervisor. The Pinniped supervisor then prompts the user to log into their external IDP, and using the information from the token or attributes returned by the external IDP login, the supervisor mints an OIDC token to pass back to the CLI. The CLI turns around and requests a second token from the supervisor with the same information but with a cluster-specific audience, which the supervisor mints and passes back. Next, the CLI creates a credential request to the Pinniped concierge's aggregated API using the token it just received. The concierge uses the cluster's signing key from the Kubernetes control plane to create a short-lived certificate for the cluster. Subsequent Kubernetes API requests use the short-lived certificate, refreshing it as needed.

Next, we'll take a look at how the architecture changes when the Kubernetes control plane is not accessible, where we use an impersonation proxy. This is the case on many cloud providers' Kubernetes distributions, like GKE, EKS, or AKS. The first steps are the same as before: the initial login happens at the supervisor, which mints cluster-specific tokens after login to the external IDP. The Pinniped CLI still creates a credential request to the Pinniped concierge's aggregated API. However, instead of minting certs using the cluster's signing key, the concierge issues certs using its own keys, which are not automatically trusted by the cluster. Requests include the certificate and are passed through the Pinniped concierge impersonation proxy, which uses the cert to construct impersonation headers so it can pass the request along to the Kubernetes API server on behalf of the user.

All right, now it's time for a demo of how to set up and use all the different components of Pinniped. Today we're going to learn how to set up Pinniped, and as I go through this demo, I'll be using lots of examples from our website at pinniped.dev. If you ever want to dive into more detail, you can also, of course, find us on GitHub at vmware-tanzu/pinniped.

In the first part, we're going to install the Pinniped concierge on a cluster. We'll start by installing some local command-line tools: the Kubernetes CLI (kubectl), kind for making a local cluster, kapp from the Carvel tool suite, and the Pinniped CLI. We can install all of these quickly with Homebrew on macOS. Next, we use kind to create a locally running Kubernetes cluster. We've asked kind to output the initial kubeconfig to a file called kind-admin.yaml.
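A rough sketch of those steps follows. The Homebrew tap paths are best-effort from memory, so double-check them against the install docs on pinniped.dev:

```sh
# Install the demo tooling with Homebrew on macOS.
brew install kubectl kind
brew install vmware-tanzu/carvel/kapp            # kapp from the Carvel suite
brew install vmware-tanzu/pinniped/pinniped-cli  # the Pinniped CLI

# Create a local cluster, writing the bootstrap admin kubeconfig to a file.
kind create cluster --kubeconfig kind-admin.yaml
```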
This is what we typically call an admin kubeconfig in Pinniped. It's the way you bootstrap access into a cluster and a good fail-safe to keep around, but it contains a hard-coded secret key credential, so it can't easily be shared with your teammates.

Now that our cluster is online, we can use kapp to install the concierge. You can see that the concierge consists of a bunch of Kubernetes resources. Once it's up and running, we can see that there are some new pods in the Pinniped concierge namespace, and there are a number of new Kubernetes APIs we can use to configure and interact with the concierge. One of those is CredentialIssuer, which lets us inspect the current status of the concierge on this cluster. You won't normally need this information, but this is how the Pinniped command line knows how to connect to the concierge.

Now that the concierge is installed on our cluster, we can configure it to use an OpenID Connect provider for authentication. In this demo we've chosen GitLab, because it's free and easy for anyone to get started with, but you could also use Okta, Ping Identity, Azure AD, ADFS, or any other OIDC provider. To start, we'll need to go into GitLab and register a new OIDC client, which GitLab calls an application. We'll give our application a name, unmark the confidential box because this is a public client, set our redirect URI to match the required settings for the Pinniped CLI, and ask for the openid and email scopes. Once that's created, we'll have a client ID, which we can copy.

We configure the concierge for GitLab using the JWTAuthenticator custom resource. We've already filled out the rest of this object, but we need to set the audience to match the client ID we just generated. Once we apply this object to our cluster, we can use the get kubeconfig command in the Pinniped CLI to generate a Pinniped-based kubeconfig. In this case, we also need to set a few options because we're using GitLab directly with the CLI. If we take a look at our new kubeconfig, we can see that it does not contain any secrets like the admin kubeconfig had. This file is safe to share with your teammates.

If we run kubectl using our kubeconfig, we get a browser pop-up to log in, but we forgot to add RBAC permissions. Let's take a look at what username we're actually authenticating with. We can do that with another Pinniped subcommand, whoami, which tells you everything about your current identity. Let's use it with our admin kubeconfig first, where we can see that our username is kubernetes-admin. If we run it again with our new kubeconfig, we see my email address from GitLab. Let's add a cluster role binding to give my new user full access to the cluster. And now we can see that the get pods command works like we wanted.

One thing you might have noticed is that when I ran that command a second time, I didn't get a browser pop-up. That's because my temporary session credentials are saved in a local cache file. We can take a look at that file and see my GitLab ID token and some other metadata. And if we delete that directory and rerun kubectl, we get the browser pop-up again, like you might expect.
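Pieced together, this part of the demo looks roughly like the following sketch. The issuer is GitLab's real URL, but the audience and email address are placeholders, and the extra GitLab-specific flags passed to pinniped get kubeconfig in the demo are omitted here:

```yaml
# JWTAuthenticator telling the concierge to trust ID tokens issued by
# GitLab, with our application's client ID as the expected audience.
apiVersion: authentication.concierge.pinniped.dev/v1alpha1
kind: JWTAuthenticator
metadata:
  name: gitlab
spec:
  issuer: https://gitlab.com
  audience: <client-id-from-gitlab>   # placeholder
  claims:
    username: email                   # map the email claim to the username
```

```sh
# Generate a shareable kubeconfig, check the resulting identity, then
# grant that identity access. The email address is a placeholder.
pinniped get kubeconfig > pinniped-kubeconfig.yaml
pinniped whoami --kubeconfig pinniped-kubeconfig.yaml
kubectl create clusterrolebinding me-as-admin \
  --clusterrole=cluster-admin --user=me@example.com
```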
Next, we're going to go through the exact same process on a second cluster. This time we're using a managed cluster running in Google GKE. First, we grab the GKE admin kubeconfig and set it as our default kubeconfig for now by setting an environment variable. Next, we use exactly the same command we used before to install the concierge using kapp, and we see the installation happen just as before. If we take a close look at the CredentialIssuer on this new GKE cluster, we can see that the concierge is operating in a different mode on this cluster and isn't quite healthy yet. That's because Pinniped only supports this type of cluster via a special impersonation proxy mode, which takes a moment to initialize.

In order to safely use GitLab on this second cluster, we need to create a second OIDC client in GitLab. This is because the tokens I pass to each cluster need a unique audience claim, so that they're not replayable between the clusters. If one of your clusters is compromised by an attacker, you don't want them to be able to capture one of your tokens and use it on a different cluster. We'll go into GitLab and make a second client, just like the one we made for the kind cluster. We'll run a very similar get kubeconfig command, and now we have a GitLab-based kubeconfig for our GKE cluster. When we use the new kubeconfig with kubectl, we get a browser pop-up just as before, and we run into a similar permissions error. We'll add my email as a cluster admin again on this cluster, and now our command succeeds.

We set up this second cluster completely independently from the original kind cluster. If I clear my local session cache, we can see that when I run kubectl against the kind cluster, I initially need to log in with my browser. And if I run that same command against the GKE cluster, I have to do the browser login a second time. You can imagine that if I had 10 or 100 clusters, all this client setup and all these browser logins might become arduous, which is why we also made the Pinniped supervisor.

Next, we're going to set up the other major Pinniped component, the supervisor. We can install it using another kapp deploy command. Once it's installed, we can see some new pods running in the Pinniped supervisor namespace, and there are more new Kubernetes APIs for configuring the supervisor. I've done a bit of setup ahead of time and registered a DNS name, demo.pinniped.dev, pointing at a static IP address on this cluster. I've also pre-provisioned a TLS certificate from Let's Encrypt, which we'll use to configure secure ingress. I have a load balancer service that routes inbound HTTPS traffic on our static IP address to the supervisor pod endpoints. We'll apply that service object to the cluster and wait for it to be ready. Now we should expect that something is listening on that port, but we still get a strange TLS error. To configure the supervisor to listen on our new host, we use the FederationDomain custom resource. This tells the supervisor to act as an OIDC issuer at the given URL. We set demo.pinniped.dev and reference a secret called demo-tls that will contain the TLS certificate and private key.
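The FederationDomain described here looks roughly like this, along with the secret it references; this is a sketch based on the narration, with names following the demo:

```yaml
apiVersion: config.supervisor.pinniped.dev/v1alpha1
kind: FederationDomain
metadata:
  name: demo
  namespace: pinniped-supervisor
spec:
  # The supervisor will act as an OIDC issuer at this URL.
  issuer: https://demo.pinniped.dev
  tls:
    # A kubernetes.io/tls Secret holding the serving certificate and key.
    secretName: demo-tls
```

```sh
# Create the referenced TLS secret from the Let's Encrypt cert and key files.
kubectl create secret tls demo-tls \
  --namespace pinniped-supervisor \
  --cert=fullchain.pem --key=privkey.pem
```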
We'll create that secret using the Let's Encrypt certificate I provisioned earlier. Now we get an HTTP not-found error when we curl, but if we look instead at the OIDC discovery URL, we can see valid OIDC issuer info. We can also look at the status of our FederationDomain, which shows that it's ready.

Next, we'll configure the supervisor to authenticate users via GitLab. First, we'll need to register a new OIDC client. This client is slightly different from the ones we generated before: it's a confidential client, and the redirect URI is hosted under our new demo URL. This time our client has both a client ID and a client secret. We configure our client in the supervisor using the OIDCIdentityProvider custom resource. You can see that this time I've configured the Kubernetes username to come from the GitLab nickname field rather than the email address. Once we apply that object to the cluster, we also need to create the referenced client credential secret. Now we can take a look at the status of our new object and see that it's loaded and ready for logins.

Now that our supervisor is running, let's reconfigure our two clusters from before to use it instead of using GitLab directly. We can delete the old JWTAuthenticator resources we had on each cluster, as well as the cluster role bindings from before. Finally, we can delete the first two clients we created in GitLab, making sure not to delete the newest one we made for the supervisor. We configure the clusters to authenticate via the supervisor using the JWTAuthenticator custom resource. If we take a look at the configurations for our kind and GKE clusters, you'll notice they're very similar; the only difference is that each expects a distinct audience. If you have a large fleet of clusters, it's your responsibility to ensure that they each have a unique name.

Now that our new JWTAuthenticator configurations are applied to the clusters, we can use the pinniped get kubeconfig command to generate new supervisor-based kubeconfigs. Notice that this time we don't need to specify very many command-line options. If I run the whoami command using one of our new kubeconfigs, you can see that I get a very similar browser-based login. My authenticated user is now based on my GitLab username and includes my GitLab group memberships. Let's create a cluster role binding against my Pinniped testing group on each of our clusters. Now I can easily access both of these clusters with a single browser pop-up.
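Putting that together, the supervisor-side and cluster-side objects from this part of the demo look something like the following sketch; the secret name, audience, and credential values are placeholders:

```yaml
# Supervisor side: log users in via GitLab, mapping GitLab's "nickname"
# claim to the Kubernetes username and "groups" to the group names.
apiVersion: idp.supervisor.pinniped.dev/v1alpha1
kind: OIDCIdentityProvider
metadata:
  name: gitlab
  namespace: pinniped-supervisor
spec:
  issuer: https://gitlab.com
  claims:
    username: nickname
    groups: groups
  client:
    secretName: gitlab-client-credentials   # holds the clientID/clientSecret
---
# Cluster side (one per cluster): trust tokens minted by the supervisor.
# Each cluster gets its own unique audience so tokens can't be replayed.
apiVersion: authentication.concierge.pinniped.dev/v1alpha1
kind: JWTAuthenticator
metadata:
  name: supervisor-jwt
spec:
  issuer: https://demo.pinniped.dev
  audience: my-kind-cluster-audience       # unique per cluster
```

```sh
# The referenced client credential secret, using Pinniped's OIDC client type.
kubectl create secret generic gitlab-client-credentials \
  --namespace pinniped-supervisor \
  --type secrets.pinniped.dev/oidc-client \
  --from-literal=clientID=<id> --from-literal=clientSecret=<secret>
```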
Next, we're going to show how logins work if you don't have a local web browser. This can happen when you're using kubectl from a remote Linux machine, such as an SSH jump host. Here, I'm connected to a Linux host running a bare-bones Debian 11 install. I can download the Pinniped CLI and the Kubernetes CLI using curl and install them into the system path. I see that I have Pinniped 0.10 and kubectl 1.22, so we're ready to go. I'll just copy and paste the kubeconfig we had from before into a file on the jump host. When I use that kubeconfig from within the jump host, I get a manual prompt to log in on a different machine. You still need a browser to be involved at some point, because OIDC identity providers generally assume a browser-based flow. When I open the login URL on my host machine, I'm prompted to copy and paste the authorization code into the remote login prompt. After that, I'm authenticated just as before.

Finally, we're going to learn how the supervisor supports LDAP-based authentication. Today, our LDAP server will be JumpCloud, where I've created a directory endpoint and some test users and groups. First, let's delete the GitLab configuration we had from before. Instead, we'll add an LDAPIdentityProvider custom resource. This resource describes how to connect and authenticate to the directory, how to search for users and groups, and how to map their LDAP attributes into Kubernetes user and group names. I'll apply that object, and we'll create the secret with our LDAP bind credentials. Just like other Pinniped APIs, I can check the status of the new object to see that it's connected and ready. I'll run pinniped get kubeconfig again to get an LDAP-based kubeconfig. When I run kubectl with that kubeconfig, I now get a username and password prompt. We see a familiar permissions error, so let's take a look at my current identity. We can see that I'm now authenticated as the user penny, in the groups seals and mammals. Let's give everyone in the seals group access to our cluster. And now our command succeeds.
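A sketch of what such an LDAPIdentityProvider might look like; the host, search bases, and attribute names below are generic placeholders rather than the exact JumpCloud values from the demo:

```yaml
apiVersion: idp.supervisor.pinniped.dev/v1alpha1
kind: LDAPIdentityProvider
metadata:
  name: jumpcloud
  namespace: pinniped-supervisor
spec:
  host: ldap.example.com:636            # placeholder LDAPS endpoint
  bind:
    secretName: ldap-bind-account       # basic-auth Secret: bind DN + password
  userSearch:
    base: ou=users,dc=example,dc=com    # placeholder search base
    attributes:
      username: uid                     # LDAP attribute -> Kubernetes username
      uid: uidNumber                    # stable unique identifier attribute
  groupSearch:
    base: ou=groups,dc=example,dc=com
    attributes:
      groupName: cn                     # LDAP attribute -> Kubernetes group name
```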
Pinniped is a community project. If you're interested in getting involved, either as a user, a contributor, or a future maintainer, please reach out. We hang out in the #pinniped channel in Kubernetes Slack, we hold a public community call twice a month, and we're on Twitter at @ProjectPinniped. We'd love to hear your use cases, bug reports, feature requests, or any ideas you have for the project. And of course, you're always welcome to file a GitHub issue or start a discussion.

Next, I want to show you some of the work we have planned for the project. Most of these are in the early stages of planning, so if you have specific ideas about how you think they should work, or particular features that are important to you, let us know. This is our roadmap, which you can also find on GitHub. The first two items are well in progress and should land by the end of this month. The first is support for password-based logins to compatible OIDC identity providers. This is basically pass-through support for the resource owner password credentials grant, and it lets you, for example, use OIDC-based service accounts for service-to-service authentication, such as from a CI/CD system, if your IDP supports that flow. The next is specific support for Microsoft Active Directory. Active Directory already works with our generic LDAP support, but we've taken a shot at really streamlining the experience. Because AD has a lot of consistent defaults, we can give our APIs much better defaults, and we can handle some of the AD-specific edge cases better than in the generic LDAP case.

The next item on our list is support for multiple IDPs in the supervisor. Currently, you're only allowed to have exactly one identity provider configured, which is somewhat limiting. This is a bit of a table-stakes feature, but I'm really happy with how we've designed it to fit into our APIs. Next up, we've got wider concierge cluster support. Our goal has always been to support any Kubernetes cluster, but today we fall a bit short of that. We're planning to write some new concierge back-end strategies so that we can support OpenShift clusters, and we're planning to add a new strategy that uses the short-lived certificate support that was just added in Kubernetes 1.22, which should be a nice, portable option for modern clusters going forward.

Next up is what we've been calling identity transforms, which is probably the feature on this list that I'm most excited about. Currently, when you connect to an OIDC or LDAP identity provider, we basically just let you choose which attribute from the IDP maps to the Kubernetes username and which attribute maps to the group names. We'll still support that mode, but we're planning to give you a ton of new customization options by embedding a small scripting engine called Starlark into the supervisor. This will let you do things like add prefixes to users or groups, do custom filtering, or even make coarse-grained assertions about which users or groups are allowed to use Kubernetes.

Next, we have extended IDP support. We already support any OIDC or LDAP provider, but there are a couple of popular providers that either don't work because they're not OIDC, or don't work perfectly because they use non-standard features of OIDC. Two examples here are GitHub and Google. GitHub is not an OIDC provider, but it has a similar OAuth-based authentication protocol. Google works today, but they have a custom groups API and some tricky edge cases related to their hosted domain claim. As you can see, there are a bunch more items after that. We try to stay flexible and agile in how we prioritize features, so once again, if you have thoughts about anything you see here, or you think we're missing something that would make Pinniped a perfect fit for your use case, let us know.

Thank you all so much for attending. I also want to thank my teammate Margo for co-presenting, and thanks to the rest of the team for helping build this awesome tool. Next, we're going to be around for a bit for questions.

Hello, everyone. It's me in person again, and Margo as well. I'll answer questions from the chat; please keep dropping them, and we'll give folks some time to think of questions. I answered one question about Keycloak already in the chat. The answer there is that we haven't tested directly with Keycloak. We have a really extensive CI/CD system that we use to test Pinniped upstream, and we test integrations against Dex and against Okta as two standards-compliant OIDC providers. We don't test Keycloak, but it is also an OIDC provider, and as far as I know it follows the spec pretty well, so I think it should work just fine. If anybody is interested in getting more official support for that, we'd be happy to work on getting it into our test grid. The sticking point with more adventurous IDPs, as you get to things we haven't tested, tends to be that the basic login flow works just fine, but you may run into problems, for example, getting groups to flow through correctly, or with some of the edge cases around groups, like: what happens if somebody is in 10,000 groups? How do you handle that case?

The next question in the chat is about comparing what we built to Dex, and there is certainly some overlap; we're both interacting with the same technology space here, OIDC. Dex has a somewhat larger and somewhat different goal. Dex wants to be a generic OIDC gateway that connects together all kinds of different identity protocols, including upstream OIDC but also upstream things like SAML and LDAP, and then downstream, on the client side, it wants to serve as a generic OIDC platform. That's really cool; we love Dex, we use Dex. Where Dex is different from Pinniped is that we have focused more on the Kubernetes integration side. A couple of aspects of that: one is that all of our configuration APIs and our installation process are meant to be driven via the Kubernetes API.
So if you have, again, a GitOps pipeline or some sort of declarative system that you're using to manage your Kubernetes configuration, your Pinniped config is just another set of Kubernetes YAML objects. We've also focused on the client side of the experience: the CLI, the login flow you have locally, and the integration with kubectl. With Dex, you don't get that from Dex itself. There are tools that surround Dex, and you can build a workflow similar to what we've built; one of those tools is Gangway. But again, we focused on providing an out-of-the-box experience that just works, and it is somewhat opinionated. It doesn't do everything you might want it to do in every possible scenario. I think Dex is probably a more generic tool that you can build all kinds of interesting things with; Pinniped is really focused on that Kubernetes integration.

Other questions? I'll put another shout-out there: if anybody has questions that you don't think of now but do think of later, I just posted a link to the #pinniped channel in the Kubernetes community Slack. That's where the team hangs out, and we're happy to chat about anything.

Two more questions. First question: if the OIDC provider is running in the same cluster, can we have flexibility in the CRD to support non-HTTPS URLs? That's a good question. This is one of those things where we've taken a somewhat hard line right now: we only support secure configurations. That means HTTPS everywhere, and for LDAP it means we don't support any insecure LDAP connection formats. If you have a compelling use case, definitely file an issue or stop by Slack and we can chat about it. In a local cluster scenario like that, it might be totally reasonable to assume the IP network is secure and you don't need TLS. We just wanted to make sure that all the code we ship has safe defaults and is hard to misuse in a way you don't realize is insecure.

Next question: would you use it in production at this stage? Yes. We ship Pinniped as an underlying component in several commercial VMware products. It's not the star of the show, and you probably would never notice it's there, but it's behind the scenes powering features in our products. So in Tanzu Kubernetes Grid and Tanzu Mission Control, various pieces of it are there and working. We trust it enough to rely on it, and I think it's ready for production use. We're also very serious about software quality: we have very good unit test coverage, an excellent integration test suite, and, like I said before, a large test grid of Kubernetes versions and different IDPs running through all of our tests on every commit.

Okay, I think I got all the questions there. Ah, Anwar mentions that for the non-HTTPS use case, there's a service mesh providing encryption. That makes sense. Margo, I didn't mean to steal the show from you, too, if you have anything to add. Okay, that seems like a good natural stopping point. Margo and I will be available in Slack immediately following this, and also in the future. So thanks, everyone.

Awesome. Thank you both so much for your presentation today. Thank you everyone for joining us, and we will see you all on Slack. The recording will be up later today. Thanks so much, everybody. We'll see you next time.