Greetings and welcome to our talk on the Secrets Store CSI Driver: keeping secrets secret. I'm Anish Ramasekar, a software engineer at Microsoft in Seattle. I work on security projects in the container upstream team. I'm one of the maintainers of the Secrets Store CSI Driver and the Azure Key Vault provider for the Secrets Store CSI Driver. And I'm Tommy Murphy. I'm a software engineer at Google in New York City. I'm working on the Secret Manager product, and I'm also one of the maintainers of the Secrets Store CSI Driver and the Google-specific provider. So why are we talking about external secret storage when Kubernetes has a perfectly good one built in? First, Kubernetes secrets may not meet your data encryption requirements, although there are KMS providers to enable data encryption. Depending on your organization, you may have already standardized on a third-party secrets solution that you're looking to use. Your external secrets solution may have a secret rotation story or integrations that you're looking to leverage. And finally, if your organization has already invested in auditing and alerting on a third-party secrets system, you may not want to duplicate that effort for your Kubernetes secrets. So what do you do? There are a few options for consuming external secrets. First, you might look into modifying your application to fetch the secrets from the external API directly, using the SDK provided by your provider. This may not be possible, though, depending on your deployment: you may not have the code to edit, or it may be prohibitively expensive to implement these changes. And if you're targeting deployments against multiple secret providers, this effort would need to be duplicated for each secret system. You could also just copy the secrets into Kubernetes secrets, perhaps with a controller. This is portable and won't require application changes.
But you may lose out on some of the benefits of your external secret store, like data-at-rest encryption, and the identity that's used to access the secrets moves from the workload to the controller that is duplicating the secrets. You could also use a sidecar to fetch and write secrets. The sidecar may be injected using a mutating webhook; here, the pod identity would be used to fetch the secrets. But the sidecar and the webhook may add operational complexity that you're not prepared for. Finally, there's the Secrets Store CSI Driver, which uses the Container Storage Interface specification, and that's what we're going to talk about today. We'll cover some of the features that we think make it a good fit for consuming external secrets on Kubernetes. Originally developed at Deis Labs, this storage driver is now being built and maintained as a SIG Auth subproject. The driver allows Kubernetes to mount multiple secrets, keys, and certs stored in enterprise-grade external secret stores into pods as a volume. Once the volume is attached, the data in it is mounted into the container's temporary file system. As a storage driver, it provides a familiar file system mount experience to your compute workloads. It's also pluggable and supports multiple external secret providers without modifying your application or changing the pod YAML. It can load new values of secrets throughout your pod's lifecycle. You can even sync those secrets to Kubernetes secrets for compatibility with existing deployments. Finally, it supports both Linux and Windows. You can use this driver if your cluster is on Kubernetes 1.16 or greater; if you have Windows nodes, you'll need to be on 1.18 or greater. The currently supported providers are Azure Key Vault, Google Secret Manager, and HashiCorp Vault. And in the future, we're looking forward to an AWS Secrets Manager provider. So how does the Secrets Store CSI Driver work? The driver is installed as a DaemonSet onto each node of the cluster.
Additionally, there needs to be a provider-specific DaemonSet deployed alongside the driver. When a pod is created through the Kubernetes API, it's scheduled onto a node. The kubelet process on the node looks at the pod spec and sees there's a volume mount request. The kubelet issues an RPC to the CSI driver to mount the volume. The CSI driver creates and mounts a tmpfs into the pod. The CSI driver then issues a request to the provider. The provider talks to the external secret store to fetch the secrets and write them to the pod volume as files. At this point, the volume is successfully mounted and the pod starts running. So let's dig into some YAML. Here's an example pod. This pod mounts a volume at /var/secrets. This volume is a CSI volume instead of a secrets volume. The driver name in the CSI volume tells the kubelet to use the Secrets Store driver for this volume. The volume also references a SecretProviderClass, app-secrets. So what is a SecretProviderClass? The SecretProviderClass is a namespaced Kubernetes custom resource that is used to provide driver configuration and provider-specific parameters to the Secrets Store CSI Driver. We've looked at the pod YAML and the SecretProviderClass YAML, but how do you know which versions of secrets are being used by your pod? The SecretProviderClassPodStatus is a namespaced Kubernetes custom resource that is created by the CSI driver to track the binding between a pod and a SecretProviderClass. This resource contains details about the current object versions that have been loaded into the pod mount. Let's look at the demo now. First, I'm going to show you what's on the cluster. We have a kind cluster with a single node that's running Kubernetes version 1.20.2. To show how the Secrets Store CSI Driver enables pod portability across different external secret store providers, we're going to start by deploying two applications written against specific third-party secret store APIs.
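The pod spec described above might look something like this sketch; the image and container names are illustrative, but the driver name is the one the Secrets Store CSI Driver registers with the kubelet:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-app                        # illustrative name
spec:
  containers:
    - name: app
      image: registry.example/app:latest   # illustrative image
      volumeMounts:
        - name: secrets-store
          mountPath: /var/secrets          # secrets appear here as files
          readOnly: true
  volumes:
    - name: secrets-store
      csi:                                 # a CSI volume instead of a secrets volume
        driver: secrets-store.csi.k8s.io   # tells the kubelet to use the Secrets Store driver
        readOnly: true
        volumeAttributes:
          secretProviderClass: app-secrets # references the SecretProviderClass
```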
And here we're going to be using Azure and GCP. First, we're going to see what the pod YAML looks like for these applications using the external secret store APIs to access the secret. This is a sample YAML for a pod using the Azure Key Vault API to get the secret from Azure Key Vault. As you can see here, the Key Vault name and the Key Vault secret name are provided as arguments for the container, and the credentials are obtained from secrets-store-creds, a Kubernetes secret that I've preconfigured. This is the YAML for a pod using the Google Secret Manager API to get the secret from Google Secret Manager. The secret name here is app-secret, and the version that's being fetched is latest. Now let's go ahead and deploy these pods in their respective namespaces. So now we've deployed the pods. Let's check if the pods are running. Now that the pods are running, we can query the logs for the pods to see the secret that's being used by each pod. As we can see here, the Azure Key Vault pod is logging the secret from Azure Key Vault and the Google Secret Manager pod is logging the secret from Google Secret Manager. Now, instead of having a different application implementation for each external secret store, we have this application that was written to consume the secret from the file system instead. Using the Secrets Store CSI Driver, we're going to show how this application can get secrets from either secret backend. Before we jump in, if you look at the code here, we can see that this pod is trying to get the secret from the file system and log it. As a first step for the demo, I'm going to deploy the Secrets Store CSI Driver using Helm in the kube-system namespace. It's recommended to use a separate namespace for the CSI driver pods, other than the ones that are used for the workloads. Now let's take a look at what the Helm chart deployed. We have a CSI driver DaemonSet here.
The CSI driver pods need to run on all the nodes so that the kubelet process on each node can talk to the driver to mount the volume. Now that the driver has been installed, let's go ahead and deploy the providers in the kube-system namespace. First, we're going to be deploying the Azure Key Vault provider, and next, we're going to be deploying the GCP provider plugin. Let's check to make sure that the provider and the driver pods are running. Now that the CSI driver and the provider pods have been installed and are running, we're going to take a look at the pod YAML that's being used for the application. As you can see here, the pod specifies the volume mount, the name for the volume mount, secrets-store-inline, and the mount path within the container, which is going to be /mnt/secrets-store. In the volumes, the same name is being used; the volume type is CSI, and we can see the driver that's being used is the Secrets Store CSI Driver. And here in the volume attributes, we have the secretProviderClass, csi-spc, that we will take a look at next. So when we take a look at the SecretProviderClass, here we can see the provider that's being used is Azure. In the parameters, we can see the Key Vault name is kubecon-eu-2021, and the objects field is an array where you can define multiple different objects. The object type here is secret and the name is app-secret. The possible values for object type are secret, key, and cert. Now let's also take a look at the SecretProviderClass YAML for the GCP plugin. We can see here that the provider is gcp, and in the parameters, we're fetching secrets: the secret name is app-secret and the version that's being fetched is latest. So next I'm going to apply the YAML to create the SecretProviderClass in the Azure and the GCP namespaces. Now that we've configured the SecretProviderClass, the next thing is to deploy the same pod YAML that we looked at in both the Azure and the GCP namespaces.
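An Azure SecretProviderClass along the lines just described might look like this sketch; tenant and identity settings are omitted for brevity, and the names follow the demo narration:

```yaml
apiVersion: secrets-store.csi.x-k8s.io/v1alpha1
kind: SecretProviderClass
metadata:
  name: csi-spc
spec:
  provider: azure
  parameters:
    keyvaultName: "kubecon-eu-2021"   # the Key Vault to fetch from
    objects: |
      array:
        - |
          objectName: app-secret
          objectType: secret          # possible types: secret, key, cert
```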
So when these pods get scheduled onto a node, the kubelet process on the node will see the volume definition in the pod spec and invoke the CSI driver to mount the volume. The CSI driver will mount the tmpfs and make an RPC call to the provider to fetch and write the secrets to the file system. Let's check to see if the pods are running. Now that the pods are running, let's first check the pod mount to see if the file exists. As we can see here in the volume mount path defined, we can see the app-secret file, and let's try to log the file content. As we can see, this secret is from Azure Key Vault. Now let's check the logs for the pods to see what the content is. This is for the CSI pod in the Azure namespace, which prints out the secret from Azure Key Vault. And let's look at the logs for the CSI pod in the GCP namespace. We can see that it was able to fetch the secret from Google Secret Manager. And now, as you can see, the same application is working against either secret store. Now that you've seen the driver in action, let's dig into some more advanced features. The CSI driver provides an optional feature to sync the mounted content from the pod as a Kubernetes secret. A common usage of this feature is to store TLS certificates in an external secret store and have the driver sync them to a Kubernetes TLS secret for use by an Ingress controller. In addition, the synced Kubernetes secret can also be referenced in the pod spec to set the secret as an environment variable. Use the optional secretObjects field in the SecretProviderClass to define the desired state of the synced Kubernetes secret object. It's generally accepted best practice to periodically rotate secrets. If your external secret store has an automatic rotation feature, you may be interested in how your workload can get the new values of the secret whenever it changes.
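Rotation is typically switched on when installing the driver. Assuming the Helm chart's documented values (check the exact flag names for your chart version), a values override might look like this sketch:

```yaml
# Hedged sketch of Helm values for the secrets-store-csi-driver chart;
# verify these names against the chart documentation for your version.
enableSecretRotation: true    # periodically re-fetch mounted secrets from the provider
rotationPollInterval: 2m      # how often the driver polls for new secret versions
```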
The driver supports automatic rotation by periodically reissuing RPC calls to the provider to refresh the contents of the mount. The driver will emit a Kubernetes event and re-sync the new values of any synced Kubernetes secrets. Now let's take a look. Let's do another demo for the sync secret feature. First, we're going to see how the Secrets Store CSI Driver can sync the mounted content as a Kubernetes secret. We're going to enable an application to work with the NGINX Ingress Controller using TLS certificates stored in Key Vault. For the demo, I've already created a certificate for the common name localhost using the step CLI. Let's inspect the certificate using OpenSSL. As we can see here, the common name is localhost, and in terms of validity, the certificate is valid until April 6th. Now I'm going to upload the certificate to Azure Key Vault. Now that the certificate has been uploaded, let's go to the next step, which is to deploy the Ingress controller on the cluster. This installs the Ingress controller and all the other required manifests. Now let's take a look at a sample SecretProviderClass that's going to be used for syncing as a Kubernetes secret. As you can see here, the name is azure-tls. The provider that's being used is Azure, and the parameters field here matches what we saw in the previous demo. The object that's being fetched is the secret that we just uploaded to Azure Key Vault. The secretObjects field in the SecretProviderClass is what tells the CSI driver that the mounted content also needs to be mirrored as a Kubernetes secret. Here the secretName is the desired name for the synced secret, and type is what the secret type needs to be. As we can see, tls.key and tls.crt are mandatory fields for a Kubernetes secret of type TLS. Now we're going to go ahead and deploy the SecretProviderClass in the cluster. Now let's take a look at the sample pod, service, and Ingress definitions for the demo.
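A SecretProviderClass using secretObjects as just described might look like this sketch; the Azure parameters are abbreviated, and the object name is illustrative:

```yaml
apiVersion: secrets-store.csi.x-k8s.io/v1alpha1
kind: SecretProviderClass
metadata:
  name: azure-tls
spec:
  provider: azure
  secretObjects:                       # mirror the mounted content as a Kubernetes secret
    - secretName: ingress-tls-csi      # desired name of the synced secret
      type: kubernetes.io/tls         # tls.key and tls.crt are mandatory for this type
      data:
        - objectName: tls-cert         # illustrative; must match an entry under parameters.objects
          key: tls.key
        - objectName: tls-cert
          key: tls.crt
  parameters:
    keyvaultName: "kubecon-eu-2021"    # abbreviated; tenant and auth settings omitted
    objects: |
      array:
        - |
          objectName: tls-cert
          objectType: secret
```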
So here we have a pod called foo-app, with the volume mounted at /mnt/secrets-store, and the volume being referenced here is for the CSI driver. The SecretProviderClass here, azure-tls, matches the name of the SecretProviderClass we saw in the previous step. As part of this manifest, we're also creating the service backing it. And when we look at the Ingress specification, we can see that we're using TLS for localhost, and the secret name here, ingress-tls-csi, matches the secret name that's defined in the SecretProviderClass. So let's go ahead and deploy all the required manifests. Now let's check to ensure that the pods are running. Now when we curl the endpoint, we can see the certificate that's being used is the one synced by the Secrets Store CSI Driver as a Kubernetes secret, with an expiry date of April 6. Let's verify that. When we look at the output here, we can see that the subject matches the certificate that we uploaded, and the expiry date here is April 6, which matches the certificate that we uploaded. As we talked about earlier in the presentation, the SecretProviderClassPodStatus is used to show the current version of the object that's being used by the pod. So now let's check the SecretProviderClassPodStatus for this particular pod to see what version is being used. As we can see here, the status is mounted, and the objects match the objects that were defined in the SecretProviderClass. The version that's being used by the pod currently starts with d7f4. There's also other additional metadata that can help cluster operators, like the pod name and the mapping to the SecretProviderClass name. Now that we've seen how syncing a Kubernetes secret works, let's also check out how rotating the secret would work. For the purpose of the demo, I've already generated another certificate with April 14 as the expiry date so we can see the changes.
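The Ingress wiring described above might look like this sketch; the service name and port are illustrative, while the TLS secret name is the one the driver syncs:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-tls
spec:
  tls:
    - hosts:
        - localhost
      secretName: ingress-tls-csi   # the Kubernetes secret synced by the driver
  rules:
    - host: localhost
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: foo-app-svc   # illustrative backing service name
                port:
                  number: 80
```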
So let's inspect the certificate before we upload it to Azure Key Vault. As we can see here, the subject is localhost and the validity for this one is April 14. Now I'm going to upload this new cert to Key Vault. The auto-rotation feature in the Secrets Store CSI Driver should enable us to fetch the newly rotated certificate from Azure Key Vault and update the mount and the synced Kubernetes secret. So now, when we curl the same endpoint, we expect to see the certificate returned with an expiry of April 14 instead of April 6. As we can see here, the expiry date now is April 14, which indicates that the certificate that was rotated in the external secret store has successfully been updated in the Kubernetes secret on the cluster. Now let's also check the SecretProviderClassPodStatus again to see what version of the secret is currently being used. As we can see here, the version before started with d7f4, and now the version has been updated to that of the newly rotated secret. As part of the demo, we'll also try to rotate the secrets that were used by the application in the portability demo, to see if it can pick up the latest value. Before we rotate the secret in Key Vault or Google Secret Manager, let's start tailing the logs to see the changes happen. So now I'm tailing the logs from the CSI pod in the Azure namespace and also the CSI pod in the GCP namespace. The application implements a file watcher which detects any changes in the file system and automatically logs the new secret. So now I'm going to go ahead and rotate the secret in Key Vault to the new value. As you can see, the CSI pod in the Azure namespace now has the new value. So now let's go ahead and rotate the value in Google Secret Manager as well. Okay, just did that. And there you go.
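The file-watcher behavior the demo application relies on can be approximated with a simple polling loop. This is an illustrative Python sketch, not the actual demo code; the demo app's implementation may differ:

```python
import time


def watch_secret(path, interval=1.0):
    """Yield the contents of a mounted secret file each time it changes.

    The Secrets Store CSI Driver rewrites the file in the pod's mount when
    the external secret rotates; polling the content picks up the new value.
    """
    last = None
    while True:
        try:
            with open(path) as f:
                data = f.read()
        except FileNotFoundError:
            data = None  # mount may not be ready yet
        if data is not None and data != last:
            last = data
            yield data
        time.sleep(interval)
```

In the demo, each newly yielded value is simply logged, which is why the rotated secret shows up in the pod logs shortly after the rotation.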
We can see that the Secrets Store CSI Driver has updated the content in the mount for the pods, and they were able to pick up the latest changes using the file watcher. So what does the future look like? We are working towards a stable release for the driver. This includes increasing test coverage and finalizing the driver-provider interface. The CNCF is commissioning a third-party security review of the project, and we're looking forward to more community involvement. If you'd like to get involved, you can join the #csi-secrets-store channel in the Kubernetes Slack. We also have a mailing list for notifications of new releases and security announcements. We use GitHub issues to track bugs and feature requests, or to answer questions asynchronously. And finally, we hold a bi-weekly community meeting. Here are some of the resources from this presentation that you may like: links to documentation for the CSI driver, links to documentation for the individual providers, and a link to the examples from this presentation. Thank you for watching, and remember to keep the secrets secret.