Hi, everybody. Thanks for joining my talk. It's great to be able to talk to you about the cert-manager project. This year it's virtual, but hopefully in the not-too-distant future we'll all be able to meet up again in person. I'm going to be talking to you today about all the various ways in which you can use cert-manager beyond Ingress. I'm Matt Bates, from Jetstack. We're the original creators of the project, and together with now 280 contributors in the community, we've got cert-manager to where it is today. cert-manager is effectively a Kubernetes add-on or extension: it extends the Kubernetes API and adds support for certificates and certificate authorities. It means you can manage X.509 TLS certificates across the complete lifecycle, issuance and renewal, and use those certificates in your applications. It has wide adoption and lots of contribution from the community, and as of last year we donated the project to the CNCF, so it's now part of the sandbox. It's got integrations with a variety of public and private PKI providers, both in the core of the project and via so-called external issuers, and we're going to look at that in the talk today. Briefly, how does cert-manager work? As I said, cert-manager is an extension or add-on. It provides a number of additional resources, so-called custom resources (CRDs), on top of the Kubernetes API. These represent certificates and certificate authorities; we refer to the latter as issuers. This provides first-class support for those concepts in the Kubernetes API, and you get all the advantages of being able to manage those resources declaratively, much like you manage pods and deployments. cert-manager has a controller that manages the lifecycle of those resources and, importantly, provides the automation: automating the fetching of certificates and, importantly, their renewal as well.
cert-manager is most commonly used to secure ingresses, and you can see here it's really quick and easy to add a TLS certificate to an Ingress, simply by adding, in this case, the annotation, which references where you want the certificate to be obtained from. Here it references the Let's Encrypt staging issuer that I have in my cluster, and it also requires you to specify the secret name, so the secret will store the certificate that's obtained from Let's Encrypt for my Ingress. Really, really simple. How does this work under the hood? Well, cert-manager watches for ingresses using its ingress-shim component, part of the cert-manager controller. It's then able to create a Certificate, which is backed by a point-in-time CertificateRequest encoding a certificate signing request (CSR), which it then submits to your CA of choice. In my particular case, that was the Let's Encrypt staging endpoint, and that's the point at which cert-manager automates the ACME flow. There are additional resources you can see there, the Order and Challenge resources, but effectively all of that is automated for you. It results in a signed certificate from the certificate authority, which is placed into a secret and then consumed by your application. In the case of an Ingress, that's most likely to be an ingress controller, something like NGINX for instance, which is then able to serve your application securely. You can also use these resources more directly. The ingress-shim automatically provides that automation based on annotations you add to the resource, but you can indeed tap into the resources that cert-manager provides directly, that is, the Certificate and the CertificateRequest. The Certificate is effectively a more human-readable representation of the certificate, and that's the one you can view and manage via the API, for instance with kubectl.
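A sketch of what that annotated Ingress might look like; the host, issuer and secret names here are illustrative stand-ins, not taken from the talk:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
  annotations:
    # Tells ingress-shim which issuer to obtain the certificate from
    cert-manager.io/cluster-issuer: letsencrypt-staging
spec:
  tls:
  - hosts:
    - app.example.com
    # cert-manager stores the signed certificate and key in this secret
    secretName: my-app-tls
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-app
            port:
              number: 80
```

From this alone, ingress-shim creates a Certificate for `app.example.com` and keeps the `my-app-tls` secret up to date as the certificate is renewed.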
The CertificateRequest is a point-in-time request for an X.509 certificate from an issuer, and this is typically consumed by machines: by controllers or other systems. You can see the cert-manager controller there is responsible for reconciling these Certificate and CertificateRequest resources. But of course, there are lots of other certificates to manage in Kubernetes; it's not just the ingress certificates. There are typically many certificates across a cluster that you may wish to automate using cert-manager, and I've highlighted some of them here; we're going to talk about a number of these different use cases in this talk. Ingress, as I mentioned; pod-to-pod, which might be with or indeed without a service mesh, and we're going to look at both in this talk today. You may also wish to secure a cluster, both the control plane and the data plane. There are also a number of webhooks, used for admission, that you may wish to secure. We'll look at how cert-manager can be used in almost all of these cases. We often get asked in the community how you can use cert-manager to secure pod-to-pod communications, that kind of east-west traffic in a cluster, without a service mesh. This is particularly useful if what you've got is a really small handful of microservices: you need certificates, you need them automated, and you want to be able to consume them in your application, and this may be preferable to the operational complexity of a full mesh. Here's an example of how you can use a cert-manager Certificate resource directly. You can keep this in source control, of course, and apply it via CI/CD to your cluster, and cert-manager will automate the issuance of the certificate and put it into a secret that you can then consume in an application, in a pod.
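A minimal Certificate manifest along those lines might look like this; the service name, duration and issuer name are assumptions for illustration:

```yaml
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: ping
  namespace: default
spec:
  # Short-lived certificate; cert-manager renews it automatically
  duration: 1h
  renewBefore: 15m
  dnsNames:
  - ping.default.svc.cluster.local
  # The signed certificate, private key and CA end up in this secret
  secretName: ping-tls
  issuerRef:
    # A private, in-cluster CA issuer rather than a public CA
    name: my-private-ca
    kind: Issuer
```

The resulting `ping-tls` secret can then be mounted into a pod as ordinary files.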
A couple of things to point out in this particular example: it's going to be a very short-lived certificate; we're going to put it into a secret that I can specify, much like we did with the Ingress; and I can then consume that secret in my application. It really is just a set of files in a volume mount that I can put into my application and start serving TLS. We've got some DNS SANs here, and I'm also making reference to my CA. It's just going to be a private CA; this would obviously not be a public CA, because I just want something local to the cluster, and we're going to see how you can set this up in some slides to come. Here's a bit more YAML that shows how you can take the secret and make it available via a volume and a volume mount. The files will be available to this ping service, or this ping-pong service, at the mount path /etc/ssl/private: the key pair and the CA certificate as well. It's a really simple way of using the underlying resources programmatically, and we have a number of end users that do exactly this at scale. You may also want to have certificates obtained from a private CA rather than a public CA such as Let's Encrypt: if you're trying to secure pods that are communicating within the cluster, you may wish to use a private CA for that purpose. There are actually two issuers built into cert-manager that are really useful and convenient here: the so-called self-signed issuer and the CA issuer. You can see here I've taken a number of snippets of YAML to show how you can combine these resources to create what is effectively a self-signed CA. On the right-hand side, you can see there is a Certificate called my-ca; I've used the isCA flag to denote that this is going to be a CA certificate.
It's got a 90-day duration specified, the secret, the common name and subject; there are a number of other properties you can specify, and an issuerRef, in this case referring to the self-signed issuer that I created on the left-hand side. This is an out-of-the-box, relatively simple private CA that you can configure using cert-manager, and it's a little-known feature; people don't know you can do this. You can also, of course, plug in a more robust and more secure private PKI if you wish, and we've got a number of options here, both built into the core of the project and external to it, so-called external issuers. You can use Vault; you can use Smallstep, which has an ACME CA server. You can also use the cloud providers: we've got support for Google's Certificate Authority Service, and there's also an external issuer for AWS's Private CA. And if you're in more of an enterprise environment, you can also integrate with Venafi; there's an issuer for TPP. So in that previous example, we looked at how you could use the Certificate resource in cert-manager to obtain a serving certificate. But how about if you wanted to use cert-manager for inter-pod, pod-to-pod mutual TLS, obtaining both a serving certificate and a client certificate? I've got a really simple example here of a ping service and a pong service, and we open-sourced this lab; you can go and find it if you follow the link on the slide. It hooks up this ping-pong service, a simple Go binary, that you can also use in some of your own examples. What we have in this example is the two services: as I said, ping contacts pong and authenticates itself as the client; pong replies, securing the connection with its serving certificate. And it actually replies back to the browser, so you can see the contents of the pong certificate.
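The self-signed-CA combination shown a couple of slides back can be sketched as three resources chained together; the my-ca name, isCA flag and 90-day duration follow the talk, while the other names are illustrative:

```yaml
# 1. Bootstrap issuer that signs certificates with their own keys
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: selfsigned
spec:
  selfSigned: {}
---
# 2. The CA certificate itself, marked isCA and issued by the
#    self-signed issuer above
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: my-ca
spec:
  isCA: true
  duration: 2160h   # 90 days
  commonName: my-ca
  secretName: my-ca-secret
  issuerRef:
    name: selfsigned
    kind: Issuer
---
# 3. A CA issuer that signs leaf certificates with the generated key pair
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: my-private-ca
spec:
  ca:
    secretName: my-ca-secret
```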
And importantly, both ping and pong will only trust certificates that are signed by the same CA. So how does this look in YAML? Well, as before, we use the Certificate resource, and we've now got two certificates, one for ping and one for pong. You can see in the Certificate resource we're able to specify the key usages that get passed down when creating the certificate, so you can set client auth or server auth appropriately. Both of the certificates, ping and pong, use the same issuer, pointing at that private CA that I created, or showed, on the previous slide. Using the Certificate resource in this way, you can configure it to create certificates for things like, for instance, authentication to a database. We've got a blog post that one of our team put together in which we talk through how you can use cert-manager to create certificates for client authentication with MySQL, and we've seen lots of examples in the community where users have used it for exactly this type of purpose. Rather than manually having to create and manage those Certificate resources for applications, in the cert-manager project we have developed a CSI driver to make it even more seamless. The neat thing about the CSI driver is that private keys can remain node-local, rather than being put into a Kubernetes secret as is the case with some of the integrations we've already seen. In this particular case, you can keep the private key on the node where it's generated by the CSI driver. It means you can provide a unique key and certificate for each of your applications.
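Tying the key usages back to the ping/pong example: the delta from the earlier serving-certificate sketch is the usages list, since mutual TLS needs both server auth and client auth. Roughly (names assumed):

```yaml
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: ping
spec:
  secretName: ping-tls
  dnsNames:
  - ping.default.svc.cluster.local
  usages:
  - server auth   # ping serves TLS to incoming connections
  - client auth   # ...and authenticates itself when calling pong
  issuerRef:
    # Both ping and pong reference the same private CA issuer,
    # so each only trusts certificates signed by that CA
    name: my-private-ca
    kind: Issuer
```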
So if you're using a ReplicaSet as part of a Deployment, with many pods, each of those pods can have its own unique identity, and that X.509 identity can be obtained at the point of application runtime. It also means that these certificates are renewed: much like in the other use cases, cert-manager will know when to renew the certificate and provides a means to do that for you. This just makes it really super simple to get those certificates to each of your applications. How does the CSI driver work? Well, the CSI driver resides on each of the nodes in the cluster; it's deployed as a DaemonSet. Just stepping through how it works: first of all, the pod is scheduled to a node. The kubelet, of course, is responsible for coordinating the runtime; it works with the node driver, the CSI driver, by calling NodePublishVolume. At that point the driver kicks in: it generates a private key and it generates a certificate request, a cert-manager CertificateRequest, which effectively encodes a CSR. That CertificateRequest is submitted to the API server on your behalf. The request is then reconciled and a certificate is obtained from your CA of choice. That certificate, together with the private key, is written to the node's file system and then bind-mounted into each of the relevant pods. The driver keeps track of all of the certificates and is also responsible for renewals. And if and when the pod is terminated, the kubelet will call the driver to make sure that the certificate and the key are deleted. So this is a really seamless, automated way of getting those identities. You can also use cert-manager to secure Kubernetes webhooks.
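Coming back to the CSI driver for a moment, consuming it from a pod looks roughly like this; the volume attributes follow the cert-manager csi-driver convention, while the image and issuer names are placeholders:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  containers:
  - name: my-app
    image: example.com/app:latest   # placeholder image
    volumeMounts:
    - name: tls
      mountPath: /tls
      readOnly: true
  volumes:
  - name: tls
    csi:
      # The cert-manager CSI driver generates a private key on the node,
      # requests a certificate, and bind-mounts both into the pod
      driver: csi.cert-manager.io
      readOnly: true
      volumeAttributes:
        csi.cert-manager.io/issuer-name: my-private-ca
        csi.cert-manager.io/dns-names: my-app.default.svc.cluster.local
```

Because the key never leaves the node's file system, nothing sensitive is stored in a Kubernetes secret.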
Webhooks are used for dynamic admission control, for a variety of purposes: for instance, mutating resources or validating resources as they are admitted to the API server. They're typically used for applying default values or enforcing policy, using the likes of OPA or Kyverno, and also for ensuring that the resources submitted are semantically valid. In fact, cert-manager uses this itself, and you'll notice there's a component called the cert-manager webhook that does exactly this. Now, if you're using this as an extension point, you want to be sure that when the API server submits resources to those webhooks, the connection is secure and you can be sure of the destination, so that nothing nefarious is happening, with resources being mutated or validated in a way you would not expect. So you can use cert-manager to secure those endpoints, to provide the certificates for them, and there are a few annotations, shall I say, with which you can do this. One is cert-manager.io/inject-ca-from, where you can reference a Certificate. You can also inject from a secret, and you can also inject the API server's CA certificate as well. This uses a component in cert-manager called the CA injector, which is responsible for watching for these annotations and then providing the automation for you. One example of where this is used is in Cluster API. Here's a command that I ran recently when spinning up a local Cluster API cluster, using clusterctl as you can see. One of the very first things it does is fetch the various providers and then install cert-manager. That might look a little curious, but if you dig a little deeper once Cluster API is up and working, and you run kubectl get certificates...
...and specifically look in the capi-webhook-system namespace, you will see there are a number of certificates used as serving certificates for those webhooks. Service meshes are becoming increasingly popular, and a number of users in the community are asking how they can integrate cert-manager, and all its various CA integrations, into the mesh, so that the workload, control plane and data plane certificates are provided from their provider of choice. Now, if you think about what a service mesh provides, it's a really, really rich capability: consistent observability, security and various reliability features built into the platform. Rather than developers having to build this themselves, it's transparent and provided for them, which can be highly convenient. Rather than services contacting each other directly, I've got an example here of service A and service B: these microservices communicate via a proxy, and the proxy can be dynamically programmed based on resources that reside in the control plane. What's great about this is that you can provide all of that capability in the proxy rather than having it implemented in the application itself. And quite often these meshes, most of which are based on Envoy, have the ability to automatically provide mTLS between proxies in the mesh, so you get transparent mTLS between all of your services, which is highly convenient. A number of service meshes out there in open source have integrations with cert-manager. Linkerd, a service mesh that's in the CNCF, has had the ability to integrate with cert-manager for some time. You can provide it a trust anchor and an issuer certificate and private key, and these can all be automated using cert-manager.
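For Linkerd, the issuer certificate that cert-manager keeps rotated can be expressed much like any other Certificate; this sketch follows the pattern in the Linkerd documentation, with the durations and the trust-anchor issuer name being assumptions:

```yaml
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: linkerd-identity-issuer
  namespace: linkerd
spec:
  secretName: linkerd-identity-issuer
  duration: 48h
  renewBefore: 25h
  # Linkerd's identity service uses this as an intermediate CA
  # to mint the short-lived leaf certificates for its proxies
  isCA: true
  commonName: identity.linkerd.cluster.local
  usages:
  - cert sign
  - crl sign
  issuerRef:
    # An issuer backed by the mesh trust anchor (assumed to exist)
    name: linkerd-trust-anchor
    kind: Issuer
```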
Thereafter, Linkerd has a component built in that's responsible for automating the provisioning of the leaf certificates, and the renewal of those leaf certificates, with relatively short lifetimes. So you get full automation of both the bootstrap certificates, the trust anchor, and the leaf certificates as well; it's a really good combination putting the two together. Istio has had the ability to plug custom CA certificates into its Citadel for some time, and you can reference a CA key pair using a Kubernetes secret. But we've been working with the Istio folks to more fully integrate cert-manager, so that you can actually have the workload certificates obtained from cert-manager itself. Of course that's advantageous because you can then start to take advantage of all the different issuers that cert-manager has in its community. What this involves is effectively replacing Citadel, the registration authority, with cert-manager, which can create the certificate requests and have those fulfilled by the different providers. There's also some integration, much more experimental and more recently released, that enables Istio to use the Kubernetes certificates API, and we're working with them on that integration as well. And finally, the third example I have here is the Open Service Mesh project, OSM, which we've been working with to plug in cert-manager. They've had pluggable certificate management from the get-go: they've got their own built-in component, plus support for Vault and Azure Key Vault. But if you want to use cert-manager, there is an integration that enables you to plug it in, and that component will be responsible for creating and managing certificate requests for the workloads that run within the mesh.
So, a variety of different integrations that enable you to plug in cert-manager and, in turn, all of the integrations it has with different issuers, be they built into the core of the project or external to it. cert-manager can also be used to secure the Kubernetes control plane and the nodes that form a cluster. Now, anyone that's tried setting up PKI in Kubernetes will know that there are a lot of certs, client certs and serving certs of all different flavors, and it can get quite complex trying to manage them; I know from doing this in the very early days of provisioning a Kubernetes cluster. Well, now things have changed: we have the magic that is kubeadm, and it's here to the rescue, really. It provides auto-generated, auto-renewed control plane certs. By default, this uses a self-signed CA, so if you're happy to accept that it's not rooted in your existing chain of trust, you can really just use this as is. But if you're in an enterprise where you need those certificates to be rooted in some kind of existing PKI system, there are a couple of ways of doing this. One way is using the external CA mode. This enables kubeadm to generate the private keys and the CSRs, and then you fulfill the certificate signing requests yourself. We built a Jetstack integration between kubeadm and Venafi just to demonstrate that you can do this, and since we did, that command, kubeadm certs generate-csr, has actually gone GA. So how about provisioning certificates for the Kubernetes nodes? On the previous slide, we saw how you can use kubeadm to provision certificates for the control plane, how you can customize that, and use external-CA-provided certificates. But how about the nodes themselves? Well, kubeadm uses the Kubernetes certificates API, which has actually been built into Kubernetes for some time.
It's only recently, in 1.19 I think, that it went GA, so there's now a v1 of that API. Through a process of bootstrapping, the kubelet makes a certificate signing request, this resource here, to the API server. That CertificateSigningRequest, as you can see here on the slide, has a signerName, which is now a required field as of v1. Some signers are actually built into the kube-controller-manager: that one there, the kubernetes.io/kube-apiserver-client-kubelet signer, is built into the controller manager, and it will automatically approve the signing of those certificates. But you can also configure manual approval, which you may want to provide using something like kubectl. Having a signerName means that you can also plug in other signing mechanisms, and so in the cert-manager project we've worked on some experimental signers; you can see a link to a couple of them there. One is signer-ca, effectively a local CA, and there's also an integration with Venafi using its open-source VCert library. In the next release of cert-manager, we're actually going to be adding support for the Kubernetes certificates API. That means you'll be able to use all the various cert-manager issuers to sign a certificate signing request just like this, and that will open the project up to supporting many more use cases. So we'll support the CertificateRequest that's built into the project, and the CertificateSigningRequest that's in the core of the open-source Kubernetes project, and over time we'll look to more fully embrace that core type in the project. So to summarize, there are lots of ways in which you can use cert-manager beyond Ingress.
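For reference, a CertificateSigningRequest against the built-in kubelet client signer has this shape; the request payload here is a truncated placeholder, not a real CSR:

```yaml
apiVersion: certificates.k8s.io/v1
kind: CertificateSigningRequest
metadata:
  name: node-csr-example
spec:
  # base64-encoded PEM certificate signing request (placeholder)
  request: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURSBSRVFVRVNULS0tLS0...
  # signerName is required as of v1; this built-in signer issues the
  # client certificates that kubelets use to talk to the API server
  signerName: kubernetes.io/kube-apiserver-client-kubelet
  usages:
  - client auth
```

With auto-approval disabled, an operator can approve it manually with `kubectl certificate approve node-csr-example`.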
There are many use cases that stretch far and wide across the ecosystem. You can use it for lots of different types of workloads, including even the cluster itself. You can use the Certificate and CertificateRequest resources directly, programmatically. You can use the CSI driver if you want seamless pod identities created and renewed, and, importantly, have the private key remain node-local. You can also integrate cert-manager with a variety of different service meshes: we've got integrations today with Linkerd, Istio and Open Service Mesh, and more to come. There are the webhooks, and also the control plane and node certificates as well. As I said, as of 1.3 we'll be supporting the Kubernetes certificates API. And we're looking for many more use cases, so if you have some ideas or you want to get involved, please come and join us: the cert-manager-dev mailing list, the cert-manager Slack channels. We're always really willing to hear new ideas and feedback, and to have contributors come and join us and help build it. So thanks to everyone. We're going to be available here now for some live Q&A after this session. It's been a pleasure being able to tell you a bit more about the project and the various different use cases that we're building. Thanks for your questions; stay safe, everyone. Thanks. Bye-bye.