Hello, and welcome to Admission Control: We Have a Problem. I'm Ryan Jarvinen from Red Hat. You can find these slides at bit.ly/k20ac, and here's the schedule link. You can find me online most places as ryanj. I'm a developer advocate on Red Hat's OpenShift team. And I would like to make one admission right off the bat: I have somewhat of a particular focus, or motive, in my approach to this talk. I am particularly interested in making sure that I can have sustainable productivity when I'm developing using Kubernetes. Kubernetes is great, but usually I want to make sure I can also accomplish my day job. So I want to give you a good sense of the balance between learning all about the advanced features of Kubernetes and being realistic about what you may actually be able to take advantage of. Hopefully this talk fulfills all of those goals. Like I said, I'm looking for a reliable platform that allows me to focus on my day job. Goals for this talk: I want to make sure you understand the primary role of admission controllers in Kubernetes, understand typical use cases for admission control, and also know when and how to avoid this topic. So first up on the agenda, I broke this into three parts. We've got admission control basics, next up, dynamic admission control, and then, like I said, when and how to avoid this topic. So first, admission control basics. Admission controllers play a critical role in securing the Kubernetes API and control plane. This first diagram is a little bit outdated, and of course totally fake, so let me give you a hopefully more accurate API pipeline model that fits a little bit closer to what you might actually experience on a recent version of Kubernetes. Basically, you'll see new requests come in through authentication and authorization, and then they'll hit admission control, starting with the admission control plugins.
If you want to see the upstream docs, I definitely encourage you to take a look. Oh, I see a question from the audience. This is actually a list of questions and answers I got from reading this doc. So, is this why my operators and CRDs are failing to work correctly? Quite possibly, actually. Differences in admission controller setup are one of the most common reasons why an operator may fail to function correctly on a cluster. How might I enable or disable an admission control plugin? Well, it's pretty simple, actually: you just use the --enable-admission-plugins flag when you're starting up kube-apiserver, as one does, since we've all done Kubernetes the Hard Way. I'm sure we're all manually starting kube-apiserver. How do I find out which admission controllers are currently enabled on my cluster? Another easy one: you can run kube-apiserver -h and grep for admission-plugins, and you should be able to see that, assuming you know how to find kube-apiserver in your path and you have root access on one of the control plane nodes, right? So as much as this is theoretically easy, standard front-end developers might not have access to all of these pieces on their own. If you're using Minikube, for example, you could type in commands like this and find out the list on your setup. I ran this with Minikube, I think on a 1.18.2 cluster, and I got the same result on a 1.19.2 cluster with Minikube, which I think is set up via kubeadm. So I did a minikube ssh to get access to the control plane, ran ps aux, grepped for the API server, and then passed it through this sed expression here in order to parse out just the list of admission control plugins. So this worked. It looked like the server was maybe started inside a Docker container, so I'm not sure how easy it is to actually just call kube-apiserver from the command line, but this got me kind of what I was looking for. You can run the same type of thing against an OpenShift cluster if you have administrative credentials.
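The command sequence I ran looked roughly like this. This is a sketch, assuming a kubeadm-style setup where kube-apiserver runs as a process on the control plane node; the exact sed/grep expression here is my own reconstruction, not the one from the slide:

```shell
# Helper: pull the value of --enable-admission-plugins out of a
# kube-apiserver command line and print one plugin per line.
parse_admission_plugins() {
  grep -o 'enable-admission-plugins=[^ ]*' | sed 's/.*=//' | tr ',' '\n'
}

# On a real cluster you would run something like:
#   minikube ssh
#   ps aux | parse_admission_plugins
# Here we simulate the ps output so the parsing step is visible:
sample='/usr/local/bin/kube-apiserver --authorization-mode=Node,RBAC --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,MutatingAdmissionWebhook,ValidatingAdmissionWebhook'
echo "$sample" | parse_admission_plugins
```

The same parsing works against any process listing that includes the kube-apiserver command line, which is why it carries over to other kubeadm-based clusters.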
You can actually get some of this information using the API, which is kind of interesting, that they have the API server configuration available on the API, but that's kind of an interesting OpenShift approach. So you could run kubectl get apiservers/cluster -o yaml, grep for admission, and you should see something about the plugin configuration in the response. Here's a link to the doc, up here on the same admission controller page, just a little bit further down. There's a nice index of what each of the standard admission controller plugins offers. So here are two that I'd like to point out from that page, and let me enhance, zoom in a bit. AlwaysPullImages is a mutating admission control plugin, so this is actually going to rewrite some of the request when you create a new pod and enforce that the image pull policy is set to Always. This is kind of a security feature. Another one, which has been deprecated as of 1.13, is the AlwaysDeny plugin. This is, I think, more of a validating plugin, and it's going to look at every single write request and reject every write. So you could still do reads on the cluster, but this basically puts the cluster in a kind of read-only state, essentially, if you were to enable this AlwaysDeny plugin. So how might that look if we were to look at this API pipeline? A deny might come in during this validation phase; you might have some type of hook to evaluate it. Similarly with mutating, you have kind of an earlier phase that occurs to perhaps modify the request, adding that image-pull-Always rule into the pod spec. So that's kind of how you might see this flow of interactions, and it all happens before you get to the basic CRUD access on the resources. So, similarly, in some ways you could say admission controllers are a bit like kernel modules, inasmuch as they operate with elevated privilege scope.
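For a concrete picture of that mutation, here's a minimal sketch (my own illustrative example, not from the slides): with AlwaysPullImages enabled, a pod submitted without an image pull policy would be persisted looking something like this.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo                # hypothetical pod name
spec:
  containers:
  - name: app
    image: registry.example.com/app:1.0   # hypothetical image
    # Added by the AlwaysPullImages admission plugin, even if the
    # submitted manifest omitted it or set a different value:
    imagePullPolicy: Always
```

The security angle is that forcing a pull on every start means a pod can't quietly reuse an image that another tenant already pulled onto the node, so registry credentials are always re-checked.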
They are probably best configured by a system administrator, or someone who's got access to updating kube-apiserver flags, and this is absolutely not a good way to package or distribute application code, or even frequent changes to policy on your cluster. If you have to change flags on the API server, this is not a dynamic, API-centric way of making changes to the admission process, at least in what we've seen so far. We're also likely going to require access to a modern Kubernetes release for all of this to function as expected. Specifically, if you want to use some of the new pod security policies, those have been marked beta as of Kubernetes 1.19; here's a link to the docs for that. If you want to use the dynamic admission control endpoints, you need Kubernetes 1.16 or newer if you want to use the v1 APIs. So to review part one: first, I want to make sure we've got an understanding of how admission controllers are enabled, disabled, and configured on a cluster. Hopefully I covered that reasonably well. The second thing I wanted to emphasize is how configuration of admission controllers can be used to enforce basic security policies for your cluster. There are two phases in the admission control pipeline that you want to focus on for this talk: validating and mutating. Mutating is going to come first; validating will be the kind of finalizing phase. User experience notes: this is a little bit like a sledgehammer. Like I said, it's not designed to distribute quick, frequent policy changes, if you plan to do that. And I think one of the top use cases, for my purposes at least, is to enforce consistent operational rules for disparate clusters that are part of a release pipeline. I always want to have zero surprises as my code rolls out to production, and so I want to make sure the policy matches everywhere whenever possible. All right, on to part two, dynamic admission control. This is where things get a little bit more complicated.
Requirements for dynamic admission control. I mentioned you're going to need these two admission control plugins enabled, MutatingAdmissionWebhook and ValidatingAdmissionWebhook, or else dynamic admission control won't work at all. This is one of those things: if you don't have the plugins enabled correctly in the first place, all of your operators and extension points might fail, or even the whole API might end up in a read-only state if you have that deny plugin enabled unintentionally or something. So you're definitely going to need these two plugins enabled. You should have those by default with 1.16 or newer, especially if you want to use that v1 API. That'll give you v1 versions of the MutatingWebhookConfiguration and ValidatingWebhookConfiguration data types, or kinds. One more point along this topic: since you are setting up a dynamic admission control endpoint, this is going to involve making a webhook call out to a service that you host, and the name of the service behind that mutating or validating webhook must be a valid DNS name, at least within the cluster. So you could use a Service and a Deployment, but if this is expected to be up as a dependency of the API, you want to make sure you model that correctly; having the API pipeline be dependent on things hosted by the API can lead you into tricky situations when you're doing updates of services and things. So watch out for that. For more information, definitely look at the docs; there's a whole URL there for the extensible admission controllers webhook configuration section. There are a lot of nice examples in there. This is a screenshot I actually pulled out of the docs; here's the URL for it. So if you were going to register a validating webhook on the API, this would give you a chance to set up your own custom dynamic admission control, and in this case it looks like we're going to be watching CREATE operations on resources of type pods, with API version v1, with namespace scope.
The DNS name of this service is going to be example-service, so hopefully that's resolvable, and we have a timeout down here as well. On these distributed systems you can't wait forever for the webhook to respond, so it's nice to have values like that. Yeah, let me zoom in a bit on that. So how would that look in the API pipeline? Like I said before, requests are coming in at the top corner there. We're going to get our mutating webhook, after it's been registered, and this will make a call out to that service. Same with a validating webhook, which comes in at a later phase in the pipeline, so you can make sure you're validating things that have already been modified and that there aren't going to be further modifications in the pipeline. Once that's successful, this will hopefully get written to etcd, and this also allows you to enforce how things get written to etcd, including security restrictions on pods and other things like that. So, dynamic input validation examples. What are the use cases where I might want to build a custom controller and make my whole API pipeline dependent on some new service? Sounds crazy. Why am I going to need to do this? Usually this is kind of operational policy that you might want to implement. This may very well break some operators; you know, I don't know if it's a good idea to say we're going to always reject all images tagged with latest, but it's something you could potentially do as a policy. You might actually be able to do this via a regular expression via OpenAPI; well, I'll talk about OpenAPI spec in a minute. You could also do things like, okay, I want to make sure my etcd member count has to be an odd number, and it also has to be between 1 and 11. You know, you could have really kind of complex, functional input validation in these custom controllers. If the custom controllers are really advanced, you know, you could use the Go client.
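The registration described above would look roughly like this. This is my reconstruction of the kind of example in the dynamic admission control docs; the names, namespace, and timeout are illustrative values, not the exact ones from the slide:

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: "pod-policy.example.com"        # hypothetical webhook name
webhooks:
- name: "pod-policy.example.com"
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CREATE"]              # watching pod creation only
    resources: ["pods"]
    scope: "Namespaced"
  clientConfig:
    service:
      namespace: "example-namespace"    # where the webhook service lives
      name: "example-service"           # must resolve in-cluster
    # caBundle: <base64-encoded PEM bundle for verifying the service's TLS cert>
  admissionReviewVersions: ["v1"]
  sideEffects: None
  timeoutSeconds: 5                     # don't block the API pipeline forever
```

Because this kind is part of the admissionregistration API, it can be created and updated with kubectl apply, no API server restart required, which is the whole "dynamic" part.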
It can even make further calls to the API to update other objects as well, if it's a more advanced solution. So here's a webhook example. You can actually write this in a variety of languages, but here's a Go client example if you're interested in seeing the stock example. So, to review dynamic admission control, we've got a couple points here. First, I want to make sure we all understand how dynamic admission control webhooks are used to validate or to coerce any write request as it passes through the API pipeline. The validate phase, like I said, will not begin until the mutate phase has totally concluded; that applies any time a mutating webhook is registered, at least until that timeout hits, right? Review security use cases and implications. Let's see. Oh, yeah, before I move on: you could potentially do all kinds of crazy stuff with this mutating webhook, depending on how much access you have. You could potentially listen in on every write request to the API and forward it out to an external cluster, or use it for other kinds of malicious purposes. So be careful. You want to make sure that these are locked down appropriately and kind of standardized, and that's why not all of this is even possible to update via an API, right? This is really about standardizing and securing your clusters. So, part three: how do I avoid admission control, and why did I show up? Why did I need to learn about this? Well, okay, you want to make sure you select the appropriate abstraction and solution for your scope of work, right? So if you're involved in securing your Kubernetes clusters, there are a couple of ways to establish, you know, an operational baseline for a cluster. Definitely, admission controllers are a huge piece of the puzzle there. You can also establish further operational rules for platform services via dynamic admission control, or by using operators and CRDs.
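Whatever language you write the webhook in, the contract with the API server is the same: it receives an AdmissionReview and must respond with one. A minimal sketch of a rejection response (the uid must echo the uid from the incoming request; the status message here is an illustrative policy, not anything from the talk's example):

```json
{
  "apiVersion": "admission.k8s.io/v1",
  "kind": "AdmissionReview",
  "response": {
    "uid": "<uid copied from request.uid>",
    "allowed": false,
    "status": {
      "code": 403,
      "message": "images tagged :latest are not allowed"
    }
  }
}
```

A mutating webhook uses the same envelope, but instead of just allowed true/false it returns a base64-encoded JSON patch describing the changes to apply.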
They both are actually going to require admin access, but CRDs and dynamic admission control endpoints are possible to update over the API, so that's a huge advantage in terms of how quickly you can iterate and change these things. Solution number three: you want to also focus on offering standardized application control. So for me as a developer, I can skip the whole admin access requirement if I focus on app CRs and Helm charts. That gives me simple ways to update my application code and my config, and push updates into the cluster, no admin scope required. And if I'm on the other end of the team, maybe a team lead or someone who's trying to standardize that app interaction, we can use CRDs and OpenAPI spec to help restrict and validate some of the input. And hopefully, once you establish these mechanisms for standardization, you are able to apply them evenly across all systems in your release pipeline, so you have a consistent operational kind of situation throughout the whole pipeline, right? No surprises for developers; maximize productivity by minimizing bad feedback. So why should you learn about admission controllers? Honestly, the answer is: so that other people on your team won't have to. Not everyone needs to know about this topic if you're in the business of offering a standard interface to junior developers or other folks on your team. This is definitely something where you want to make sure the right person on your team gets this info and is able to standardize the experience for other folks. Should I avoid writing and maintaining custom controllers that may impact the operational reliability of the core platform APIs? Yes, whenever possible. Always avoid this if you can; there are other solutions. Should I use Helm or application CRDs? Yes, absolutely. Use those instead if you can. Like I said, if Helm's an option for you, use it, right?
If you are doing dynamic admission control or application CRDs, specifically in Go, you can start with the Go client, but if you want more modern tooling that will help you build CRDs as well, take a look at KubeBuilder for development in Go. The Operator Framework is another option that will allow you to develop in Go, and it actually uses KubeBuilder as of the 1.0 release of the Operator SDK, so you hopefully get the best of both worlds. Ansible is another option: you can import Ansible roles and run them using an operator. Also, you can import Helm charts and then make them available using CRDs. Both of these solutions, KubeBuilder and the Operator Framework, include support for using OpenAPI spec for schema-based validation of inputs. And here's one more link to that Go client as a potential implementation option. OpenAPI spec schema validation. So here's an example from one of my team members. This is a custom resource definition; let me zoom in on this a little bit. We can see there's a validation section, with an openAPIV3Schema, and there are a couple of things we're calling out that we're going to validate. If I scroll down, we can see here in the spec area, I'm looking for a value called size; that's an integer, and size is required. It's not a super complex validation, and it's really just validating the schema more than the values themselves, but this hopefully can be easily done using OpenAPI, and it can be used regardless of whether you're using admission controllers or not. For any kind of interaction with the Kubernetes API, that's generally considered a best practice. Part three review. So, we talked about using application CRs and Helm charts as an alternative when targeting developers; no admin permission scope required there. Using OpenAPI spec to provide schema-based input validation for custom resources.
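The CRD excerpt described above would look roughly like this. This is a reconstruction, with hypothetical group and kind names, using the apiextensions v1beta1 layout with its top-level validation block, which matches clusters of that era:

```yaml
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: examples.app.example.com     # hypothetical CRD name
spec:
  group: app.example.com             # hypothetical API group
  version: v1alpha1
  scope: Namespaced
  names:
    kind: Example
    plural: examples
  validation:
    openAPIV3Schema:
      properties:
        spec:
          required:
          - size                     # size must be present...
          properties:
            size:
              type: integer          # ...and must be an integer
```

With this in place, the API server itself rejects a custom resource whose spec is missing size or has a non-integer value, before any controller code runs.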
You can use KubeBuilder or the Operator SDK for custom validation and translation of API requests. Hopefully all of this together gives you a reliable distributed platform that allows you to focus on your day job, just like I am struggling to focus on my day job. Goals for this talk: hopefully you've come away with a clear understanding of the role of admission controllers, typical use cases, and when to avoid them. If not, I'll do one quick recap. Primary role: you've got a request validator and request translator; it's there to apply security, and you'll want to have admin access to configure it. Use cases: mostly, you know, security; pod security policies are a new thing, denying privileged mode, denying hostPath, hostPID. You can do dynamic input validation, and more complex validation using the tools I recommended, KubeBuilder or the Operator Framework. And the goal of all this is to ensure consistency of your cluster and hopefully standardize policy between clusters in a pipeline. Alternatives: definitely, developers, stick with Helm or application CRDs whenever possible. Don't muck up the API pipeline unless you really have to. Use OpenAPI spec to provide schema-based validation, and consider using KubeBuilder, the Operator SDK, the Operator Framework, or evaluating a hosting provider that includes a strong set of admission control defaults and a clear plan for distributing updates. Some of the testing I did was on learn.openshift; that'll give you a free one-hour session on an OpenShift cluster if you want to see what the admission control policies look like on that system. Go ahead and give it a try. Here are a couple of links to summarize, and a couple of talks that I found very inspiring. Go ahead and hit pause on your video, if you're watching, to get all those links. I don't have time for questions today, but take a look at the chat; I'll try to answer questions in chat for the live folks. Thank you very much. My name is Ryan J, Ryan Jarvinen. Look me up online. Thanks, all. Hope you have an excellent KubeCon.
This has been Admission Control: We Have a Problem. Thank you all. See you next time.