Okay, so welcome everyone, and thanks very much for attending my talk. Before we begin, as usual at this conference, please take a look at the fire exit announcement; you can read it while I introduce myself. My name is Andrei Krasnitsky. I live and work in Minsk, Belarus. I work as a software engineer at Altoros, though some prefer to call it a Cloud Foundry engineer, because most of our work is performed on Cloud Foundry deployments. Today I will talk about UAA authentication for Kubernetes clusters. I will explain how to use it and show a quick demo of how it can be implemented. Here is the brief agenda. We will talk about the authentication and authorization options already built into Kubernetes. I will go more deeply into the option that allows us to use the UAA component for authenticating Kubernetes users. I will show a demo with a real-life example of a running PCF environment and a PKS installation that uses UAA authentication. Then, as usual, I will share some important links and resources you can use if you want to implement the same authentication for your own Kubernetes cluster. Let's start with a brief overview of what Kubernetes is. Kubernetes is an open-source container orchestration platform. It can be deployed to any IaaS, so it is a multi-cloud solution. It is container-based: basically, you deploy Docker or rkt containers on Kubernetes. It also has rich API support, so almost every operation against a Kubernetes cluster can be performed through the API. Now let's jump to the authentication and authorization options, or authn and authz as they are usually called in Kubernetes. Authentication is the process that determines who you are when you provide some identity.
Authorization happens when the system already knows who you are: it retrieves some additional attributes of your user and decides, based on that information, whether you are allowed to perform some action. A good real-life example of authn and authz is a passport and a visa at border control. The officer checks your passport to see that you are the person you claim to be, and authz in this case is the visa, which shows whether you are allowed to enter the destination country. Who are the actual consumers of authn and authz in a Kubernetes cluster? First of all, end users, who usually use the kubectl command to communicate with the cluster. The second case is machine-to-machine communication: all internal workflows also go through authn and authz. That could be Kubernetes pods, or the so-called control plane. For anyone who doesn't know, the control plane is the set of components that drives all the internal workflows of a Kubernetes cluster; it includes the API server, the controller manager, the scheduler, and some other components. Let's try to understand how authn and authz are performed in Kubernetes. When an operator or some end user tries to perform an action against the cluster, the request first comes to an authentication plugin. I should mention that a "plugin" here is not like, say, a Jenkins plugin: you can't deploy an authn or authz plugin on the fly. They are pre-built into the Kubernetes API server binary, so to add your own plugin you would need to build your own binary and update the cluster. By default we have several plugins built into Kubernetes, but I will talk about them later. So first of all, our request comes to the authentication plugin, which checks it based on, for example, the user ID and password it carries.
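To make this pipeline concrete, here is a rough sketch of what a raw API request looks like on the wire. The server address and credential are placeholders, not values from the talk, and the request is printed rather than sent since there is no live cluster here. The authentication plugin chain inspects the credential carried in the Authorization header; only after that do the authorization plugin and admission control see the request.

```shell
# Placeholders -- a real cluster has its own address, CA, and credentials.
APISERVER="https://10.0.0.1:6443"
TOKEN="my-bearer-token"

# The request kubectl effectively sends for 'kubectl get pods -n dev';
# printed here instead of sent, since there is no live cluster:
printf 'GET %s/api/v1/namespaces/dev/pods\nAuthorization: Bearer %s\n' \
  "$APISERVER" "$TOKEN"
```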
The authentication step tells us whether we are allowed to work with the Kubernetes cluster at all. Then the request goes to the authorization plugin, where we can see whether we are allowed or denied to do the specific work we are requesting against the cluster. Then it reaches the admission control component. This is an additional component that can mutate, or perform some modification on, the API request coming into your Kubernetes server. Finally, the request goes directly to the component of Kubernetes you are trying to reach: for example, pods, some API information about your users, or something else. Let's continue with the authentication strategies present in the current build of Kubernetes. I will describe each one available right now, but I will spend the most time on the one that matters for our UAA use case. The first one is X.509 client certificates. Most of our installations used this type of authentication before, but it was a bad setup for us, because we used a single shared certificate for all clusters, and we know that is a terrible practice; that's why we started researching how to use UAA for authentication. Basically, here we have a certificate authority and a client certificate signed by that authority. When we send a request to the API server, the server sees: hey, this certificate is signed by this authority, so we trust this certificate and the client can work with this cluster. The username in this case is the common name (CN) taken from the client certificate; this becomes more important when we get to the authz options in Kubernetes.
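To make that username mapping concrete, here is a small sketch using openssl. The names are made up for illustration; in a real cluster the client certificate must be signed by the cluster's own CA rather than self-signed.

```shell
# Generate a throwaway key and a self-signed certificate whose subject
# carries the identity (illustrative names; real clusters need the cluster CA).
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout dev.key -out dev.crt -days 1 \
  -subj "/CN=developer1/O=developers" 2>/dev/null

# With client-cert auth, the API server reads CN as the username
# and O as the group:
openssl x509 -in dev.crt -noout -subject
```

Here `developer1` would be the username Kubernetes sees and `developers` the group, which is what authorization rules later match against.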
The next option is static passwords or static tokens. In Kubernetes this is just a CSV file placed somewhere on the file system; each line holds a secret-and-username combination (a password or token, a username, and a user ID). If your username is admin and you present the matching super-secret password, then you are allowed to work with Kubernetes. It's pretty much the same for tokens: we have a combination of user and token, and after that we can use this token with kubectl to perform actions against the cluster. Obviously this is not the best solution, but it can be good if you want to spin up a quick server, perform some action, and verify something; just for testing, I would say. The next option is service accounts. Service accounts are usually used for non-interactive workflows with Kubernetes; the example you will find in the Kubernetes documentation is a Jenkins server. You can create a service account using the "kubectl create serviceaccount" command, and it will create a secret that includes an API token and a certificate for your workflow. You can then give this token to your long-running process and work with Kubernetes using it: all you need to do is provide the token in the Authorization header of your API requests, and that's it.
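A service-account token is in fact a signed JSON Web Token. As a sketch of what that means, here is a fabricated token (the email and issuer are made-up values, and standard base64 stands in for the unpadded base64url that real JWTs use):

```shell
# A fabricated ID-token payload -- real tokens come from the identity provider.
claims='{"email":"summit18na@example.com","iss":"https://uaa.example.com/oauth/token"}'
header='{"alg":"RS256","typ":"JWT"}'

# Assemble header.payload.signature the way a JWT is laid out
# (standard base64 here for simplicity; real JWTs use unpadded base64url):
jwt="$(printf '%s' "$header" | base64 | tr -d '\n').$(printf '%s' "$claims" | base64 | tr -d '\n').fake-signature"

# Decoding the middle segment recovers the user's claims:
printf '%s' "$jwt" | cut -d. -f2 | base64 -d
```

This is the same three-part structure discussed in the OpenID Connect section that follows: a header, a payload of claims, and a signature.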
The next option is pretty new, but some of you may already use it: webhook tokens. This option is used when you want some external service to perform authentication for the Kubernetes cluster. Here is how it works: the end user sends a request to the Kubernetes cluster with a token, the cluster performs a token review call against the configured external service, receives the review status back, and based on that information allows you to work with the Kubernetes API or not. And now the most important option in terms of this presentation: so-called OpenID Connect. I will give a quick review of it. If you have ever heard about OpenID, you should know that this has nothing to do with OpenID. OpenID Connect is just an extension on top of OAuth 2, but unlike the usual OAuth 2 workflow, it returns an additional token that you can use for further authentication. This token is a JSON Web Token; it contains information like the user's email, which is very convenient when you want to authenticate against the cluster. This slide may look like a mess, but this is how a JSON Web Token looks. It consists of three parts. The first part is metadata: it shows how our token is signed. The second part, usually called the payload, contains all the information about our user, the identity provider, and so on. And the last part is the signature, which is used to verify the token against the identity provider. If you decode the second part of the JSON Web Token, it looks something like this: here we have information about our user. We have the email field, the username field, and our IdP; all the information required by any OpenID Connect integration. By default, Kubernetes does not
provide any identity provider out of the box. You can use some existing ones: Google, Microsoft, Yahoo, PayPal, and Amazon all provide identity providers. You can also use a couple of self-hosted options if you want to deploy your own; Dex is the most common case, and UAA can also be used as an identity provider for Kubernetes. If you google for existing OpenID Connect identity providers you may find some more, but you need to know that an identity provider must meet two requirements to work with Kubernetes. The first is that it should publish all its metadata on a well-known URL, which is used for discovery. The second is that the OpenID provider must always run in TLS mode. I will just briefly describe what UAA is, because it is simply the OpenID identity provider we can use for our cluster. UAA is the service that manages user authentication within Cloud Foundry deployments. It can be connected to an external SAML or LDAP server, or you can use it as an OpenID provider itself. It has rich API support: you can perform any action through the UAA API. Now let's go through the workflow of authentication in the case of OpenID and Kubernetes. First, our user sends a login request to the identity provider, and if the request is successful, the response contains an access token and an ID token, which we will use to authenticate against Kubernetes. You need to provide this information in your kubeconfig; you can configure the file in your home folder if you prefer, or you can pass the ID token directly with kubectl's --token option. Then this JSON Web Token is sent to the API server, and the API server performs some additional checks on it. These checks include validating
the JSON Web Token signature, checking whether the token has already expired, and checking that this user is actually authorized to do the requested work against the Kubernetes API. If everything is okay, the response goes back to the kubectl command-line tool, and kubectl returns the response to the end user. Now let's continue with the authz options in Kubernetes. Once again, they work in the following way: when you are already authenticated in the system, the Kubernetes API performs additional checks that you are allowed to access the resource you are trying to reach. I will go through the most popular authz mechanisms in Kubernetes; they are already built in, and some of you may be using them. The first one is ABAC. ABAC is stored in the same way as static passwords or static tokens: it is just a plain file on your file system (not CSV this time), and its format is one JSON object per line. You can specify an existing group or user and say that this user is allowed to perform some action against some namespace, can access some resources, and so on. As it grows, you can have something like this: in this example we have the group "developers", which can access any resource in the "dev" namespace, but we also want developers to be able to access the "production" namespace, in which case they only get read-only mode. That way we can be sure developers can watch logs, for example, but cannot perform anything harmful to our cluster like a delete. The second option is RBAC. The central concept of RBAC is roles, which are essentially just collections of rules. For example, you can have a "developer" role with one set of permissions for your cluster, and an "administrator" role with full access to your system. A Role is
always scoped to an existing namespace that you specify in the manifest, while a ClusterRole applies cluster-wide. The second concept is the RoleBinding, or ClusterRoleBinding. It works in much the same way: you grant permissions from a previously created role. For example, here we have a user (my own email address, in this example) that can access all the resources described in the pod-and-pod-logs-reader role through a binding. So you have a role, you bind a group or user to that role, and that allows them to work with some resources on your Kubernetes cluster. Here is a small comparison between RBAC and ABAC. Once again, to manage ABAC you have to manage a file on the local file system, which may not be the best solution. If you are working with BOSH you can specify some properties in your deployment manifest, but if you use plain Kubernetes, perhaps deployed by hand, it may be hard to configure this file. The second reason ABAC is not preferred is that it requires restarting the API server every time you update the policy files. RBAC, at the same time, can be managed through the Kubernetes API, and you can apply changes on the fly, so it is the default, recommended way to work with Kubernetes; for example, Kubernetes, CFCR, and PKS can all be configured with the RBAC option. Now I will show a quick demo of how OpenID Connect works with Kubernetes. Let me share my screen; sorry, second display... is it okay, can you see? Okay. I have my PKS installation running on Google Cloud, but it is a slightly modified installation; I will talk about that later. And I have my PCF running on AWS. So I will create a new user on my PCF installation; let's call it "summit18na", for example, and the password will be "password". Okay, I will not assign any roles, because I will just use the username-and-password combination against the
cluster. Now we need to retrieve the ID token from our UAA. I will use a small helper library; any OpenID Connect provider has a helper like this on some GitHub page. It is used to retrieve the ID token and put it into our kubeconfig file, so I just provide the username and password to this helper; I will share the tool later. Yeah, it shows the ID token for our client, we also have the refresh token here, and after that our user is configured in the kubeconfig. Now I need to set the context for this user, so I will use kubectl config set-context; the name is summit18na, I have only one default cluster, and the user is the same. Now I can try to perform some action against the Kubernetes cluster... oh, I need to use this context, sorry, use-context... yeah. Now it tells us that we are not allowed to work with the Kubernetes API, because the authz plugin says: hey, user summit18na doesn't have any rules configured in our RBAC. So now I will switch back to the default user, which is the admin user. I have this role-binding file here, which is used to configure the user role and user role binding. I already have the running cluster-admin role, which is a default role of the Kubernetes cluster, and I will just bind our newly created user, summit18na, to this cluster role. To create this role binding I use the same command you would use to run, say, nginx on your cluster: just kubectl create -f with the path to your manifest, with some names changed. We can switch back to our new user, and yeah, now we can work with our Kubernetes. So the configuration is really easy: you just use an existing user from Cloud Foundry and create the roles and role bindings for this user, and that's it. Let me open my presentation back up. Okay, so how do you configure OpenID Connect if you use an implementation not deployed by BOSH? You just need to provide the OpenID issuer URL for your cluster; usually it is uaa.<your-system-domain> if you are
using an existing PCF installation. You need to provide a client ID to verify the signature of your JSON Web Token, and you need to specify that we use email as the username; that is the default requirement when you use UAA. And if you use a self-signed certificate, you also need to provide the certificate to the cluster installation. If you are using CFCR, these two commits include all this configuration in the manifest, so you just need to provide a couple of properties, like with any other BOSH release, and then you can use it with Kubernetes. One more thing: currently UAA is not fully compatible with the OpenID Connect plugin implementation in Kubernetes. It is already patched in the development branch, but we still need to wait for the next minor release for the fix to be included. The benefits of this solution: we can use a single entry point for the BOSH installation, for Cloud Foundry, and for Kubernetes. OpenID Connect includes discovery, so we can use discovery to verify our JSON Web Token against our identity provider and retrieve expiration data from its endpoints. It is really easy to configure: you just need to provide a couple of additional flags to your Kubernetes cluster. It also minimizes security risk, because many users try to reuse the same password across logins, and in this case you get single sign-on for both of your platforms. If anyone wants to try deploying it on their own, you can use CFCR, and you can use this helpful tool, which retrieves the ID token and access token from an OpenID provider. And if you need some help, you can always open Slack: the cfcr channel in the Cloud Foundry Slack, or the SIG channels in the Kubernetes Slack, and you will always find some information about how to implement this kind of authentication on your cluster. So this is it. Maybe you have some
questions? [Audience question: do you mean you need to install some OAuth plugin for Kubernetes?] They are already pre-built into the Kubernetes API server binary, so they are already in Kubernetes. But in this case, as I said, for UAA you need to build it manually, because not all patches are included in the current stable release. My installation was built using a pre-compiled binary: I pre-compiled the master branch myself, so it is just the master branch, effectively another version. So that's it, thank you.
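For reference, the configuration pieces described in the talk can be sketched roughly as follows. These are illustrative fragments only: the issuer URL, client ID, and user names are placeholders reconstructed from the spoken demo, not exact values, and the kubeconfig auth-provider syntax shown is the one available in kubectl at the time of the talk.

```
# API server OIDC flags mentioned in the talk (values are placeholders):
#   --oidc-issuer-url=https://uaa.<your-system-domain>/oauth/token
#   --oidc-client-id=kubernetes
#   --oidc-username-claim=email
#   --oidc-ca-file=/path/to/uaa-ca.pem   # only needed for self-signed certs

# kubeconfig user entry carrying the UAA-issued tokens:
users:
- name: summit18na
  user:
    auth-provider:
      name: oidc
      config:
        idp-issuer-url: https://uaa.<your-system-domain>/oauth/token
        client-id: kubernetes
        id-token: <ID token from UAA>
        refresh-token: <refresh token from UAA>

# and an RBAC binding like the one created in the demo:
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: summit18na-cluster-admin
subjects:
- kind: User
  name: summit18na@example.com   # username claim is email, so bind the email
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
```

The binding grants the demo user the built-in cluster-admin ClusterRole; in practice you would bind a narrower role.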