Yes, so welcome back, everyone. We just took a quick break because we had to get our speakers back here, but you'll agree with me that since the beginning of today's session we've had really exciting topics, from introductory all the way to expert level, and now we're back to understanding the basics of controllers and an introduction to custom resources, which is our next section. So buckle up, everyone, as we get our speaker on here. The next section is focused on extending Kubernetes: introducing custom resources, controllers, and operators. It's going to be taken by Oladipo Ajayi, a cloud-native engineer currently at Continental Solutions, helping businesses with cloud-native transformation from beginning to end and everything in between. It's good to have you here, Dipo.

Thank you very much. Good to be here also. Awesome. All right. Sorry for the delay — I think I'll have more time to myself to interact, since I've started early. Yes, I feel so. Okay. All right. Thank you very much. Everyone, my name is Dipo Ajayi. I'll be speaking on extending Kubernetes: introducing custom resources, custom controllers, operators, and schedulers. And if there's time, I'll do a simple demo while we're at it. Like I said, I'm a cloud-native engineer at Continental Solutions. I'm also a DevOps engineer — I do everything on AWS and Azure — and a software engineer when I can; I mostly write Python and Golang. For now I'm majorly focused on building controllers and operators. Nice to meet you.

So let's get going. What is Kubernetes? I don't want to focus so much on the definition, but I'm going to mention just a couple of things. First off, Kubernetes, like we know, is a container orchestration tool: it orchestrates containers and manages them and their entire life cycle.
That's a very simple definition of what Kubernetes is. Like I said, some of its features: it handles deployments, it helps with auto-scaling, it helps with self-healing — so for example, if your pod is down, it can just spin it up again all by itself. It also helps with secret management, configuration management, and so many other things. But these are the basic things Kubernetes does. Wait — we're missing something. What's that? Yes: Kubernetes is highly extensible. That's right. You can always add new features; you can add whatever you need. And that is one of the key things people don't talk about with Kubernetes. All of those features I listed earlier are really awesome, but Kubernetes being a highly extensible system means you can add your own features. If you go to the Kubernetes GitHub repository — because it's open source — you won't really see patches or pull requests adding every niche major feature. So if I, as a company, need a particular feature on Kubernetes and it isn't covered by the default installation, I can always add it myself because of this extensibility. And what actually makes this extensibility happen is that all of Kubernetes is built on APIs. Everything — your pods, your services, everything — is exposed as an API. Everything communicates through the API: you communicate with Kubernetes through the API, and Kubernetes communicates with itself through the Kubernetes API. That's one of the real beauties of Kubernetes that isn't spoken about so much. So, extending Kubernetes — what do I mean by that? Majorly, it means adding new functionality, like I mentioned earlier, that isn't part of your default Kubernetes installation.
So if I decide to install, say, Kubernetes 1.22, or 1.20.1, there is a set of features that comes with it. But there are times when what you actually need is not included in the default installation, and you can always add something to it. That's what extending Kubernetes means here. There are several ways Kubernetes can be extended — a lot of ways — but this talk will focus mainly on creating endpoints and adding functionality to the Kubernetes API. I'll explain what this means as I go on, and it will become clearer as I move on. But majorly: creating endpoints and adding functionality behind those endpoints. Very, very simple.

So let's take a step back. We've already defined what Kubernetes is, right? But let's look at something. I'm sure a couple of us are familiar with this image: it shows a Kubernetes cluster and the components that make it up. What we're seeing here is the control node, or control plane, on the left-hand side — everything within that blue dotted line is the control plane, and all of those things come packaged with every Kubernetes installation. We have our controller manager here. We have etcd — etcd is more like a store that keeps a key-value record of all the resources within Kubernetes. Your pods, your services, your deployments — all the information Kubernetes needs to keep track of the state of a resource is stored in etcd. Then we have the scheduler: what the scheduler does is schedule pods onto nodes, simple. And then we have our cloud controller manager, which is majorly for cloud providers — so for example EKS in AWS, AKS in Azure, or GKE in Google Cloud.
They all have this component that makes it easy for them to connect to the underlying infrastructure. So it's not a mandatory part of a Kubernetes cluster; it's majorly for cloud providers, the people that need it, like I mentioned earlier. And then we have our worker nodes here, and those worker nodes connect to the cluster majorly through the kubelet and kube-proxy. Pretty much, this is what you have in every default installation. And as you can see, the API server sits at the center of everything, and the reason is that it is more or less the heart of the cluster itself. Every communication that happens within the Kubernetes cluster, or from outside the cluster into it — for example, me as a user using kubectl — everything goes through the API server. And the API server itself is what exposes the Kubernetes API we've been talking about since I began the talk.

So what is the Kubernetes API? It's a set of HTTP endpoints, like I mentioned, majorly exposed by the API server in the control plane. It serves as the entry point through which users or entities connect to the cluster — users like a DevOps engineer, cloud engineer, or software engineer, and entities like services within the cluster, or maybe an EC2 instance, for example, that needs to communicate with the cluster. All of this happens through the API server. And the API actually coordinates all the resources or objects that are accessed within the cluster. Now, in the diagram below — which is very similar to the image we had initially — there's an attachment on the right-hand side, and that attachment is more or less showing what the HTTP endpoints actually look like. So we have a set of endpoints like /healthz and /metrics.
Things like these are common to applications: you can check health, and get logs and metrics from the cluster. And then there are very specific endpoints — for example, to get the list of pods running in my cluster, the endpoint I'd go to is majorly /api/v1/namespaces/{namespace}/pods, and I can use my HTTP verbs — get, list, create, update — on all of these. This should feel very familiar, especially for software engineers who are used to creating APIs and endpoints. So this is what's going on in the background that a typical Kubernetes user isn't seeing. So here we have it — this is an example of the Kubernetes API.

Now, down to what we actually want to talk about: resources. Like we said, it's true that the pod is the simplest unit within a Kubernetes cluster, right? But that pod itself is actually a resource. And I can say: all resources in Kubernetes are endpoints. So when you think about your node resource, your pod resource, your service resource, your namespace, your volume, your jobs, your cron jobs — anything you can think of, they are all endpoints, like I've mentioned; I've tried to show them here. Now, looking at the right-hand side of the slide, we see there is a point where we have a Kubernetes object and something that isn't a Kubernetes object. Like I mentioned, everything is a Kubernetes resource, but not everything is a Kubernetes object. And the difference, majorly, is that objects are persistent entities. So for example, I have a resource type — say the namespaced Pod type, right?
And I create a resource instance with my YAML file — I do kubectl apply and I create a pod, right? That creation of a pod is a resource instance. But what comes out of it — that pod instance, with its entire metadata and ID and everything, being persisted in etcd — is a Kubernetes object. In comparison, if I run something like the kubectl api-resources command, what that does is list a set of resources for us. It's a resource instance, but not an object, and the reason it's not an object is that it isn't persisted in etcd, in the cluster. It's just doing something; it's not an entity that needs to be monitored or controlled or that we'll come back to when we need it. So that's the major difference between a Kubernetes object and resource instances that are not objects.

Now that we have an idea of what Kubernetes resources are, what do we mean by a custom resource? Because that is what this talk is about: how do we extend Kubernetes to be able to create custom resources? So what are custom resources? A custom resource is majorly an endpoint, like we've mentioned, in the Kubernetes API that allows the creation of custom objects, like I explained initially, of a particular resource type. In the previous slide, you saw how we created a Kubernetes pod object with a resource type of Pod. So we can do the same thing for our custom resources and say: okay, I want to create a custom resource called, say, FooBar, and I want to be able to create Kubernetes objects of type FooBar, for example. And how do we do that? How do we add this FooBar resource to our Kubernetes endpoints? The way we do that is with a custom resource definition. A custom resource definition, majorly, is like a template.
What that template does is add endpoints — add our resource to the Kubernetes API. That's what it does: it adds endpoints, and it defines the fields and metadata that can be filled in for that resource endpoint. So for example, in the image we have here, we have the YAML file, which is the custom resource definition. It creates a custom resource as an endpoint — that's what we have in the middle. And any time we call that endpoint, we can use it — the resource, a custom resource — to create a lot of objects, a collection of Kubernetes API objects of that custom resource type. I'll take it again: I create a custom resource of a particular resource type — say FooBar, like I said earlier — through the custom resource definition. Now, if I want to create an object of that resource type, I have to query the custom resource endpoint to create that object, and I can query it, list it, get it, delete it, just the same way an API would normally work.

So this is what we have for custom resources. But they really don't do anything. Custom resources are just labels — or let me say, like a string. In programming, when you do a = "abcd", that doesn't do anything; it's just a placeholder, right? So custom resources are like placeholders. They don't do anything, they have no power of their own, they cannot think for themselves — more like a caricature, if I can use that term. But what actually makes them do something? For example, our pod: we know our pod object takes an image, runs that image, exposes a port, and gives us an endpoint through which we can access the application running within that pod. What makes that possible? Controllers. Like I mentioned before, when we were looking at the things in the control plane, right?
We talked about the controller manager. The controller manager contains all the logic that the default resources use — by default resources I mean pods, namespaces, nodes, ReplicaSets, Deployments, StatefulSets: how are they reconciled, what do they do? Controllers do that for us. Controllers are control loops that track a Kubernetes API type to make sure the current state matches the desired state. One thing I need to mention before we continue is that Kubernetes was built on an event-driven architecture. What that means is that a controller — Kubernetes — watches for changes as events and then triggers whatever it needs to do based on that event. So for example, if I trigger a create event — if I do kubectl create or kubectl apply and pass in a pod file — I'm sending an event to the Kubernetes cluster. It picks that up, looks at my file, gets the resource kind, and passes it to the right controller, and the controller knows what to do by creating whatever it needs to create. For our pod example, it sees that this is a Pod kind, or pod resource type, and what it does is take the schema — the name, the namespace, the container image, the container name, the container port, for example — and do something with it: what we get is a container, a Docker container, running, which we can then access using the metadata that was passed in through our pod YAML file. So that's how it works. The controller is that black box you're not seeing: when we do kubectl apply or kubectl get or kubectl describe, all of those things go through controllers, and the controller actually maintains that for us. Controllers look at our desired state — the desired state is what we send into Kubernetes — and the actual state, which is what is actually happening within Kubernetes.
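The desired-versus-actual loop just described can be sketched in plain Go. To be clear, this is a toy simulation, not real client-go code — the `Deployment` struct and `Reconcile` function here are made up for illustration:

```go
package main

import "fmt"

// Deployment models only the two things the control loop cares about.
type Deployment struct {
	Desired int // replicas requested in the spec (desired state)
	Actual  int // replicas currently running (actual state)
}

// Reconcile nudges the actual state one step toward the desired state,
// the way a controller reacts each time an event arrives.
func Reconcile(d *Deployment) {
	switch {
	case d.Actual < d.Desired:
		d.Actual++ // start a missing pod
	case d.Actual > d.Desired:
		d.Actual-- // remove an extra pod
	}
}

func main() {
	d := &Deployment{Desired: 3, Actual: 0}
	// Each "event" triggers one reconcile pass until the states converge,
	// mirroring the 0 -> 1 -> 2 -> 3 ramp-up described in the talk.
	for d.Actual != d.Desired {
		Reconcile(d)
		fmt.Println("actual replicas:", d.Actual)
	}
}
```

Deleting a pod in this model would just mean decrementing `Actual`, after which the next event drives another `Reconcile` call and the count climbs back to `Desired` — the self-healing behaviour mentioned earlier.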
So for example, if I send in a Deployment and specify that I want three instances of a particular image running — that's my desired state, right? By the time I send it in, it starts at zero, then increases to one, two, three, and then it stops. If any of the pods is deleted, or maybe goes down for whatever reason, the actual state and desired state are no longer in sync. What the controller does is make sure it returns the actual state of that particular resource to the desired state, and it keeps doing that in a loop. But running a tight loop to make sure everything matches could overwhelm the system — that's why Kubernetes opted for an event-driven architecture: when there is a change, an event is sent, and then the controller can take care of whatever is happening.

So we've talked about controllers. Now, custom controllers, which is what we want to talk about. Custom resources, like I mentioned, are just endpoints created by custom resource definitions — just like an API you can query objects through; they don't do anything more than that. Custom controllers are the logic that brings those custom resources, or the objects they create, to life — they become responsive, that's what I mean. And like I mentioned, Kubernetes uses a declarative programming model, which means you declare the desired state of the object you want to achieve, and that is all you do. You don't care about what's happening in the background; you know that you've declared this, and you expect Kubernetes to give you what you've declared, right? And custom controllers extend the functionality of the Kubernetes API, even for built-in resources. So say I have a pod, but I want to do more with it, or I want to change the way pods behave — the way I do that is I create a controller that handles that logic for me, right?
And that's what custom controllers do. They are typically used when a use case is not covered by the built-in controller manager for a built-in resource type. So — very important to take note there — they are used for use cases not covered by the built-in controller manager in our control plane for built-in resource types. By built-in resource types I mean things like pods, namespaces, deployments, and so on. A typical example of when we'd want a custom controller: let's say we have an application that uses a ConfigMap, maybe to send some environment variables into the containers we're creating. There's always the possibility that those values change. The typical manual process would be: if I need to make an update to my ConfigMap, I make the update with the key and value — but the pods running against that ConfigMap don't change, because the values were passed in at startup. For the pods to pick up the new values, I would have to go and stop the pods and restart them, right? That's the manual process. Instead, we can create a controller that watches the ConfigMap resource type — watches for changes to the ConfigMap values — and when the controller detects a change to a value, it automatically restarts the pods for us. That is one of the beauties of controllers, and that's why they've gained so much traction over the last three, four years: they're very, very reliable and very useful for particular use cases like this one. So that's what custom controllers are. In summary: custom controllers are written by you, they work with built-in resources — Deployments, ReplicaSets, and so on — as well as custom resources, and they don't exist in the kube-controller-manager.
What I mean is, like I mentioned, the kube-controller-manager only contains controllers for the default resource types, right? These custom controllers do not live in there. They are also independent of the cluster's life cycle — what that means is that I can deploy my controller somewhere else, and it just watches for changes to my Kubernetes cluster. So for example, if the entire cluster goes down, the controller keeps running; it just might not receive any events, and when you create your cluster again, it continues working. So it doesn't have to be deployed on your Kubernetes cluster. That's what this means: it can run as a completely separate application on its own.

So, operators. This is the next step up from controllers. Operators, majorly, are built on top of the Kubernetes controller abstraction to automate the entire life cycle of the application they manage. So we have Kubernetes resources, as I've mentioned, and controllers as a concept for built-in resources for specific use cases. Controllers are usually used for built-in resources, but operators majorly focus on custom objects and custom resources. Like we mentioned: we have custom resources, created by the custom resource definition, and custom objects created through them. Now take it a step back and say: okay, yes, controllers control the life cycle of built-in resources — what about our own custom resource types and the objects we create through those resource types? That's where operators come in. So that's what they do. They're pretty much almost the same thing as custom controllers, but they track and manage custom objects created by custom resource definitions.
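The ConfigMap-watching custom controller described a moment ago can also be sketched as a toy simulation in Go — again, no real Kubernetes client here; `ConfigMap`, `Controller`, and `HandleEvent` are all invented names standing in for a real watch handler:

```go
package main

import "fmt"

// ConfigMap models just a name and a version that bumps on every edit,
// loosely like the resourceVersion a real watch event carries.
type ConfigMap struct {
	Name    string
	Version int
}

// Controller remembers the last version it acted on, like a custom
// controller caching the state it has already reconciled.
type Controller struct {
	seenVersion int
	Restarts    int // how many times it "restarted" the dependent pods
}

// HandleEvent runs whenever a watch event for the ConfigMap fires.
// Only a genuine version change triggers a restart; duplicate events
// are ignored, which is what keeps the loop from thrashing.
func (c *Controller) HandleEvent(cm ConfigMap) {
	if cm.Version != c.seenVersion {
		c.seenVersion = cm.Version
		c.Restarts++
		fmt.Printf("config %s changed (v%d): restarting pods\n", cm.Name, cm.Version)
	}
}

func main() {
	ctrl := &Controller{}
	cm := ConfigMap{Name: "app-config", Version: 1}
	ctrl.HandleEvent(cm) // first observation counts as a change
	ctrl.HandleEvent(cm) // duplicate event: no action taken
	cm.Version = 2       // someone edits the ConfigMap
	ctrl.HandleEvent(cm) // change detected: pods restarted again
	fmt.Println("restarts:", ctrl.Restarts)
}
```

In a real controller the "restart" would be, say, patching a Deployment's pod template so Kubernetes rolls the pods; the shape of the logic — compare what you last saw with what the event says, act only on a difference — is the same.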
Operators are quite powerful in their use cases, and they help automate the operations of your Kubernetes application — more like an SRE tool, a software administrator for your Kubernetes application. Like the example I gave earlier: an SRE would have needed to go and stop that pod and restart it, right? But your operator can do that for you automatically, without you bothering or wondering what's going on. So this is the beauty of operators and why they are important, and why a lot of people have been getting into them. It's still a growing community, because it takes a lot of time to build an end-to-end operator. The major operators we have now usually come from major companies — cloud-native databases, for example, let's say Postgres and the rest — because it's quite developer-intensive and time-intensive to build the entire end-to-end process. But either way, we can always build our own mini operators for the things we need to do.

So for the last piece here, custom schedulers. Schedulers, majorly or basically, ensure that pods are assigned to the best possible available nodes. That's what they do. They don't do anything else; they're not complex — that is the definition. Like I said, there's an in-built scheduler called kube-scheduler, which handles most of the scenarios we would need a scheduler for. But there are use cases kube-scheduler can't handle, and that's when we probably need to write our own scheduler and attach it to our Kubernetes cluster, so that we can use that scheduler instead of the default kube-scheduler. So a typical example: maybe I have a Deployment or a StatefulSet, right?
And I want to control the order in which its pods are deployed across multiple zones. Let's assume I have a Kubernetes cluster with multiple nodes across, let's say, 10 or 15 availability zones, in multiple places, and I have worker nodes in all of those zones. I could write a scheduler that says: okay, this is the order in which I want these pods deployed to these nodes. So I can say: go to availability zone A first, then go to availability zone Z and create the pod there, then come back to B, then go to E, then F. That is a typical example of what a custom scheduler does. The use cases for this one are usually not as common or as visible as those for controllers or operators, but they can also be useful.

So for the final slide: Kubernetes has really become a major tool in the cloud-native space, right? A lot of people would say that when you say Kubernetes, you mean cloud native — cloud native is basically equal to Kubernetes and the other way around — because of its ease of use and its ability to scale workloads effectively across a wide range of use cases via its robust API. As the adoption of Kubernetes increases, there will be scenarios Kubernetes is not equipped to handle by default, and that's where extensibility comes into play: we can always extend it in whatever fashion we want.

I'll do a quick demo on custom resources, custom resource definitions, and controllers before I go. Let me share the right screen. All right. What I'm going to do here — I hope it's clear enough — is show how a custom resource is created through a custom resource definition, and we'll see how a controller is used. So this controller here is pretty much basic and simple.
What it does is create an NGINX deployment for us without us having to specify the NGINX image — it's very basic; it just shows how controllers actually work. Now, like I mentioned, there are a lot of resource types in a default Kubernetes cluster, so let me show you a couple. We can see a lot — I know you'll recognize some of these: you can see nodes, you can see pods, you can see secrets, service accounts — these are the default resource types in Kubernetes, right? Now we want to create a new resource type called NginxOperator. So let's do something real quick: let's see if we can find it among our resource types at the moment. You can see there's nothing like that, because it didn't return anything. Like I mentioned, for a custom resource, what we need is a custom resource definition. Let me look for it — yes. So this is a typical example of what we would have in a custom resource definition. And notice — a custom resource definition is actually a resource type itself, which is what makes it really beautiful: we're using an API endpoint to create other API endpoints, right? You can see it has a kind of CustomResourceDefinition, and it has the name — this is going to be the name of our custom resource type — and a couple of other things: the group it belongs to, the plural form, which is nginxoperators, the singular form, nginxoperator, and we're specifying that it's a namespaced resource type. There are two scopes for resource types: namespaced and cluster-wide. For cluster-wide resources, you don't need to specify a namespace to be able to access them.
But for a namespaced resource, you need to specify the namespace it's in — or it goes into the default namespace, or whatever default namespace is set on our Kubernetes cluster. It also shows the version and many other things — there's a lot going on here, so let me just mention one thing: the spec. If you've played a lot with Kubernetes YAML files, you'll be familiar with what a spec is, right? You can see the spec asks for things like port — the port of the NginxOperator — and replicas, the number of replicas you want it to run, just like in Deployments. Another thing to notice is the status subresource. It shows us, for example, if I say I want three replicas and I only have one — the status is pretty much where the actual state of that resource is recorded. If it doesn't align with the desired state, the controller gets activated. As you saw, we couldn't find anything like an NGINX custom resource here, so I'm going to add it now. There — I've applied my custom resource definition. Now, coming back here again, let me run the same command to check — and as you can see, we now have an NginxOperator resource type here with everything we need.

Now, what's the next thing we need to do? The next thing is to create our Kubernetes object, right? So let's go ahead and do that, since we already have our custom resource type. For our NGINX — no, not this one — yes, samples. As you can see, we have a kind of NginxOperator here with this name; I'm saying let's have two replicas and a port of 80. All right, let's do that: kubectl apply -f config/samples/... — yeah, this one. Now, what happens is we've created our resource object.
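For readers following along without the screen share, the two files just described would look roughly like the following. This is a hedged reconstruction, not the speaker's exact YAML — the group `example.com` and the metadata names are guesses; only the kind, plural/singular names, namespaced scope, `replicas`/`port` spec fields, status subresource, and the two-replica sample are taken from the demo:

```yaml
# Hypothetical reconstruction of the demo's custom resource definition.
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: nginxoperators.example.com   # must be <plural>.<group>
spec:
  group: example.com                 # group name is an assumption
  scope: Namespaced                  # namespaced, as mentioned in the demo
  names:
    kind: NginxOperator
    plural: nginxoperators
    singular: nginxoperator
  versions:
    - name: v1
      served: true
      storage: true
      subresources:
        status: {}                   # where the actual state is recorded
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                replicas: { type: integer }
                port: { type: integer }
---
# A sample object of the new type, like the one applied in the demo.
apiVersion: example.com/v1
kind: NginxOperator
metadata:
  name: nginxoperator-sample
spec:
  replicas: 2
  port: 80
```

Applying the first document teaches the API server the new endpoint; only then can the second document be accepted and stored in etcd as a custom object.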
So let's see it. Yes, we have our object here, and you can see it was created 15 seconds ago. But this object is supposed to do what? It's supposed to create two pods, just like Deployments work. So let's see if it creates pods for us or not. We can't see any pod being created, and the reason is that we do not have any controller behind it listening to activities of this resource type to know what to do. So like I said, it's just a placeholder: nothing happens; you're not seeing anything going on. If I do, for example, kubectl describe nginxoperator, you can see at the bottom that there are no events coming in. Nothing is going on. So how do we make what we have actually work? We need to create a controller and deploy our controller. For this demo I'm just going to run it — like I said, to illustrate the point that I don't need to deploy my controller on Kubernetes; I can just run it as an application. So for this one, we are using Go, right, to write our controller, and this is the logic here — very simple, very basic; this is all you need to run it. I won't spend time explaining everything it's doing, but what you need to know is that we are making use of the Deployment resource type to create our pods — there's this layer of our NginxOperator custom resource built on top of the Deployment resource just to be able to create our pods. Now, to run it, let me do this — make — and wait for it to start running. It's running now. As you can see, initially when we did kubectl get pods, nothing was running; now we can see our NginxOperator pods actually running. Let's test one more thing, to show that it's actually working the way it should: let's delete one of these pods.
So if I delete one of them — kubectl delete pod, let's just copy this and paste — wait for it to return... yes, one of the pods is gone. Let's see what happens. What that means is that it's going to change the state — our actual state. And because this is quite fast, it will probably be done by now, so we'll be able to see it — yes, it's already running again, created just seconds ago. If we do something like kubectl describe nginxoperator — what's the name of our operator again? — okay, it's not returning any events for now. But here's what we've done so far: we created our custom resource definition, it created our custom resource, and we were able to create our objects. But until we ran the controller, none of the things — none of the messages we sent in — was actually acted on. So this is a typical example of a use case — maybe not a perfect example, but it's close to a real-world example of how we can use controllers in our daily activities or in applications.

So what we've covered today is majorly Kubernetes, custom resources, custom resource definitions, custom controllers, operators, and custom schedulers. That's all from me. Thank you very much for having me.

Well, thank you so much, Dipo Ajayi, for that excellent section — that was a really informative one. If you were able to follow the demo, you must have caught up on that as well. Thank you so much for your presentation.