Hello everybody. Welcome to implementing microservices as Kubernetes operators. To get this out of the way early: the slides are going to be available, we've got a demonstration whose source code will be available as well, and we've got some QR codes and links at the end, so you're welcome to pick those up at the end of the deck. Before we get into it, I have a couple of questions. It would be interesting to see the number of people who raise their hands for these, given where we are in DevConf and how often I've heard Kubernetes come up in other talks. So who in here has used or heard of Kubernetes? Alright, a lot — that's expected. Know about controllers and operators? Alright, a few fewer hands. How about microservices? More, okay — you're actually in the right place. And then, who's written a controller or an operator? You're ready for work tomorrow. No, I'm not ready. So today, for those who maybe didn't raise your hand for understanding Kubernetes, we're going to go through a high level of what Kubernetes is, an understanding of operators and why we believe operators are microservices, and then finally we're going to end with a demo of creating an operator, building it, deploying it, and testing it here in the room, live. Can we do that? Yeah — well, the demo gods hopefully will allow us to do that. So, I'm Naveen Malik. I'm a senior principal site reliability engineer on the OpenShift SRE platform team. My background is software engineering and software architecture. Father of two little boys. I run a whole lot, and sometimes make things with an overly expensive 3D printer, like toys for my kids. I'm Lisa Seelye — it's a tricky name, so I'm not going to spell it. I come from a primarily systems background, with systems administration and software engineering in there too. I'm Canadian. I love my cat — can't wait to get home to visit him. I like all cats.
So if you have cat pictures, I want to see them. If you have cat pictures on an Arm architecture — I'm into Arm too — show me your cat pictures on your Arm clusters. That's me. So I don't have any cats right now; I do have a couple of dogs, which I hope is acceptable. The guy on the left here, Sprocket, is a Japanese Chin, which is the most cat-like dog we could find. Basically, he ignores us unless he wants something. Show of hands — is that okay, cat-like dogs? It's as close as we could get; my kids are allergic. So Kubernetes, if you're not familiar or haven't heard about it yet, quick overview. It's an open-source platform for managing your containerized workloads. It allows you to declare some configuration, and then the platform realizes that as the running state for you. It comes with a plethora of resources as part of the platform, along with software to manage those resources for you. And it allows you to extend the platform with customizations: you can define custom resources and then custom business logic around those resources, so you can add things into the platform itself to make it work for your business needs. Real quick, why we're here and why this talk is really exciting for both of us: on the SRE team, we manage all of Red Hat's OpenShift Dedicated offering. We manage a large number of OpenShift clusters — our flavor of Kubernetes — for our enterprise customers, and we do operators for all the things. I'll get into a little bit of the history of that later on. OpenShift is to Kubernetes as RHEL is to Linux, and our OpenShift Dedicated offering is a hosted service that we manage 24x7 with follow-the-sun support. I got paged in the middle of the keynote on Friday, as an example, for something blowing up. Quick plug: we are hiring. We have twelve reqs open as of the 15th of January. So if you're interested, stop by the Red Hat booth and ask, or come talk to Lisa or myself.
I'd love to hear about your interest, or if you know somebody who might be interested. So Lisa is going to walk us through how Kubernetes does its thing. Before we get too deep into operators and all that, I want to quickly review how Kubernetes applies our changes and how we tell Kubernetes to make changes for us. When we install Kubernetes or OpenShift, out of the box we get a whole bunch of resources for free. We use these to express our applications — storage, configuration, how they're scheduled, and all of that good stuff. One resource we get that is critical to this talk is called the custom resource definition. The name kind of gives away what it does, and we're going to be talking about it a lot for the rest of the talk. How we use these resources is we write YAML and feed it into the Kubernetes API with kubectl. We just learned how to say this: kubectl, kube-control, kube-cuddle — kube-cuddle is apparently the preferred way of saying it now, the new canonical, but all are acceptable, we're told. So we dump our YAML into the API server with that — but then what happens? As it happens, just declaring YAML doesn't do anything by itself, especially if, like me, you have tabs in the wrong places, or spaces in the wrong places, or tabs and spaces in the wrong places — and the API server cares about that. Controllers are the pieces of code, the pieces of logic, that watch the resources they care about, and we'll see what that looks like soon. A controller is the logic that takes "I want this" from YAML and makes the cluster actually have it. So the next step is to look at how operators do the same kind of thing. As I mentioned, we get the custom resource definition type out of the box with Kubernetes, but it's up to us to make new custom resource definitions with it, and we'll see what that looks like too.
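As a taste of that: a custom resource definition is itself just another YAML object fed to the API server. Here's a minimal sketch — all the names (`widgets.example.com`, `Widget`) are illustrative, not from the talk:

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  # must be <plural>.<group>
  name: widgets.example.com
spec:
  group: example.com
  scope: Namespaced
  names:
    kind: Widget
    plural: widgets
    singular: widget
  versions:
  - name: v1alpha1
    served: true
    storage: true
    schema:
      # minimal schema; a real CRD would describe its spec fields here
      openAPIV3Schema:
        type: object
```

Once this is applied, `kubectl get widgets` works just like any built-in resource type.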
And they follow the same pattern: we create them with YAML, and we write controllers — maybe we'll show you how to do that a little later — to turn them from YAML into something that makes sense for us inside the cluster. Now, let's have a closer look at how a controller makes changes. The life of a controller looks a lot like this. At the start it's just hanging out, waiting for changes. It's in a loop, keeping an eye on things. This particular controller I just picked out of the blue: the deployment controller. I like deployment controllers — pretty cool. Let's see what happens when something changes with a deployment object. When a change is made, the controller is notified of it, and then reconciles the cluster towards that new desired state. Our request here is to create a three-pod nginx deployment running version 1.7.9, and the controller is going to do all it can to make sure that happens. This is a pretty simplified example, but the full picture of how deployments work is on the kubernetes.io website. The YAML here is intentionally short, because the real thing is as long as my arm with all the things you need to do. But when we apply it, sometimes there's an error in it, right? In addition to problems with tabs and spaces, I also sometimes hit the wrong key, and it's up to the controller to handle that. Maybe the controller will spit out an error — if you've had an error in a deployment while creating Kubernetes stuff, it'll say CrashLoopBackOff, CrashLoopBackOff, CrashLoopBackOff. That's kind of the same thing here. It's up to us, the administrators, to go in and fix it and make a new object that the controller picks up, and then we get our pods. It's pretty neat. And then what happens next? The controller just sits around waiting for changes. So this is deployment. This comes out of the box; we don't have to write this. What does it look like for an operator, though?
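As an aside, the three-pod nginx deployment requested above is roughly this much YAML — a trimmed sketch; the full example lives in the kubernetes.io deployment docs:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  # desired state: three pods running nginx 1.7.9
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
```

The deployment controller's whole job is to notice when the cluster's running state drifts from this declaration and drive it back.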
Well, at the start, you have a controller that we write, which sits around watching the resources it cares about, waiting for changes. Then the code that we write in the controller realizes those changes and makes the cluster the way we want. And I think Naveen can tell us why someone would want to use this operator thing in the first place. Thanks, Lisa. Actually, I want to talk more about the microservices side of this, and why you might consider using an operator in that domain. Just a quick overview of microservices: it's an architecture that's been around for a long time. I come from Red Hat IT before I joined this team, and I've written a lot of services in that domain. Basically, you can call almost anything a microservice depending on how you want to spin it, but there are a few facets that are fairly consistent across any definition, and for the purposes of this discussion we'll talk about those. Microservices have a bounded context. That simply means you have a small business domain, or a very small use case, that you're trying to serve. You have to communicate over a network, because you have lots of these things: each does a very specific task, and does it really well, but that's usually not valuable on its own in the context of the business problem you're trying to solve, so you need a lot of them talking to each other. And because you have a lot of these — and maybe you didn't write all of them, maybe you bought some, or other teams own them — they communicate with technology-agnostic protocols. Is that like HTTP? Exactly — HTTP; a RESTful service is an example. And then you organize these things around some business capability, a thing you need to solve in your business domain, and that thing becomes an independently deployable component. You may want some subset of services running in a Kubernetes cluster.
Some may need to run in your data center on bare metal, depending on the workloads. You need flexibility, and they need to be independent. And usually there are other things you're concerned with. Security — there are a couple of facets here. One: because you have this small bounded context, the attack surface, the potential for abuse, is much smaller, because you're dealing with a much smaller domain. Additionally, the microservices I've developed, and what we do in our day to day, need something to keep unwanted users from getting in and using the service. And other things I won't enumerate in full: availability, scalability — you want these things to be online, and you want to be able to grow them if your service is successful. So how do operators meet these needs? Well, as Lisa mentioned, with custom resource definitions you can define a very small context; you have a very narrow scope. You get the platform's network to communicate over, and you get the platform's API — Kubernetes provides you an API to interact with these resources. You can have controllers that are focused on a specific resource, be it a stock resource or a custom resource. Is that like the deployment controller only caring about deployments? Exactly — the deployment controller had better care only about the deployment resource. You might have a different controller for dealing with secrets, something else that deals with pods, and so on. And you can bundle these things together into a unit and deploy them as your operator. So, like, multiple custom controllers in one thing? Right — if your business domain requires you to deal with multiple resources in the same bounded context, you can bundle those controllers into a single operator and deploy them together. And then Kubernetes brings to the table — just a few things for this talk — authentication and authorization. kubernetes.io, everybody: lots of information there.
With authentication and authorization, you get a lot of features from the platform around users' and service accounts' ability to access resources within the cluster, as well as what you're authorized to do: who can get access, and what they're allowed to do with a specific resource. The platform can be deployed in a very highly available manner, which allows your deployments to be highly available, and it's built with a scheduler that enables you to scale your workloads out across the platform. Operators are microservices. Cool. But they can do a lot more. So I want to talk a bit about configuration management, and tell a little bit of a story about how we got to where we are and why we are so excited about the operator space. Our team has been using OpenShift since the early days, version 1. Most recently, on version 3, we had a centralized configuration management system. We went with the Ansible-based solution that was provided with OpenShift in order to configure all the OpenShift Dedicated clusters. This works great when you have a single cluster — easy to manage. Add a couple more: it's growing, it's scaling, it's keeping up, not a problem. But when we start to get into large numbers of clusters, it becomes pretty complicated. We're configuring a lot of things; it becomes hard to onboard new members to the team; it becomes difficult to understand when the change I've merged is actually going to run. This thing — we called it the config loop — could take 12 hours to execute. Where is it in that execution? When is the cluster that needs this hotfix going to have that configuration applied? Should I kill the existing config-loop process and start a new one? All these kinds of questions complicate the landscape a lot. And then failures. Failures can now cascade into other places: a failure in configuring cluster 2 could potentially ripple out, causing cluster 3 or other clusters not to update. So it's a problem.
One light goes out, they all go out. Hopefully not quite all the way. Where we are today is a microservices-based approach, with operators implementing a distributed configuration management platform. We still have something in the middle that needs to apply some desired state to a cluster — but it's expressing that desired state, not configuring the running state. We have a controller, or a series of controllers, to do that. And as we scale this out, it works really well, because we're dealing with that one slice of the YAML on the left-hand side here: just defining what we want the cluster to look like. If something fails, we have isolation. There may be a failure in a single cluster; we can deal with that through metrics that bubble up into an alert, which might page the correct SRE team to respond. And it's a lot easier to understand — much easier to onboard new team members. So microservices — or operators — are allowing us to move towards operations as code. We treat all of our operators like any other software engineering project. Actually, if you're interested, they're out on GitHub in the OpenShift organization; come talk to us later if you'd like to take a look at them. It makes it a lot easier to onboard new members, we understand what's happening a lot better, it's easier to manage the platform, and so on. So I'll hand over to Lisa, who is going to walk through some examples. That's right. In the keynote on Friday we saw a couple of the operators we're using; we're going to take a look at some other ones. But I'm going to look at them from kind of an operations, task-oriented point of view: we need to do these things. The things we're going to look at are installing and configuring software, provisioning cloud credentials, and provisioning TLS certificates inside a cluster for intra-cluster communication. The first thing we're going to look at is installing and configuring software.
We're going to use the cluster monitoring operator for that. Now, in the old days, when we're installing software and don't have operators, we have to install it by hand and configure it by hand — all of the pieces that make up our monitoring stack. And since we're in Kubernetes, and deployment-type objects don't have a concept of watching the config maps where we're storing our configuration, we have to write the glue to tell the deployment — tell Kubernetes — to restart the pods with the new configuration every time we change something. So for every new rule that we write for Prometheus, we need to do that restart by hand somehow, or write the glue for it. So let's see what the operator gives us. The cluster monitoring operator, first and foremost, is installing these things for us. That's the easy part. The secret sauce really comes with the Prometheus operator, which comes along for the ride with the CMO, the cluster monitoring operator. The Prometheus operator in turn gives us a couple of custom resources — specifically PrometheusRule — and ways to configure Alertmanager and Prometheus itself. This is great, because now we can interact with these Kubernetes objects and let the operator worry about how it — not we — turns them into configuration files for the services, and then restarts those services as it needs to. Let's see what that looks like. We're going to walk through a PrometheusRule a little bit at a time, because it's a little overwhelming. The operator has a controller that is watching for PrometheusRule-type objects — kind: PrometheusRule. This looks like almost every other Kubernetes object out there: we have the kind, we have metadata, we have a name, namespace, all that good stuff. Labels, even. Then let's look at the spec. The spec is different — this part is specific to PrometheusRule.
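Here's a minimal sketch of a PrometheusRule like the one being walked through — the alert name is from the talk; the expression, labels, and other values are illustrative:

```yaml
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: sre-node-rules
  namespace: openshift-monitoring
  labels:
    role: alert-rules
spec:
  groups:
  - name: sre-node-rules
    rules:
    - alert: KubeNodeUnschedulableSRE
      # kube-state-metrics exposes this per-node gauge
      expr: kube_node_spec_unschedulable == 1
      for: 15m
      labels:
        severity: warning
      annotations:
        message: "Node {{ $labels.node }} has been unschedulable for 15 minutes."
```

The Prometheus operator renders objects like this into the Prometheus rule files and reloads Prometheus for you.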
We have groups, which tie into Prometheus rule groups as the object gets rendered out into the Prometheus config file. We have rules — a number of rules inside that group — and we have our alert, KubeNodeUnschedulableSRE. Guess what it alerts on? A kube node being unschedulable. That's easy. When we create this object in the cluster, the operator's controller will render it into config, and that's it. When we're done with it, we delete the object in Kubernetes, the operator removes it from the config file, restarts Prometheus, and we're good to go. That's great — so much easier for us. The next thing we have to do all the time is provision cloud credentials. We do this with the cloud credential operator. What's the workflow we're most familiar with when we need credentials for a service? Well, you need to figure out who in the organization should get that request, and then ask them for it. They go off and make sure you're even allowed to have those permissions, then they grant your request — I hope. Now you have these credentials, and you have to figure out how to store them securely. You just put them in git, right? No, you don't want to do that — so you have to figure out secrets management too. That's easy too, right? Then you have to get the credentials securely into your cluster and keep them secure there, but that's doable, so that's cool. And then finally you have to figure out how to scale that across every cluster, all of your users, and all the cloud providers. So how do we do this with an operator? The operator introduces a new resource called CredentialsRequest. Similar to the PrometheusRule, this abstracts the notion of requesting cloud credentials in a pretty focused way. The workflow is now: the user creates a CredentialsRequest object, the operator is watching for it, and with that request the operator talks to the cloud provider — Amazon in this case — and stores credentials in a secret.
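A CredentialsRequest along the lines of the one described looks roughly like this — assuming the AWS provider; the names and namespaces here are illustrative:

```yaml
apiVersion: cloudcredential.openshift.io/v1
kind: CredentialsRequest
metadata:
  name: my-component
  namespace: openshift-cloud-credential-operator
spec:
  # where the operator should store the minted credentials
  secretRef:
    name: my-component-credentials
    namespace: my-namespace
  providerSpec:
    apiVersion: cloudcredential.openshift.io/v1
    kind: AWSProviderSpec
    # the permissions being requested
    statementEntries:
    - effect: Allow
      action:
      - ec2:DescribeInstances
      resource: "*"
```

The cloud credential operator sees this, talks to AWS, and writes the resulting access keys into the referenced secret.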
Cool — I don't have to talk to anyone. You just need to make sure I have permission to create this CredentialsRequest object. Okay, that caveat needs saying out loud — yeah, and that the user can access the secret. But what does it actually look like? Let's look at a CredentialsRequest. Just like before, we have the standard Kubernetes-type stuff. We have our spec, with where we want to store the result — in this case, in the aptly named credentials secret. The operator sees what permissions we want; in this case, we want EC2 DescribeInstances on all of our instances. Pretty reasonable thing to be doing. And the output: the operator creates a secret for us containing the access keys. Good luck decrypting these — it's totally secure, I promise. And that's it: we've freed up human beings to do creative work. Since this is in all of our clusters, we use the authorization features mentioned earlier to control who can actually request this. Which is great — humans can focus on creative stuff. Next, we're going to talk about managing certificates, so you can talk to a Kubernetes service with encryption. We do this with the service CA operator. This is different from what we learned about on Friday, by the way. How do we do it without an operator? This one is actually somewhat easy. You connect to the cluster and create a CSR — a certificate signing request. Then you still connect to that cluster and approve it with kubectl — or kube-cuddle, whatever you've got — approving the certificate signing request. Next, the credentials associated with that CSR have to be kept safe. Not too bad, not too bad. But then you have to figure out how to scale it over all of your clusters. Who has access to all of the clusters in your environment at any given time to do this for someone? How do you get their credentials? How do you keep those safe? So that's still another problem.
All right, operator, help us. This one is actually a different approach: there are no new resources with it. All it's doing is looking at annotations on native objects — Service is a native object. This operator is looking for one long annotation, which I'm not going to read out in full; it's kind of a tongue twister. The controller from this operator will see the annotation, generate the certificate and its private key, stuff them into a secret, and then also dump the CA bundle into a config map that carries this other annotation. Now your deployment can mount those, and your gunicorn process can serve TLS — secure, signed by the cluster — and you can connect to it. Which is important if you're doing webhooks. Ask me how I know; talk to me after this talk. So that means our operators increase our velocity, our scalability, and the availability of all of these tasks we need to do, because Kubernetes knows how to scale them and keep them running. But what's next? I mean, can you show us how to make one of these? Sure, I'd love to. So I'm going to walk through a bit of a contrived example, as I'm very clearly noting up here. We're going to create an operator called pod-operator. But why — what's the purpose of showing everybody here how to create an operator? I want to make sure we leave with an understanding that it's really easy to get started with creating a Golang operator, and a sense of how to create a custom resource definition and a custom controller. And given the time we have, we can pack it in quite easily; it's fairly simple, as you'll see. Before I write anything, though, I want to know: what is it supposed to do? Even for a demo. There are three key things I want this operator to do for us. First, when I create a custom resource, I want it to create a pod. Second, I want it to ensure that pod exists over time — so if that pod happens to go away, bring it back, please.
And third, clean up: if my custom resource is deleted, I want any of the dependent resources — the pod, in this example — to be deleted as well. I'm going to use a couple of tools, and I won't get into all the details; you may have heard of some of these in other talks today, but a quick summary. Operator SDK: a software development toolkit for building Kubernetes applications. Podman: a daemon-less container engine management tool — we can basically do a podman push to get my image available to my cluster. And then kubectl, for managing the Kubernetes cluster. Before we dive into a terminal, which I'm going to do, I'll walk through the steps we're going to take and give you some context without all the fluff of stuff actually happening. We're going to use the Operator SDK to create a new operator — I'm going to call it pod-operator; you can see it on the right-hand side there. We'll use the SDK to add an API, which is our custom resource definition. Note that the specification — the spec, the details of the desired state — is going to be empty initially. We'll create a controller, because the custom resource definition is not that useful unless we have some code reacting to events around those resources. This controller gives us a lot out of the box. At this point I've not edited a single line of code, and it knows to watch my custom resource. It's a contrived example, so by default it will also watch pods. It assigns ownership to dependent resources that get created, which I'll go into in more detail later, and it has the basic framework for creating a pod in your cluster. Well, that's nice, but it's not all that I want. So we're going to go in and edit the code: we'll customize the custom resource definition to add three new fields — name, image, and command — so we can change what it is we deploy with our operator.
Then we'll edit the controller to actually use those fields. We'll use the SDK to build an image and push it out to quay.io, and finally use kubectl to deploy the operator into a cluster. We'll wrap up with actual testing — great, it's deployed, but I'm going to show it actually working. We'll create the custom resource, called a pod request, and show that a pod gets created as a side effect, because we'll have that controller code deployed in our cluster. We'll show that when we delete the pod that was created by the controller, it comes back — that's our second requirement. And for our third requirement, when we delete the custom resource, we'll show that yes, it actually does go and clean up our pod. All right, demo time. What could possibly go wrong? Fingers crossed for the demo gods. Okay — I'm in the wrong terminal. Real quick: the top portion is where I'll be running commands. The bottom left is the logs for our operator — as you can see, it's waiting; we haven't deployed anything. On the bottom right, we're watching for pods, to show when the operator itself and the pod we create through our controller are deployed, and we're watching for our custom resource, called pod request — or the plural in this case, podrequests. As you can see right now, the cluster doesn't know about that; the resource doesn't exist yet. So that's a good starting point — everything's cleaned up from before. So, I'm creating the operator now. It's pretty quick: operator-sdk new, plus the name of the operator. Now we have a new operator called pod-operator — it's that quick. Next we're going to do probably the slowest step, other than pushing out to Quay: creating a new custom resource definition. So we're adding an API for pod request.
What this is doing is creating the — JSON, sorry, not JSON — the YAML definition on the file system, and then generating the Go code for it as well. Next, create the controller: we're watching pod requests with our controller. That one's super fast; I like that one. And now, let's edit some code. Let's see which window it pops up in. All right. I've got two files open right now. The first one here is the controller. I'm not going to walk through all of it, just a few relevant pieces. In this add function, the first piece I want to show is the watching. Remember, I haven't changed anything — this is just what the SDK provides out of the box. I told it to create a controller for the pod request resource, and it did, and it's watching the pod request resource for me. So anytime there's an event — any event — for a pod request, it's going to hit what's called the reconcile loop, which is the next function I'll look at. Just keep that in mind: for pod requests, events cause reconciliation to happen, trying to move from the desired state, reconciling to the running state. The next piece is the pod resource. We're also watching pods — but a pod event doesn't enqueue a pod. The controller looks at who owns this pod, who created it; if it's owned by this controller, it enqueues a reconciliation for the referenced pod request. So a pod event causes reconciliation for the pod request. We're always dealing with pod requests — we don't care about pods as events; we care about the pod request, which is what we told our controller to handle. Now, going down a bit: our reconcile loop. In Lisa's diagrams we had a controller block and a loop — this is that part, handling pod request events. It periodically wakes up as well, even if there hasn't been an event, like: oh, is there something I should have known about, did I miss something? And then it runs through this.
It looks for the pod request instance; if it doesn't exist, it bails. Assuming we do have a pod request custom resource, it builds the pod in memory — it hasn't been saved into the cluster yet — and sets an owner reference on that pod pointing to the pod request. This is how we get the relationship between the pod and our custom resource, for enqueuing pod request events when pods change. Then we look for the pod: does it exist? If not, we go and create it. If it already existed, we just skip and log it — hey, you know, it already existed, nothing to do here. Moving on. Oh, where did my cursor go? There we go. The last bit is the creation of the pod. We have a pod request, and we simply define, within Go, the pod that we want to have in our cluster. We have the metadata — the name, the namespace; it sets some labels by default — and then we have the specification for the pod: what container are we going to run? This one's hard-coded; it's what came with the Operator SDK out of the box. It gives us a name of busybox, an image of busybox, and a command to sleep for 3,600 seconds. We want to change that. So let's go look at our definition. This is the struct in Go. Don't worry too much about that — there's a lot of good information online, and a lot of good documentation links in the generated code that'll get you where you need to be to make modifications here. Just quickly, at a high level, there are three main chunks here. The metadata, where you define the name of your resource, what namespace you want to deploy it in, labels — those kinds of standard things for resources. Then you have a spec and a status. We're going to change the spec, because we want to change what our desired state is and define some additional fields.
Status is really powerful when you're writing your own custom resources and controllers, because you can reflect state about the custom resource back to whoever might be interested in it. You could say it's provisioning something, or it failed to create a PV, or whatever — you can capture information about what's going on with that resource in status. We're going to ignore it today and go into the spec. You see there are lots of helpful comments: edit this file, insert additional spec fields. So we're going to insert some additional spec fields here. As I said earlier, we want three things. We want a name, which is a string, and I have to tell it what we want to call that in the JSON. Oh — not my name. That's muscle memory; I've never done that running through this. We want an image, which is also a string — call that image in the JSON, or YAML — and a command, which is an array. Save that. We have now customized our custom resource definition. It's that easy. You're done! Almost. Well, yes — almost done. We're not using it yet, so it's not actually useful. We come back to our controller and tell it to do something with that data. In here, we want to use the name, image, and command that come in on our custom resource, the pod request — they come in on the spec. So we use the name field from the spec, the image from the CR spec, the command — and then I'm going to make one more change, because this annoys me: we know this is a pod, I don't need the name to say that it is a pod, so I'm taking that out. It's redundant. Save that — and I didn't flub anything, so it's good. And we're going to go build it. We're done: we've customized our CRD, our custom resource definition, we've customized our controller to use it, and we're ready to build and deploy. So, next step: operator-sdk build. Ready — go. This takes the Go code that we've modified, generates an image for us locally, and it's ready.
And I'm now going to push this out to Quay. Sort of. I don't trust the Wi-Fi here, so I'm actually going to force it. Wow, it actually finished. Hardwired. I have a timeout after 15 seconds, so — yep. And, assuming things wouldn't work so well, I'm going to use a tag called demo. It's exactly the same code that I just presented; it's just kind of a failsafe.

We're going to now deploy this into a running cluster. I'm actually using an OpenShift Dedicated cluster. It's running OpenShift version 4.2.13, I think. Yeah, I don't know — the latest and greatest. And we're going to apply our operator and the custom resource. So what we should see in the bottom right is the pod for our operator come online, as well as the cluster recognizing the pod request resource. And then once the pod is online, we'll see some logs for the operator spinning up. It should be fairly fast; I think the time is like 12 seconds. Let's see if I'm right. There's the operator. We've got the pod. Yep. Container creating. We've got some logs. It's running. We're ready to go. We see "no resources found", meaning that our custom resource definition is in place but no instances of it exist yet.

As you can see at the top now, we've got the YAML for one of our custom resources that we want to deploy. It's a pod request. We're keeping the name the same as the defaults — we're naming the pod busybox — and we're going to create the pod. But we're going to do something different: instead of sleeping, we're just going to have it loop indefinitely and echo hello every five seconds. So most of the defaults were pretty good. Well, we could have changed it. Could have used, like, the hello world image. We chose not to.

And so, creating the pod request, we can see it's been created. We've got our pod called busybox already spinning up. Because it's not a loop that this controller is actually running — it's waiting for events to be generated and sent to the controller.
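A pod request manifest like the one being applied might look something like this. The API group and version here are assumptions (they come from whatever you chose when scaffolding with the operator SDK), as is the exact shell loop:

```yaml
apiVersion: app.example.com/v1alpha1   # group/version assumed; use your operator's own
kind: PodRequest
metadata:
  name: example-podrequest
  namespace: default
spec:
  name: busybox
  image: busybox
  command:
    - sh
    - -c
    - "while true; do echo hello; sleep 5; done"
```

Applying this with kubectl or oc is what fires the event that the controller reconciles into a running busybox pod.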
It's pretty amazing. And the custom resource is up there too. Yep, we've got our custom resource. We can see in the logs we've got a request to reconcile that pod request. It realized, hey, I don't have this pod called busybox that they care about. And then we've got a couple of other events happening as state changes within the resources in the cluster. As the pod moved from container creating to running, all of those are events that bubble up into our controller as pod request events.

So is the pod doing what we want it to do? Yes. The logs are showing a lot of "hello DevConf.CZ 2020". So it's working. And now I'm going to kill the pod. I'm killing busybox. And it's terminating. We saw an event at the bottom, right? I'm going to give us a little white space so we can check it out. So we've got an event for the pod entering a terminating state. It's just coincidence. Yeah.

One of the really cool things about this: the only code that we wrote around the functionality we're showing right here is the reference from the pod to the owning pod request. That is the only thing that we have to set. And because of that relationship, when we get the event for the... Are you creating a new pod? No, you just got lucky. I don't know. It says terminating still. That is weird. Is it updating? Okay, creating a new pod. Okay, good. Because of the relationship, we got an event for the pod request. The controller went and searched for the pod and found: hey, my pod isn't there. I expect this to be here. My desired state is not my running state. I'm going to change that. And it created the pod busybox — with one line of code, just by creating that relationship.

Another benefit: we are going to delete the custom resource that we created, and because of that relationship, it's going to also delete the related pod. So it works both ways. So our custom resource is gone. We got some events for the pod request.
And we can see our pod is already terminating in the bottom right. It is gone. And we are done with the demo portion. It actually worked. Not bad. Make sure I click on the right thing here so the clicker thing works. If I can find how to move back over. All right.

So that's great. We have an operator. One other thing I want to call out is how you can distribute these operators. What we did here — the first thing you see on the slide — is just the raw manifests: kubectl create, apply, replace, whatever state you are in, in the lifecycle of your resource. You can deploy them straight from GitHub. You can also use Operator Lifecycle Manager, OLM, which is what we use within our team for deploying our operators. It's a very powerful operator itself for managing the lifecycle of operators — the deployment, upgrades, and deletion. It's really great. OperatorHub.io is a great resource for making your Kubernetes operators available — generally it's part of the OpenShift platform as well — and it's also good for discovering operators. Go out there and see what's out there; see what you might want to pick up. Why write it when it already exists for you? And then there's a plethora of other things that I'm not going to try to enumerate here. And I think I made that point. Back to Lisa. Thank you.

All right, let's bring it on home, because it's almost time to wrap up. So, like their human counterparts, operators are here to do work. We think that the operator pattern is a really good way to abstract all of the work that we do a lot and let computers do it through the API, so that humans can work on creative problems. We try to model all of our work into custom resources and then write controllers to handle changes and events on them. And because we're able to do that effectively, a lot of our work is now automated in a way that scales, is redundant, and is available. So go ahead and start writing your own operators.
You don't need to be limited to the operations types of things that we showed. You can do whatever you want; the sky is the limit. It is really impossible to enumerate all of them, because everyone's use case is different, and that's the beauty of operators: you can tailor them to your specific needs. And with that, if there are any questions, we have time.

Yes. These operators look really cool. I'm a Kubernetes newbie, but it looks like it saves so much work. I would like to ask how operators work with GitOps. I really like the appeal of GitOps — that Git is the single source of truth, and all the state of my cluster, or maybe even clusters, is described in a single repository. But it seems like this is in conflict with operators that automatically do stuff in the background and change things how they see fit. So is it a conflict? How does this play together?

We use the GitOps model in our team. Everything we do is in Git — not necessarily public Git repositories, but we drive all of our work through Git. So I think they can work in concert. One of the things to remember is that these operators are deployed as resources in the cluster themselves, using the standard resources that the platform provides. And once you've deployed them, and once you've also deployed your custom resource definitions, again, you're back to just resources in the cluster. So the processes and GitOps flows for how you manage things are a perfect fit for both the deployment and the visualization of your operators. It's a good complement.

And if I may, what is the tool you use for GitOps? Is it ArgoCD or Flux or something different? We use something... Did you sit in on the keynote on Friday? Friday morning? Okay. So part of the keynote was Day in the Life... Day in the Life of an SRE. As I mentioned, I got paged, by the way. So we manage the platform, and on top of us is another layer called AppSRE — the application SRE team — that is deployed on top of it.
The AppSRE team for us manages the workflow to get from Git into your production clusters, into staging. So they have — I think it's based on Jenkins. Yes, I think it's ultimately Jenkins under the hood. That may be changing. Could be changing. But it's mostly Jenkins. But I think anything that works with Git would work. Yeah, there are all sorts of options. I don't think there's any one tool that would be best. It's what is going to be a good fit for your organization: what do you know, or what do others in your team know, and what are you willing to adopt broadly? Yeah. We use a combination of OLM, which we mentioned, Jenkins, and some homemade stuff with Makefiles. Yeah.

Any other questions? So did the QR code scan in the back? Are we pretty sweet? It did? Woo! Yes?

Do you have any good examples of operators that manage things outside of the cluster? Outside of the cluster. So the operator runs in the cluster, but whatever it manages is completely outside of the cluster. Yes. We have several operators that we run in a management cluster to administer external services like PagerDuty and Dead Man's Snitch, or interacting with Let's Encrypt for generation of certificates. Well, that interacts kind of with the cluster, because it makes secrets and stuff. Yeah, and those ultimately... We've segmented those out from our OpenShift Dedicated clusters, because we don't want those management credentials in a cluster that our customers can access, and they get pushed in using a tool called Hive. I don't know where that is from a product point of view, but it's on GitHub under the OpenShift organization — the Hive project.

Okay. I have a question regarding microservices. So, I don't know much about it. Well, I was in a talk earlier about a tool that helps you define your business process for your Java applications and convert it into microservices.
So, since we are talking about "operators are microservices", I'm just curious whether this is something that can help you design the business process — I don't know the whole architecture of microservices. Business process definition is definitely a whole different domain of problems. One thing I didn't mention: our hypothesis — theory, assertion — is that operators are microservices, but it doesn't mean that microservices are operators. I don't think that everything could be defined as an operator, and that's probably a domain where it's much more difficult to fit into the operator model. And we are out of time. Thank you, everybody.