Alright. Hello, thank you all for coming. Welcome to this talk. My name is Michael Hrivnak. I've been at Red Hat for almost seven years. This is my second time at FOSDEM, and I love this conference. Super happy to be back. I worked for a long time on the Pulp team, which we actually heard a little bit about earlier this afternoon. I got involved in the container tooling at Red Hat from the very early days, when Docker started to become a thing there. That sort of snowballed; I got involved in container orchestration, and now here I am working on using Ansible in particular to automate the workload side of Kubernetes. So let's dig into that. So what is Kubernetes? The flip side of that question, for this crowd, is: what is the role of config management in a world that has Kubernetes? This is the experience of interacting with Kubernetes. I'm assuming most of you have some idea of what Kubernetes is, but in the shortest recap, it's a system that takes some group of machines, turns them into a cluster, and enables you to schedule containerized workloads into that cluster, plus a bunch of extra stuff around that. But it's declarative, and we generally interact with it using YAML. So here we have the most basic example: I have a container image called companyname/example. I was feeling creative that day. I want to run that container in my cluster, and on the right we have a service. This is a Kubernetes primitive that gives us a network presence for a running container, or a set of running containers that are all one and the same. What's worth pointing out here is, one, we're interacting with Kubernetes by writing YAML, either by hand or using some kind of tooling; maybe something like Helm is familiar to many of you. We're creating YAML and stuffing this YAML into the Kubernetes API, and it's declarative.
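To make that concrete, a pod and service pairing like the one on those slides might look roughly like this. The names, labels, and ports here are hypothetical stand-ins, since the actual slide contents aren't reproduced in this transcript:

```yaml
# A minimal Pod running a single container (image name from the talk,
# everything else is an illustrative guess)
apiVersion: v1
kind: Pod
metadata:
  name: example
  labels:
    app: example
spec:
  containers:
    - name: example
      image: companyname/example
---
# A Service giving that Pod a stable network presence,
# selecting it by the app label above
apiVersion: v1
kind: Service
metadata:
  name: example
spec:
  selector:
    app: example
  ports:
    - port: 80
      targetPort: 8080
```

You push both of these into the API, declare what you want, and the cluster does the rest.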
What that means is, in this case, we stuff it into the API, we stand back, and we allow the cluster to do whatever it thinks is necessary to make what we've asked for true. So in this case, the cluster would see: there's a pod, and it is requesting this container to be running. Let me go look at the world. I don't see that one running, so now I'm going to start it and do whatever else I need to do to make that happen. So now let's talk about Ansible, how it intersects with Kubernetes, and how that actually makes Ansible a really natural fit. Ansible has a Kubernetes module, and it's really a wonderful module. If you have ever used Ansible pre-2.6 to interact with Kubernetes, I strongly encourage and invite you to take another look at this new k8s module. It is light years better than what we had before. It's simple, it's elegant, and it doesn't get in your way. And here's the example to prove it. On the left, we have a resource called a config map. It doesn't really matter what the resource does or why it exists; just trust me, it's a Kubernetes resource that you can create just by writing this little bit of YAML and pushing it into the API. On the right, we're creating the same thing. And to take one step back, the experience we might have with Kubernetes is to run the command line tool kubectl. I could run kubectl create -f, give it a path to this little YAML file, and it would create this thing for me. Instead, we could create this simple Ansible task. You see the red part is exactly the same as the part on the left, the only difference being that I've taken the liberty of templatizing it, because now we have all of Ansible's templating ability at our disposal. So right here, we can already see that the k8s module can be a very nice gateway to having a powerful and rich templating experience when interacting with Kubernetes.
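A task along those lines, using the k8s module with an inlined, templated manifest, might look something like this sketch. The module name is real as of Ansible 2.6; the variable names and the ConfigMap contents are made up for illustration:

```yaml
# Hypothetical Ansible task: the definition is the same YAML you would
# feed to kubectl create -f, but with Jinja templating available
- name: Create example config map
  k8s:
    state: present
    definition:
      apiVersion: v1
      kind: ConfigMap
      metadata:
        name: "{{ app_name }}-config"
        namespace: "{{ app_namespace }}"
      data:
        greeting: "{{ greeting | default('hello') }}"
```

Rerunning the task is safe: with state present, the module only makes changes when the desired definition differs from what the cluster already has.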
Now, if you don't want to inline your Kubernetes manifest in quite that way, here's another option. You can create manifest files, which is how people normally store these manifests, put them in the templates directory of your role, and access them like this. So even for somebody who's never used Ansible before, but might be interested in learning it, or is just looking for a good way to manage what's going on in their Kubernetes cluster, this makes it very accessible. Even if they just trust somebody who says, take these four lines of text and put your Kubernetes manifest in that file in your templates directory, they can really get a lot done and go a long way. So this is the guts of an Ansible role. Most of you are probably familiar with this, I imagine, but here I want to highlight that a role is for packaging related Ansible code together, of course. Our goal in using Ansible to interact with Kubernetes is to create a single role that knows how to deploy a single application. So maybe we take an application like WordPress or MediaWiki. We would make a role that knows how to interact with Kubernetes to deploy that application, and then maybe even interact with the application itself after it's running inside the cluster. Once you buy into that idea of making a role that does that, then we're going to do some extra, very interesting things toward the end of this talk to enable self-service provisioning, reconciliation, and continuous management. And here in yellow, I've highlighted the two things that a newbie to Ansible needs to know about and worry about in their brand new role. In our case, we have some tooling that we're going to look at in a minute that scaffolds this all out for you, including some other pieces. But even if you just use the Ansible Galaxy tool to create a brand new role, you get this directory structure for free.
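Those "four lines of text" would look roughly like this, assuming a template file sitting in the role's templates directory; the file name configmap.yml.j2 is a hypothetical example:

```yaml
# Hypothetical task: render templates/configmap.yml.j2 with Jinja
# and apply the resulting manifest to the cluster
- name: Create config map from template
  k8s:
    state: present
    definition: "{{ lookup('template', 'configmap.yml.j2') }}"
```

Everything about the resource lives in the template file, so someone new to Ansible only ever touches the template and this one task.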
All you have to do is worry about the templates directory. You can put some templates in there and then reference them from your main.yml file. So it's really very simple. All right, so why use Ansible with Kubernetes? We have this similar pattern: it's not just that Ansible is YAML and Kubernetes is YAML. It's also that we talk about Ansible as being idempotent. You want to be able to rerun the same Ansible role or playbook over and over and get the same results at the end. Likewise, in Kubernetes, we have controllers that want to be able to run a reconcile function over and over again and always end up at the same state, or at least always be moving toward the same end state. So these are very natural, similar patterns to bring together. A lot of people are already familiar with Ansible; I'm betting this room is no stranger to it. Even if you're not, it's really easy to learn. Jinja templating is something a lot of people are familiar with, even outside of Ansible. And lastly, for these reasons, we get quite rich day two management out of Ansible. It's much more than just a templating engine. You can use it after you've deployed your application to do advanced things, like backing it up, restoring it, or upgrading from one version to the next, with whatever detailed steps or careful work might need to be done to facilitate an upgrade in some cases. You can repair things when they're broken. And you can scale things in custom ways, based on metrics you identify yourself. Maybe it's a queue depth somewhere. Somebody was telling me about their use case a few weeks ago where they're trying to measure the queue depth of a microservice that's multiple services upstream from what they're actually trying to scale out. They're trying to get advance warning when there's a flood of work coming in upstream somewhere and scale out the services at the bottom, so that they're prepared when the flood arrives.
So all of that is stuff you can do using Ansible. All right, so now that we've bought into this idea, we've got an Ansible role, and this Ansible role can deploy and manage an application in Kubernetes. We have some extra tooling available, two different patterns in particular that we're going to look at right now, for taking that role and doing more with it. The first pattern is Ansible Playbook Bundles. If you think about provisioning an application just in general, forget about Ansible, forget about Kubernetes, however you're going to do it, whatever tooling you're going to use, these are the kinds of things you need to have in front of you. You need the Kubernetes manifest files. You need to know about any external services you're going to access and how to access them. Maybe you have some config data specific to this instance you're provisioning. Maybe you have some seed data you need to get in there. Maybe you are actually restoring from a backup. You need some runtime tooling. What technology do we know of that we could use to package all of these things, or at least most of them, into one place and move them around in an immutable form that's testable and all that? Of course it's a container. Packaging, by the way, I think is the underrated side of containers. The fact that it's a process running in isolation is interesting, but the packaging aspect of shipping these images around is in many ways, I think, the more powerful side of it. So Ansible Playbook Bundles are really just a pattern of taking all this stuff, using Ansible, and putting it into a container that can be run in a particular way with a very simple interface that we've defined. This Ansible Playbook Bundle runs to completion as a pod in your Kubernetes cluster. If that sounded foreign: it is a container that you will run in your Kubernetes cluster. You'll start it.
You'll let it do whatever work it's going to do, and then it stops, it exits, and you clean up anything that's left of it. It's like an installer. It's very similar to just having an installer that you can run in your cluster, and out pops this application. And the nice thing about it being a container: it's testable, it's reproducible, and you can put it through a full CI pipeline. But what else can we do with this Ansible Playbook Bundle? There is this idea of the Kubernetes service catalog, which is similar to how other providers, like Amazon Web Services, have their catalog of services in their cloud. In your Kubernetes cluster, you can have your own catalog of your services available in the Kubernetes service catalog. And Ansible Playbook Bundles are perhaps the easiest way to get one of your services exposed and available inside that service catalog. This is an example of the OpenShift user interface, which is basically the nicest user interface there is for the Kubernetes service catalog. Come on in, let's find some seats. It's what you'd expect out of this kind of experience. You point and click on one of these things. Maybe you select MariaDB. It asks you some questions, you fill in the answers, just like an installer. At the end, you hit Go, some work happens in the background, and that's it. Now you have a thing provisioned. Well, how does that work? In a nutshell, on the right here, you see these brokers. Each broker can advertise one or more services to a cluster and say, hey, I know how to deploy MariaDB, or I know how to deploy MediaWiki, or I know how to deploy Prometheus, or whatever it may be. You can provision these things, deprovision them, and do other actions. The point is that this enables self-service provisioning. So users of your cluster, perhaps you have dev teams, perhaps you have QE.
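For reference, an APB describes itself to the broker with a small spec file, conventionally named apb.yml. This is a rough sketch from memory of the APB spec, with hypothetical names, and with the caveat that the exact fields vary between versions of the spec:

```yaml
# Hypothetical apb.yml: metadata the broker reads to advertise
# this bundle in the service catalog
version: 1.0
name: example-apb
description: Deploys the example application
bindable: false
async: optional
metadata:
  displayName: Example
plans:
  - name: default
    description: Default deployment plan
    free: true
    # Parameters become the questions the catalog wizard asks the user;
    # answers are passed into Ansible as extra vars at runtime
    parameters:
      - name: app_replicas
        title: Replica count
        type: int
        default: 1
```

The parameters section is what drives the point-and-click wizard described below.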
When they need a database, they can go and deploy, in a very simple point-and-click fashion, whatever services you've made available in that cluster, on their own, without needing to bother anybody else. All right. So the last piece of this service catalog story, chapter five here, is the Automation Broker. We just saw that we have this series of brokers that can plug into a cluster and advertise services for provisioning. With the Automation Broker, we thought maybe we could do something a little simpler. We created one broker that uses APBs, those Ansible Playbook Bundles, as the services that it advertises to the cluster. So each APB that you make becomes available for provisioning. In fact, even in that screenshot we saw earlier of the OpenShift service catalog user interface, some of those icons were being powered by the Automation Broker. You would click through, and ultimately what's happening is that the input the user provides in the wizard through the service catalog gets passed into Ansible at runtime and is then available as facts for use in your templates, or however else, as Ansible is running. And then you can do whatever work you need to do. The broker takes care of running it in a secure, transient namespace. At the end of a provision, it throws away that namespace and cleans up after itself. So the end of this story is that it removes the need for you to make your own broker, for sure, but it also takes advantage of Ansible and the k8s module to make it very, very easy to make your own services available for provisioning inside of a Kubernetes cluster. Don't squint too hard at this. This is an example of the user experience. Just trust me that there's a command line interface you can use to interact with the service catalog. It's not the most ideal experience; interacting with the service catalog from the command line is hard, and this tool does a nice job of making the best of it. So that's available.
Kubeapps is another option, which came out of Bitnami, and it will run on just about any Kubernetes cluster, I think. And then this is the OpenShift user experience. It's a very nice user experience. I do work on OpenShift, but I'm not going to lie, it's very nice. So if you're running OpenShift, or if you'd like to run OpenShift, you'll get a first-class service catalog experience. It's quite a nice story. So what's the status of this? On the service catalog side, it's a great path for self-service provisioning. It works today. It's a mature ecosystem. You can just go out and do it. The best use case for this is off-cluster service integration. Say perhaps you have some appliance, an off-cluster database, or some other thing like that that you want to interact with, or maybe you're running a cloud, like if you're Amazon. In fact, Amazon themselves used this very Automation Broker. They wrote APBs, Ansible Playbook Bundles, to make a broker that ran inside of OpenShift, when OpenShift was running in the Amazon cloud, and exposed their services to their customers. This pattern lacks day two management, though. We're going to see how to get day two management in just a minute, but that's really its biggest drawback. It'll deploy something. So if you use this pattern to deploy an application that's running in your cluster, it deploys it. Great. It does a fantastic job of that. But then there's nothing else watching it. It's up to you now to own the pieces: manage it, monitor it, repair it, upgrade it, whatever. That's mostly on you in this pattern, unless of course you're interacting with an off-cluster service, in which case you probably have some other systems in place to deal with all of that.
So this thing called operators is really going to take over as the preferred solution, and that's in large part because it does have day two management as a first-class concept. That, in fact, is the core focus of operators, and we're going to dig into it in just a moment. The bottom line is that the service catalog is definitely going to stay around. It's a thriving part of Kubernetes, and it's of course going to be part of OpenShift for the long term. But I'm going to show you the operator pattern next as the second option, and I think it's probably the better option for most people. Operators. What is an operator? An operator is just a particular type of Kubernetes controller. A controller is a service that sits around running in your cluster, watching for some resource to get created, updated, or deleted, and whenever something happens with a resource that it's interested in, it wakes up, runs a reconcile function, and does whatever it thinks is appropriate to move the state of the world closer to what that resource says the state of the world should be. And an operator is nothing more than a controller that is purpose-built to deploy and manage an application of some kind in your cluster. Beyond that, the real highlight of operators is that you can use them to encode human operational knowledge into your cluster. So anything that you would do if your pager went off, or when you're doing an upgrade, or doing backups, or doing restores: we all love to automate ourselves out of a job, and this is that mentality. Encode what you would normally otherwise have to do as a human typing on a keyboard into your controller, so that it knows not only what to do but when to do it, and can pretty much manage your services for you. So how do we make one of those things, and how is that even possible? Well, Kubernetes is interesting. It has a REST-ish API, as I'm sure you can imagine.
Just think of a long list of endpoints with resources that you've seen even tonight so far: a pod, a service, a config map, and so on. The interesting thing about Kubernetes is that it allows you to add your own custom endpoints to its API. So in this example, we've created a Memcached resource type in Kubernetes. Now, in its list of API endpoints, there's a new Memcached one. Kubernetes gives you the opportunity to namespace that, but that's a topic for another night. Starting from an Ansible role that we purpose-built to deploy a particular application, say Memcached, and now having an API to create, update, and delete resources that describe an application like Memcached, you can probably see where this story is going. So this is the pattern of how an operator works. We have this smiling face up on the top left. They interact with the Kubernetes API. They create their custom resource, in this case a Memcached, let's say. They describe: what do I want my Memcached to look like? A controller in the middle wakes up, sees the event, and does whatever it thinks is necessary, which ends up being: it creates some pods, it creates a service, maybe it creates a persistent volume for some reason. Who knows what else it does, but it does all those things, and now the application exists. Then the controller sits around and just waits for anything else to happen to that resource. If you change it, it will go change the real world to reflect whatever changes you made. How does Ansible fit into this pattern? Well, we made an Ansible operator for you. Before Ansible is involved, you are writing your own controller, in Go probably. Some people have strayed off that path and written them in another language or two that I won't mention. But for the most part, you're going to be on the hook for writing a software project, a controller that does that stuff. Instead, we've done a lot of that work for you.
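The custom endpoint gets added by registering a CustomResourceDefinition, after which instances of the new kind can be created like any other resource. A rough sketch, using a hypothetical cache.example.com group and the apiextensions v1beta1 API that was current around the time of this talk:

```yaml
# Hypothetical CRD: teaches the Kubernetes API server about a new
# Memcached resource type
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: memcacheds.cache.example.com
spec:
  group: cache.example.com
  version: v1alpha1
  scope: Namespaced
  names:
    kind: Memcached
    plural: memcacheds
---
# A custom resource: an instance of the new kind, describing the
# Memcached deployment we want (spec fields are illustrative)
apiVersion: cache.example.com/v1alpha1
kind: Memcached
metadata:
  name: example-memcached
spec:
  size: 3
```

Creating the second document is the "smiling face" step in the diagram; the controller watches for it and makes it real.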
Just like in that broker story we saw a minute ago, where we made a generic broker that can run Ansible for you, here we've made a generic operator that will likewise run Ansible for you. Our Ansible operator is written in Go, and it's using all of the Kubernetes client tooling, which is really nice stuff. It gives you a lot of power in terms of caching and queue management and that sort of thing. So we've done that, and every time it gets one of these events, it wakes up and decides it needs to run a reconcile. All it does is run your Ansible role, or your Ansible playbook that can run as many roles as it wants. And in the middle we have this mapping file. It's not too important a detail, but basically, for any resource type you define, you just tell this operator: if you see the Memcached resource, run this role. If you see some other resource, run this other role. And that's it. So it ends up being a very simple pattern and experience. This is what that file looks like. You can see that group, version, and kind is how we define a resource, and then we're just mapping that to a playbook. Pretty simple concept. The Operator SDK: that is the project we have, the tooling that helps you build one of these operators. You could do it in Go, and we've got some great tools to help you do that. You could do it in Ansible; I think that's probably the easiest path, and certainly the best balance of ease of getting started and long-term power. You could also do it with Helm, if you have existing Helm charts and you want to get into this operator pattern right away. And I guess this bears emphasizing: a key benefit of the operator pattern is that it makes your application part of the Kubernetes API. So it's now Kubernetes-native, for whatever that means. In this case it really means it's part of the API. You can provision, upgrade, manage, and do everything natively through the normal Kubernetes API.
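That mapping file, called watches.yaml in the Operator SDK's Ansible support, might look roughly like this; the group and the role path here are hypothetical examples:

```yaml
# Sketch of a watches.yaml: each group/version/kind entry maps a watched
# resource to the Ansible role (or playbook) that reconciles it
- group: cache.example.com
  version: v1alpha1
  kind: Memcached
  role: /opt/ansible/roles/memcached
```

Whenever an event fires for a Memcached resource, the operator runs that role, passing the resource's spec fields in as Ansible variables.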
So if you want to take Helm charts and make them Kubernetes-native in that way, you can do that with an operator right now. It's quite a bit more limited in terms of day two management, but you can get started that way. And then there's the link to the Operator SDK. This is what your base image is going to look like. We provide the green parts. That includes Ansible, it includes Ansible Runner, and it includes the operator binary. All you have to provide is the yellow parts on top: one or more roles, and the mapping between them. That's it. Now you've got your own operator, and otherwise it's as easy as just writing Ansible to do whatever work you need to do. So, the bottom line of these two stories. On the one hand, we can make a service broker and plug into the Kubernetes service catalog by making an Ansible role that can deploy our application to Kubernetes. We saw how easy it is just to interact with Kubernetes from Ansible. On the other hand, we can make an operator, which is a different kind of pattern and certainly what seems to be the future of Kubernetes. Both of these are made really, really easy by Ansible. I just love working with Ansible and Kubernetes. If you would like to put your hands on this stuff and try some labs, please don't do it right at this moment, because you will crush the site. I've seen it happen. Ask me how recently. You can go there tonight, tomorrow, or when you get home next week, and there are a number of exercises that you can run for free. No registration. We don't collect any contact info. Nothing. You just go there, you click, and you get your own environment with Kubernetes running. It's OpenShift. It happens to be OpenShift, that's Red Hat's distribution of Kubernetes, but it's Kubernetes. You get exercises down the left, you get your environment on the right, and you can go through it and build your own operator using Go.
You can build your own operator using Ansible. You can learn how to use the Ansible Kubernetes module, and all kinds of other stuff. It's a great way to get to know it. If you want to dig into this more, one, you could go grab a beer with me tonight. Otherwise, you can go to Config Management Camp, because I'll be there too, doing a bit of a longer talk, and we'll probably have some more time there to dig into more detail on this whirlwind of stuff that I know I just threw at you. So with that, do we have any time for questions? Is that a yes? 30 seconds. We have time for one question. Anybody have a pressing question? Right here, in the middle. How do you deal with different versions of your roles? It's hard. But you deal with it the same way you deal with versions of operators in general. You're going to have a dichotomy between your application life cycle and your operator or APB life cycle. So it's up to you to have a project that's going to be your operator, which includes one or more Ansible roles, and you can version that the normal way you would version containers that you build other ways. That's it. All right, right on time. Thank you, everyone.