Hello, hello, everyone. Just to welcome you again: for the next session, we have Timothy Appnel on automating the management of Kubernetes apps. I think we are running about 10 minutes late. But anyways, we had quite an interesting session on blockchain by Prajit in the previous session. So let's get connected with automating the management of Kubernetes apps.

Hello, I'm Tim Appnel, and I'm a product manager and evangelist on the Ansible team at Red Hat. I've been involved with the Ansible project since its beginning days as a contributor, user, Ansible consultant, and now product manager. So I've been around Ansible and automation a lot and have seen it from many angles. In the past couple of years, I've been looking at how Ansible could be useful when applied to automating cloud native systems, and more specifically Kubernetes. I often get puzzled looks when I suggest Ansible to automate something with a Kubernetes cluster, because Ansible is not a cloud native tool that they've considered. To them, Ansible is a tool for doing configuration management, orchestration, and DevOps on traditional IT systems like on-premise bare metal servers, network infrastructure, and maybe even public cloud services. This is a misconception, though, because Ansible fits naturally into a Kubernetes environment. There are many similarities in how Kubernetes and Ansible approach their individual problem domains that make a natural fit when we bring the two together. They're both highly active and highly used open source projects, with vibrant communities working to solve common problems. They make hard things easier through automation and orchestration. They also both work as desired state engines and make extensive use of YAML. As we can see in the two examples, both use similar patterns in YAML to describe the desired state of the world.
On the left you have a Kubernetes ConfigMap object definition expressed in YAML that you could feed into your cluster using kubectl. On the right is a single Ansible task doing the exact same thing. Their syntax is almost identical. What's different is a small but significant thing: templating parameters, like the color variable here, using Ansible's built-in Jinja2 templating. Templating is not something kubectl has traditionally supported, but that has begun changing recently with the kustomize plugin. Perhaps the most powerful part of using Ansible is how easy it is to interact with Kubernetes, whether you are developing an operator or automating something else with your cluster. With the Ansible k8s module and other related modules that are part of the community.kubernetes collection, an Ansible user can manage applications on Kubernetes, on existing IT, or across both with one simple language. In the last example, we saw an Ansible task with an inline ConfigMap definition that had a parameter templatized. Ansible lets you take that one step further and maintain the entire definition as a separate Jinja2 template file, like in this example. In an upcoming release of the k8s module, you'll be able to do this more concisely, without the lookup function, by just specifying the path to the template file. Every shell command and UI interaction is an opportunity to automate. This theorem of mine is applicable to automating any infrastructure and systems, not just cloud native ones. I thought it was important to highlight this notion because seizing the opportunity to automate is an underlying theme of the uses of Ansible with Kubernetes that I will cover here. There are many benefits to automating and many tools you could use to automate Kubernetes systems. So why Ansible to automate your operations with Kubernetes? Ansible is ridiculously simple to learn.
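As a minimal sketch of the pattern described above (the playbook, variable, and template file names here are hypothetical, not from the slides), a templated ConfigMap task with the community.kubernetes k8s module might look like this:

```yaml
# Hypothetical sketch: create a ConfigMap whose data is parameterized
# with a Jinja2 variable, via the community.kubernetes.k8s module.
- name: Create a templated ConfigMap
  hosts: localhost
  vars:
    color: blue
  tasks:
    - name: Apply the ConfigMap definition inline
      community.kubernetes.k8s:
        state: present
        definition:
          apiVersion: v1
          kind: ConfigMap
          metadata:
            name: example-config
            namespace: default
          data:
            color: "{{ color }}"

    - name: Apply the same definition kept in a separate Jinja2 template file
      community.kubernetes.k8s:
        state: present
        definition: "{{ lookup('template', 'configmap.yml.j2') }}"
```

The second task shows the lookup-based approach mentioned in the talk; the module renders the template with the play's variables before applying it to the cluster.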
New users can learn the required structure in hours, not days or weeks. If you're already developing Ansible content, your skills are transferable and you are most of the way there to developing something for Kubernetes. The same tried and true Ansible tooling lets you automate and orchestrate your applications across both new and existing platforms, allowing teams to transition without having to learn new skills. With the k8s module, an Ansible user can manage applications on Kubernetes, on existing IT, or both with that one simple language. No programming required, faster iterations, and easier maintenance in getting things done. So for the rest of the presentation, I'm going to go a bit deeper into these primary, but by no means only, ways you can apply Ansible to automate your Kubernetes-based systems. The first use of Ansible in automating Kubernetes environments is what we refer to as last mile automation for Kubernetes cluster setups. There's a lot more to making Kubernetes useful than just installing it and getting it to start up. There's additional setup and configuration of resources on the cluster, and infrastructure off the cluster, that you will still need to perform to have something useful for your specific needs. There are things like network configuration, firewalls, DNS, setting up your registry, or setting up monitoring, logging, and other extensions. Enterprise Kubernetes offerings like Red Hat's OpenShift platform have installer systems that help you with some of this, but they will only get you so far. Not all infrastructure can be hosted on and replaced by a Kubernetes cluster. I alluded to this in my last slide: in setting up and managing a Kubernetes cluster, you'll need to integrate and connect with resources and infrastructure not running in your Kubernetes cluster and that are not cloud native. Many organizations have made significant and historical investments in traditional IT and infrastructure, skills, and workflows.
I would wager many of you listening are in those organizations. There are a lot of potential combinations and needs in what this specifically means for you. So Ansible can be the glue for bringing the traditional off-cluster and the cloud native on-cluster infrastructure together. It can provide the functionality needed to coordinate and manage these hybrid deployments in as simple and effective a way as possible. Another potential use of automating Kubernetes environments with Ansible is creating repeatable Kubernetes application and service deployments. This is a bit similar to, and somewhat overlapping with, the last mile automation in the first use case we just covered, but here we're focused on the applications and the services rather than on the cluster itself. So, does something like this look familiar? You want to deploy an application to your cluster and you are presented with a quick start guide that features a dozen or more kubectl, helm, oc, and sed commands, along with some instructions to make manual edits to example files. It's the equivalent of a sysadmin's runbook from the 1990s. Ansible can help, because customers deploying cloud native applications and services will never have just one cluster. At a minimum they will have dev, test, and production clusters. And if the scope of their offerings is of any size or sophistication, they're likely to have multiple functional and geographically dispersed clusters across data centers and cloud providers. So with Ansible's variables, inventory management, templates, and playbooks, you can create repeatable and consistent Kubernetes application and service deployments, just like with traditional infrastructure. Standard procedures and settings can be shared while others can be modified according to the cluster being targeted, all from one central control plane. This can get you pretty far in a way that is simple and lightweight.
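One way that might look in practice (a sketch with hypothetical variable and file names): the same play is reused across dev, test, and production clusters, with per-cluster values supplied by inventory instead of hand-edited files.

```yaml
# Hypothetical sketch: one play, repeated across clusters by swapping
# inventory variables (these would normally live in per-cluster group_vars
# rather than being hardcoded here as they are for illustration).
- name: Deploy the application consistently to the targeted cluster
  hosts: localhost
  vars:
    kubeconfig_path: ~/.kube/config-dev   # per-cluster credential
    app_namespace: myapp-dev              # per-cluster namespace
    replica_count: 2                      # per-cluster sizing
  tasks:
    - name: Apply the templated Deployment manifest
      community.kubernetes.k8s:
        kubeconfig: "{{ kubeconfig_path }}"
        namespace: "{{ app_namespace }}"
        state: present
        definition: "{{ lookup('template', 'deployment.yml.j2') }}"
```

Pointing the same play at the production inventory group would swap in that cluster's kubeconfig, namespace, and replica count with no changes to the play itself.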
If you are managing a large estate of clusters and applications, there are powerful tools emerging you'll want to explore, like Red Hat's Advanced Cluster Management for Kubernetes offering. Another use of Ansible to automate Kubernetes environments is the encoding of human operational knowledge in the form of operators. This is something we're going to dive a little bit deeper into than the previous uses. For those of you familiar with Ansible, it's something a bit different and worth the exploration. I also believe operators in general have a lot of great potential to help organizations do powerful and useful things with their Kubernetes clusters. So, operators automate and simplify the management of complex applications on Kubernetes. An operator is an application-specific controller that extends the Kubernetes API to create, configure, and manage instances of complex, stateful applications on behalf of a Kubernetes user. Operators build upon the basic Kubernetes resource and controller concepts, but include domain- or application-specific knowledge to automate common tasks. Simply put, operators enable you to program Kubernetes with the smarts it needs to effectively manage your applications or services for you. The Operator SDK is something that was originally developed by CoreOS, championed by Red Hat, and is now an incubator project of the CNCF. So when it comes down to it, operators are just a pattern and a type of controller application running in your Kubernetes cluster that the Operator SDK helps you implement. An operator is watching for events that happen inside the cluster, and it is responding to those events. It is always looking at what has been specified as the desired state of the world through an endpoint in the Kubernetes API, defined by a custom resource definition, or CRD, and then makes changes to the cluster to bring it closer to what is specified. This process of making changes to align states is called reconciliation.
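To make "desired state through a custom resource" concrete, here is a sketch of what a user-facing custom resource might look like (the group, kind, and field names are hypothetical, chosen to match the caching example later in the talk):

```yaml
# Hypothetical custom resource: the user declares the desired state,
# and the operator's reconciliation loop works to make the cluster match it.
apiVersion: cache.example.com/v1alpha1
kind: CacheService
metadata:
  name: my-cache
spec:
  # Desired number of memcached nodes in the pool; the operator watches
  # this field and scales and reconfigures the pool accordingly.
  size: 3
```

Changing `size` and re-applying this one small object is the entire user interface; everything else is the operator's job.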
As mentioned, an operator is a special type of controller that is purpose-built to deploy and automate the management of a Kubernetes application. Operators work with any relatively modern release of Kubernetes and do not require you to install additional extensions or use a specific distribution such as OpenShift. Whatever kinds of workloads you need to deploy into Kubernetes can be automated with this operator pattern. The pattern is designed to capture, in code, that human operational knowledge I talked about, so that anything an administrator or a systems engineer would need to perform in order to maintain an application or service and keep it running indefinitely on a cluster is automated. The Ansible Operator SDK is a collection of building blocks that makes it easier to develop and manage Kubernetes apps in both a Kubernetes-native and an Ansible-native way. It is a first-class citizen of the Operator SDK; it's not something that you need to add. Ansible is one of the available built-in types of operators that the Operator SDK is able to generate and build. Besides Ansible, the Operator SDK can also assist with developing operators using Go and Helm. Ansible-based operators, unlike Helm-based ones and just like Go-based operators, can be used to manage the complete lifecycle of container-based applications running in Kubernetes. Ansible enables full-featured operators to be developed more easily. Arguably, you are not giving up anything by using Ansible, and in some respects, you gain a lot of efficiencies. With Go, you do get fine-grained control, but there are quite a few advanced and complex concepts involved in doing so. There's also a lot of generic boilerplate functionality every operator needs to implement and manage in Go that has been embedded in the Ansible Operator SDK. Let's take a look at that now. Inside of the operator, we have the Operator SDK binary.
This is a pre-built generic operator written in Go that will run Ansible for you based on how it's configured in the watches.yaml file. That file is a mapping between a Kubernetes resource and your Ansible content. The Ansible k8s module will be used to create resources in the Kubernetes cluster. As a developer creating a Kubernetes operator with Ansible, you're only responsible for providing the watches file and the Ansible content that manages your application. The Operator SDK binary handles all the low-level operator functions and details for you. Using Go is a very powerful way to write Kubernetes operators, but there are quite a few advanced and complex concepts in doing so. There's very advanced and powerful caching and queue management built into the Go client, and it takes time and expertise to get proficient with it. If you don't have that type of time to build up the expertise, or you don't want to take that time to work through it right now, you can leverage what comes as part of the generic operator binary in the Ansible Operator SDK, so you can focus on writing your reconciliation logic with Ansible. So let's take a look at what's in an Ansible-enabled operator image. In the white box is what you provide, which is pretty minimal: the watches.yaml file that we just talked about and at least one Ansible role or playbook. What's in the gray box is functionality and resources provided to you as part of using the Ansible Operator SDK and its base image. It includes the Operator SDK binary in addition to Ansible itself, Ansible Runner, and Python plus any dependent libraries. So remember, when developing an operator using Ansible, you only need to work out and provide what is in that white box. The Ansible Operator SDK tooling provides the rest. Developing your first operator with Ansible is quite straightforward using the SDK. First, you initialize the operator project with Ansible using the init subcommand and a few parameters.
The important part here is the plugins=ansible flag, shown in red. Second, you develop your Ansible content to automate the lifecycle of your application or service. The SDK even gives you a means of testing your operator locally using the Molecule test framework. Next, you map a Kubernetes resource in the watches file to your Ansible content. Once you're ready, you can run the build command and the Operator SDK will build a container image with everything we reviewed in the previous slide. With that, you have a container image ready to be deployed to a Kubernetes cluster or published to an image registry. We don't have time to do a demo of an operator in action here, but we can quickly walk through an example use of one. Say we've developed an operator using Ansible that will deploy and manage a scalable caching service made up of memcached router nodes (mcrouter) managing memcached pods in a pool. The operator has the know-how to set up and maintain the configuration of mcrouter, more specifically the number of memcached nodes in the pool and their addresses, so it can work with them. So as the pool scales down, or needs to scale up, in running your service, the configuration of mcrouter will be adapted accordingly, instantly and without human intervention. Remember, adjusting the pool size happens with a single API call to the caching service CR. The value here is that you can manage your application lifecycle with an operator, and developers and admins can easily manage one or many instances of your application stack simply by using it. Operators enable a public-cloud-like experience, like we see using services from cloud providers such as AWS or Microsoft Azure. Okay, moving on. Another use of Ansible automation is in creating continuous deployment, or specifically GitOps, workflows. GitOps works by using Git as a single source of truth for declarative infrastructure and applications.
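Before we move on to GitOps, here is a sketch of the watches.yaml mapping described for the operator above (the group, kind, and role name are hypothetical, following the caching example):

```yaml
# Hypothetical watches.yaml: maps a custom resource to the Ansible role
# that the SDK's generic operator binary runs on every reconciliation event.
- version: v1alpha1
  group: cache.example.com
  kind: CacheService
  role: cacheservice
```

Whenever a CacheService object is created, changed, or periodically re-checked, the operator binary invokes the `cacheservice` role with the resource's spec fields passed in as variables; that role is where the mcrouter and memcached reconciliation logic would live.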
Often you read that immutable infrastructure, and even more specifically Kubernetes cluster management, are required attributes of GitOps. I believe that this description is a bit too prescriptive and limiting, so here I treat them as suggestions and preferences, not requirements, of GitOps. We'll see why in a little bit here. So this is what a typical GitOps workflow looks like at a conceptual level, automating delivery pipelines to roll out changes to your infrastructure. Changes are made to Git by engineers developing software using their standard workflow and continuous integration practices as usual. When a release is ready, it is pushed to a registry. Continuous integration is kept separate from continuous deployment mechanisms, which is a best practice that GitOps recommends and incorporates. So to roll out the new release, changes are made in the Git repo to state that the new release should be used in a cluster. The GitOps agent that is running on the cluster pulls in the configuration from Git. Typically, these agents are Kubernetes operators such as Flux from Weaveworks. The GitOps operator works to align the current state of the cluster with the desired state of the Git repository. Now let's look at what happens when we apply Ansible to a GitOps workflow. What we get is something that is more flexible and able to do more. Here Ansible replaces the GitOps operator that is running on a cluster and pulls in its state configuration from Git. Ansible can work with operators running on a Kubernetes cluster for a sort of push-pull approach. Tower pushes the configuration to the operator via a CRD endpoint, and then the operator pulls in any container images from the registry that it needs. Here the operator is one made for a specific Kubernetes application or service, managing its lifecycle on the cluster, rather than a general one applying all the configuration.
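In that push-pull arrangement, the "push" half could be as simple as a play that Tower runs after the Git repo changes, applying the desired-state custom resource from the checkout to the cluster (a sketch with hypothetical file paths, continuing the caching example):

```yaml
# Hypothetical sketch of the "push" half of the push-pull GitOps flow:
# Tower runs this play after a change lands in the Git repo, applying the
# custom resource that tells the on-cluster operator which state to run.
- name: Push desired state from the Git checkout to the operator's CRD endpoint
  hosts: localhost
  tasks:
    - name: Apply the CacheService custom resource from the repo
      community.kubernetes.k8s:
        state: present
        src: manifests/cacheservice.yml
```

The on-cluster operator then handles the "pull" half, fetching whatever container images the updated resource requires and reconciling the running pods.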
What is more important to note is that this also provides the flexibility to apply GitOps workflow principles to systems other than Kubernetes, such as public and private cloud services and network infrastructure. You're not required to use a GitOps operator as an agent on that infrastructure. You also have the flexibility and the freedom to use the best tools for your needs and tailor your pipeline to how you want to work. A strength of Ansible is that it excels at being IT automation glue. Covered here are some of the most common ways Ansible can help automate the management of Kubernetes deployments, but they're not the only ways. Remember, every shell command and UI interaction is an opportunity to automate, and Kubernetes is not any different. If you're interested in learning more and digging into this topic deeper, here are some starting points that I recommend. Later in October, we'll be hosting AnsibleFest, our annual community and user festival. The Ansible community will be talking about all things Ansible, including Kubernetes, GitOps, and a whole lot more. This year will be virtual and is free for all who want to attend. So just go to the ansible.com/ansiblefest link to register to attend yourself. Thank you all for listening, and I hope you found this information useful. Happy automating.

Hello, everyone. So, thank you, Tim. If anyone has any questions, I'm here live now. Please put your questions in chat if you have any. I guess I stunned everyone into silence. Oh, hey, Dave Duncan. Hello. Thank you. Yeah, I'll be over in the room. Oh wait, there we go. So, all right, first question. So, Jason: yes, you can mix K8s automation with regular stuff. I actually am doing a talk at AnsibleFest on GitOps, and in it I show a mix of things happening. So you could do something like deploy an application into your Kubernetes cluster, and maybe that needs to use an external database. So you go and set up,
say, an RDS database as part of it, or make a schema change to that database as part of your deployment rollout. So in that case, you're using Ansible to both work with your Kubernetes cluster and the application on there, and also work with a public cloud provider's system. That's what Ansible can do: you can coordinate, and that includes Kubernetes, that includes networking, infrastructure, cloud, you name it. All right. Let's see — Hermann asks about Terraform and Ansible for K8s operators. Yes, Ansible for K8s operators; there's not really anything for Terraform, and that would actually be a little hard, because Terraform is pretty much limited to cloud provisioning only, so I'm not sure how you would really utilize Terraform in an operator. But there is, as I mentioned in my talk, a Kubernetes Operator SDK, and one of the options built in when you download and use it — and this is part of the CNCF now — is an Ansible one. So you initialize your project and it gives you all the Ansible structure you need. You create a task or two, whatever you need to automate, set up the watches file, and you can build a container and deploy it to, you know, any Kubernetes cluster. All right. Any other questions out there? Yes — sorry, I'm just looking at Jason, at your last comment. Yeah, that's one of the things I try to stress, and I've seen this a lot in cloud: what you mentioned in your comment about a ton of commands to set up something, that's what I mean by last mile automation. And we've seen this a lot. I'm first and foremost an Ansible automation guy that's come to learn and appreciate Kubernetes. And one of the things that still kind of, well, I'll say stuns me, is when I sit down to use something and I'm given pages and pages of kubectl commands and oc commands and Helm commands and sed commands just to get up and running, and I just look at it and go, why didn't they write an Ansible playbook for this?
This would have made things so much easier. And that's something that we want to get across, and that's what that theorem is that I put out there: every shell command, every UI interaction you do, is an opportunity to automate. So if you're seeing tons of commands to get something stood up, that's a prime candidate to automate, and I would suggest looking at Ansible for that. Yeah, what we're seeing is sort of a sad thing, in that — I don't want to offend anyone — it's almost like the cloud native community forgot about traditional tooling. So I'm here to kind of remind people that, hey, there's still applicability to a lot of this stuff that's come before cloud native. And that's not to take anything away from cloud native; it's just to say, let's put the best thing out there, wherever that comes from. Any other questions? Awesome. Thank you so much, Tim. If you want to continue the conversation, you can — I'm going to bounce over to the room for everyone else. And, Shell — actually, I work with Jeff. Jeff, right now we've contracted him to do some work with the Ansible team, so I was just talking to him earlier today. So I want to move over to the other rooms so you can continue. Thank you, everyone. Thanks for listening.