 and welcome to the Cloud Multiplier. I am your co-host, Gurney Buchanan, and I'm thankfully joined by my co-host, Joydeep, today. Glad to have you back, hope you're feeling better. And we're joined, well, honored to be joined by our guest, Hao Lu: guest, personal friend, colleague, all of the above today. We didn't have our loadout in today. We're experiencing some technical difficulties, but we've come to you on time, hopefully, and live, so we are ready to go. Joydeep, glad to have you back, so I'll put you immediately in the hot seat as we move into top of mind topics. And I'm borrowing, actually borrowing a background today. It's appropriate because we're using the Ask an OpenShift Admin loadout that we've put together. So any top of mind topics today, Joydeep? Yeah, thank you, glad to be back. I'm fit now after a bout of COVID. I can count from five to zero backwards. I can do my exercise. And I actually had the pleasure, folks, of seeing Gurney for the first time a little flustered, because somebody or something deleted the show, and he's responsible for it, so he was loading all the stuff in the last 15, 20 minutes. So anyway, the interesting thing that I wanted to share is something called Hugging Face. Huggingface.co. I just picked up this chatter from a machine learning expert. Now, this site is not all free; exactly, there are some things which are free, but it has an amazing amount of machine learning models uploaded there by lots of different companies. There are powerful problems that are solved there for people to reuse. I did not have a clue that such a thing existed. This was fascinating because I was actually looking for something which I think has been solved already and trying to reuse that. So that was a fantastic surprise for me. That's magnificent. So it's like a package repository. Something like that. Yeah, something like that: your npms, your pip, PyPI, all of that, but for machine learning models.
For machine learning, right? That's really neat. Yeah, and the catch there actually, Gurney and Hao, is that, especially for neural nets, you require lots of data to train. If you have got a trained model which you think you can reuse, or change a bit and use, that's a huge leap forward. Yeah. I've always heard the story that it's very difficult to compete with Google because you don't have all of the data that Google has produced. And that's a hyperscale problem, but there are even smaller problems where you don't have the data, and data is expensive, and data is valuable, and even more valuable than that are the models it produces. So you can pay to use a Google service to use a pre-built model. It's amazing that there's a website. Also, I would have never guessed that a website called Hugging Face, whose logo is an emoji, would be a repository. I guess I didn't screen share this, but that is amazing. Well, Hao, you're in the hot seat: do you have any fun open source projects? I know fairly recently you moved from the Advanced Cluster Management team over to Ansible, so I assume a lot of your recent discoveries have been Ansible related, right? Just drinking from a fire hose at the moment and learning a lot about our Ansible automation controller. Yeah, besides that, not that many interesting new discoveries. Although, speaking of AI, recently I started using Copilot, and I continue to be shocked about how... it's just fascinating. It actually produces useful stuff sometimes. And I used to be very skeptical of AI. I don't think I am anymore. I think my favorite GitHub Copilot joke is that we've built a machine that copies and pastes the code from Stack Overflow for you. You just write the pseudocode and it does the copying and pasting for you. And then, Hao, I think your joke was that you like writing the pseudocode and it writes the code, and I said, well, then just write Python. So there's my joke. The Copilot Python stuff actually works.
Really, really well. Right. And you know, on this other one, OpenAI, you can do fun stuff. One of our colleagues, you guys know Chris Dorn, did some very interesting stuff there a couple of weeks back. Wow, okay. Because the only thing that I recently did with Copilot was, and this is for any of our viewers that might not know, GitHub started charging for enterprise users of Copilot. So I have, you know, admin on one of our GitHub accounts, and I now see a new billing line item that is Copilot. So that's always fun. I have one for you, Joydeep. I think we've talked about it before. My top of mind is a book called The Design of Everyday Things, by Don Norman. Have you ever heard of it? I've heard of it. I think you mentioned it to me the first time a couple of weeks or months back, yes. Yeah. I'm a DevOps person: you know, some days I'm dev, some days I'm ops, most days I'm both. I did take some design classes in college, mostly human-computer interaction, interface design, HCI design. But this was just a magnificent book, with a wonderful cover, because it's a picture of a tea kettle where the spout of the kettle is near the hand, right above the handle. It's not very intuitive; you're just going to pour it all over yourself. So I can recommend that to anyone and everyone who wants to learn a little bit about design. I have a friend who's a user experience designer, and I find I understand a lot more of his day-to-day life and struggles having read this book on, well, if you hand me something, can I naturally understand how to use it? That's the key, yeah. Yeah. And speaking of things that so many people really naturally understand how to use... I'm getting really good at the segues.
Today, Hao is actually coming to us not to talk about some of the many other things he's built, in and before my presence, like the klusterlet that we use in Red Hat Advanced Cluster Management. This is the man behind the curtain in many ways. But he's moved on and he's done some amazing work in Kubernetes fleet management with Ansible, coming today to tell us how to use Ansible to reconcile with, interact with, and manage a huge fleet of Kubernetes clusters. I'm sure you've brought a demo that will break the bank. So are you ready to go? Hao, will you give us the kickoff? Well, first time attending, I'm just gonna try to figure out my flow here. So, well, I guess let's just go straight into the demo, right? I don't know how many people use Ansible to manage Kubernetes clusters, but given the number of download counts on Ansible Galaxy for the Kubernetes module, I guess there are probably a fair few people, right? So let's see. Can I share my screen, Gurney? Yep, I'll turn it on in just a second. It looks like you're sharing the screen that shows all of us, so I'm gonna avoid inception. How about that? There we go, no inception for the live stream. Awesome. So I have an ACM cluster here that's managing a couple of AKS and EKS clusters. I'm just gonna go ahead and refresh that page. Okay, there we go. A couple of AKS and EKS clusters, right? Well, I first thought, through my career working on ACM, there's one piece of functionality that I really, really wanted. I just want to directly interact with these clusters that ACM manages. I want to be able to point kubectl against them. I want any other tool, right, to be able to interact with the Kube API server directly from ACM. And, well, that includes the Ansible Kubernetes module as well. So, I don't remember how long ago, I think about six months or so ago, a couple of community contributors for open cluster management contributed a really cool project.
It established a reverse proxy from the managed cluster to ACM, so that a service on ACM is able to communicate with all the Kube API servers on the managed clusters. And using this functionality, we built a couple of modules in an ACM Ansible collection to allow Ansible to directly talk to those managed clusters. So here, I have a couple of clusters being managed by ACM, and going into my Ansible Automation Controller UI, let me log back in right quick. I ran a couple of setup scripts to set up a couple of things in this instance. All of the playbooks that I used to set up the controller, the Ansible Automation Controller itself, for the demo are in the demo repo that Gurney will post later, and we can go over that a little bit in a little bit. But the main thing here: I created a credential type for communicating with the ACM hub, and I created a credential to this ACM hub I've just been showing you. From here, I loaded a demo project and also the dynamic inventory that's in the demo project. I could go a little bit into the configuration of this dynamic inventory plugin for the ACM Ansible collection in a little bit, but all that it does is it allows Ansible to know about the clusters that ACM is managing. That's it. Yeah, sorry to interrupt you. So that means if an additional new cluster starts getting managed by ACM, this dynamic inventory will pick up the details of the cluster, right? That's correct. I can show that a little bit, actually, in a little bit: we can try to dynamically add a client cluster to my ACM hub and sync the inventory and see if it shows up. Hopefully everything works. So after the dynamic inventory is synced, you can see that the clusters being managed by my ACM hub show up within the inventory of Ansible. Now, each of these hosts represents a Kubernetes cluster.
So one piece of feedback that I got is that this may not be immediately apparent for people who use Ansible, but the hosts here represent Kubernetes clusters. And one thought there, and this is me being fairly naive about Ansible Automation Controller: hosts typically can also run playbooks as well. Is that where the confusion comes in? That these are Kubernetes clusters, these are hosts, but they don't necessarily run the playbook on the client? I think traditionally a host represents, like, a VM or a bare metal host or a network switch, a single entity that you interact with, right? And here, this technically represents a cluster, a Kubernetes cluster. Okay. Different construct, that makes sense. And you also have to speak a different language, because you're talking Kube API rather than running on a VM. Correct. And these hosts are being grouped by the dynamic inventory plugin. So let me show the inventory a little bit. Here is the inventory. I'm using the OCM managed cluster plugin. Can you bump the font size a little bit, Hao? Sure, no problem. How about now? I'll give it one more. There you go. This is good. One more tick. We should be able to read it then. All right. There we go. Okay. So this is the dynamic inventory file that I'm showing you. Here I'm giving selection criteria and grouping criteria for those clusters. So for example, the group I have here that's marked Amazon is all the clusters that have the label Amazon. You can group these based on the Kubernetes label selectors that you put on your clusters, like the one that's here. So if you had, like, prod, dev, stage, you could have those labels and then just have your sets appear and be built in the Ansible inventory? That's correct. It's quite flexible. Anything that you can label. And from here, you can see that you can also exclude based on labels as well.
So it's a fairly flexible mechanism for you to group your clusters. That's amazing. I didn't know about the exclusion, especially. Hao, where are those... which Kube API server are you communicating with? Connecting to? Let me show you a little bit about the host, right? So typically, here is where, like in any traditional inventory, say a VM, this is probably where all the information about the host is stored, right? You know, like the IP address and any additional information that you want to store. You can see that this is empty. So this dynamic inventory plugin is actually pretty dumb by design. It only provides the grouping mechanism, and it provides a pointer into the ACM system. And the other modules that we have built allow you to use this pointer to further ask ACM about any information that you want around that cluster. Like, okay, what policies are applied to this cluster? What applications are deployed onto that cluster? What add-ons are deployed onto that cluster, and anything else? This provides a pointer into ACM to allow you to query more information about the cluster. And Hao, you started off with the kicker about the proxy stuff. Where is the proxy, what network tricks are you doing here? Well, we can get into that a little bit. Right now we are just showing how I get Ansible to know about the clusters that ACM manages. Go with the flow, yep. Yeah, a little bit later, when we try to connect to these clusters, I will try to show how that's being established. So, okay, and there's no credential, right, that's associated with any of the managed clusters? I'm not storing tokens or certificates or anything that's used to authenticate against those API servers of the managed clusters. So, from here I want to go into a really, really simplistic demo. This is a playbook that I have in the demo project. It's really simple. All that it does is create a namespace on the targeted cluster, right?
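To make the grouping and exclusion concrete, here is a sketch of what a dynamic inventory file along these lines might look like. The plugin name and option keys here are assumptions for illustration, not the collection's exact schema; the label selectors mirror the include and exclude behavior described above.

```yaml
# ocm-inventory.yml -- hypothetical sketch; plugin and option names are assumptions
plugin: stolostron.core.ocm_managedcluster   # assumed dynamic inventory plugin name
hub_kubeconfig: /path/to/hub-kubeconfig      # how the plugin reaches the ACM hub
cluster_groups:
  - name: amazon                  # becomes an Ansible group of hosts (clusters)
    label_selectors:
      - cloud=Amazon              # include clusters labeled cloud=Amazon
  - name: azure
    label_selectors:
      - cloud=Azure
  - name: nonprod
    label_selectors:
      - "environment notin (prod)"   # exclusion expressed as a label selector
```

A `prod`/`dev`/`stage` split, as Gurney suggests, would just be three more groups keyed on an `environment` label.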
This is kind of basic. It's basic, but if it can do it six times for me, or four times for me, with the click of a button, it's gone past basic. Right, and if it can do it across all the clusters anywhere that I'm running, Amazon, Azure, or whatever, that has gone past basic. I'm not that imaginative a person when it comes to content. I'm sure people that are attending the stream, if you have any ideas about what kind of cool things you can leverage this technology for, please let me know. Play around with the demo and ping me. I'm sure Gurney will give you my cell number. I would love to hear what kind of stuff you guys would use this for. So, again, going back to the simple playbook, right? All this playbook does is create a namespace on the clusters that I'm targeting. I can target a group of hosts, the grouping that I showed earlier. So for example, I can target just clusters on Amazon or Azure, or I can target one specific cluster by referencing the cluster name directly, or I can just target all the clusters. So let's go ahead and do that. And as this playbook runs, I can roll back the curtain a little bit and try to show what exactly we are doing behind the scenes, right? So earlier I mentioned this cluster proxy add-on. What it essentially does is it creates a service on the cluster that's hosting ACM and has the proxy agent call back, connect back, to ACM, so that from any user's perspective, you can connect to the service and have your traffic reach the Kube API server. I think I have a picture for this. Hold on a second. Let me try to find a picture for this. All right. Can you all see this picture? Zoom in a little bit more, Hao. Okay. You'll have to use the actual Google app. All right, hold on. There you go. Yeah, that's a lot better. All right, cool. So there are two services hosted on the ACM hub: the user service and a proxy service, right?
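As a rough illustration of the "create a namespace on the targeted clusters" playbook being described, a minimal sketch might look like the following. The `kubernetes.core.k8s` module is real, but the `cluster_proxy_url` and `msa_token` variables are hypothetical stand-ins for whatever per-host connection details the collection surfaces.

```yaml
# create-namespace.yml -- minimal sketch; proxy/token variable names are assumptions
- hosts: amazon          # or azure, all, or one specific cluster name
  connection: local      # tasks run locally and talk to each cluster's API server
  gather_facts: false
  tasks:
    - name: Create a namespace on the targeted cluster
      kubernetes.core.k8s:
        host: "{{ cluster_proxy_url }}"   # per-cluster endpoint via the hub (assumed var)
        api_key: "{{ msa_token }}"        # managed service account token (assumed var)
        state: present
        definition:
          apiVersion: v1
          kind: Namespace
          metadata:
            name: demo-namespace
```

Swapping `hosts: amazon` for `hosts: all` is what turns "basic" into "every cluster in the fleet with one click."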
The proxy agent connects to the proxy service, and the user connects to the user service. So I can point my kubectl, the API server that my kubectl talks to, at this user service with, like, slash cluster name, and that will allow me to communicate with the Kube API server on the managed cluster. One key thing to note here is that this is a reverse proxy. So for all traffic, there's no inbound connectivity required to the managed cluster, much like most of the plugins that are on ACM, right? It's the managed cluster communicating back to the hub and not the other way around. Okay, so you've created a secure tunnel that is basically initiated by the managed cluster and handshaked by the hub. Now you have a secure tunnel and you can access the Kube API. And I'm gonna go ahead and guess you're about to tell us that step one of running an Ansible playbook targeting all of your managed clusters is to configure communication through that secure tunnel you've created. We created utilities to help with these. So if we take a look at the demo playbook, you will see how to enable the appropriate components with the plugins within our module, sorry, with the modules within the collection, and how to set this up. It's pretty boilerplate-y. So this is the first part, right? We are enabling this cluster proxy add-on on the managed cluster. And then there's a second thing that we're using called the managed service account add-on. The managed service account add-on is another thing that came out of the open cluster management open source project. By the way, also on contributions: if you guys have not had the chance to check out our upstream project, please do. So the managed service account add-on is exactly what it sounds like.
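The slash-cluster-name addressing Hao describes would show up in a kubeconfig roughly like this. The hostname and exact URL shape here are assumptions for illustration, not the add-on's documented endpoint:

```yaml
# hypothetical kubeconfig fragment; the user-service hostname is made up
clusters:
- name: my-managed-cluster
  cluster:
    # the hub's user service, with the managed cluster's name appended to the path;
    # traffic entering here is tunneled over the agent's reverse connection
    server: https://cluster-proxy-user.apps.hub.example.com/my-managed-cluster
```

The important property is the one noted above: the tunnel is initiated outbound by the managed cluster's agent, so no inbound firewall holes are needed on the managed side.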
It allows you to create a service account on the managed cluster and, using the TokenRequest API for service accounts, request a service account token and feed that back to the hub, so that you have a credential to authenticate against the managed cluster. Now, you can definitely use other methods of obtaining an authentication credential to the managed cluster, but this is just one of the convenient ways that we provide. So you've so far gone through, configured, and verified a configuration of the proxy, and then you've gotten authentication to that managed cluster, acting from the hub, through a managed service account. And now you have how to contact the managed clusters, plural, how to contact them all and how to authenticate with them, in two quick, easy steps, for the whole inventory. Correct. So here, the first part is just enabling the add-ons. The second part here is obtaining the proxy URL. And then here we are generating a dynamic managed service account. This dynamic managed service account actually only lives for the time that this playbook runs. So afterwards, this service account is destroyed. And if, let's say, your playbook imploded in the middle of execution, right, there's a way of setting an expiration time for these managed service accounts so that they self-destruct after a certain period of time. So you're not accumulating or leaking service accounts and credentials. And then there's another part that we ended up providing in our collection: well, now that you have connectivity and authentication, the next part is we need to have authorization, right? Because at the end of the day, you're trying to do things on those managed clusters. So another module that we provided is a module for you to create RBAC resources on the managed cluster for the managed service account. This is done via the ManifestWork API provided by ACM. Okay, so basically: step one, access; step two, authentication; step three, permissions.
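The three steps just summarized, enable the add-ons and obtain the proxy URL, request a short-lived credential, then grant it permissions, could be sketched as tasks like the following. The module and parameter names are guesses pieced together from the description above, not the collection's verified API; treat this as the shape of the flow, not a copy-paste recipe.

```yaml
# access-setup.yml -- hedged sketch; module/parameter names are assumptions
- hosts: all
  connection: local
  gather_facts: false
  tasks:
    - name: Obtain the cluster-proxy URL for this cluster     # step 1: access
      stolostron.core.cluster_proxy:                          # assumed module name
        hub_kubeconfig: "{{ hub_kubeconfig }}"
        managed_cluster: "{{ inventory_hostname }}"
      register: proxy

    - name: Request a short-lived managed service account     # step 2: authentication
      stolostron.core.managed_serviceaccount:                 # assumed module name
        hub_kubeconfig: "{{ hub_kubeconfig }}"
        managed_cluster: "{{ inventory_hostname }}"
        state: present
      register: msa

    - name: Grant the service account RBAC via ManifestWork   # step 3: permissions
      stolostron.core.managed_serviceaccount_rbac:            # assumed module name
        hub_kubeconfig: "{{ hub_kubeconfig }}"
        managed_cluster: "{{ inventory_hostname }}"
        managed_serviceaccount_name: "{{ msa.name }}"
        rbac_template: roles/namespace-admin/   # directory of Role/RoleBinding YAML
```

The registered `proxy` and `msa` results would then feed the business-logic tasks that actually talk to the cluster.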
So now you have a way to access the service account, and the service account has access to only the resources you need. And you also said that the service account self-destructs once you're done. Yeah, you can either explicitly delete it at the end of your task, which is what I'm doing here, but if you don't, you can set an expiration time for these managed service accounts and they will self-destruct after the expiration time. So in the case that you might have a lot of jobs that are using a service account, you can have a managed service account recreated, you know, every 12 hours, every 24 hours, to prevent bloat of a bunch of objects. Well, the plugin supports two different modes, right? One is a dynamic service account, one is a static service account. The dynamic service accounts have generated names, so that even if you have 60 or 100 of these playbooks running, they don't step on each other. They don't delete each other's service accounts. Okay, that's amazing. So at this point, I'm guessing the step is: do whatever you need to do against every cluster that you matched, and then close out, and that's your conclusion. But you said we're making a namespace. So have we made a namespace now? All right, let's see. I guess I should have shown it beforehand, before the namespace was created, right? So I guess we can take a look at the timestamp and it should be fine. Here we go. It was created five minutes, 11 seconds ago. Yep. Again, I completely understand that this is a really, really trivial example, right? If you're just creating a namespace on a cluster, I can do that here, like, with kubectl. But I think the key thing to take away from this demo is that this gives you... like, you don't have to manage any credentials. You don't have to know... well, hold on a second. Let me, let me... so. Can you bump the font size a little? I've got four terminals going here. Yeah, we didn't test this ahead of time, by the way. Technical difficulties.
All right. So you can see that different cloud providers have different ways of authenticating, right? And they have different patterns for the API servers; it gets a little messy. Like, for example, the kubeconfig that's generated by AKS: okay, that one is portable. I can give this file to Joydeep and Joydeep should be able to use it. But the one that's generated by EKS, hopefully I'm not wrong, because I'm about to show the kubeconfig here; if it's wrong, please don't hack my cluster. See right here: the way that it obtains the token is actually by running an exec. Oh yes, the aws-iam-authenticator. So it goes through your actual AWS identity. Right, so in order for me to give Joydeep access to this cluster, well, actually I have to grant him access to my AWS account, which I don't think I trust Joydeep enough for, if I just want to give him a cluster to use and then destroy it later, right? So there are just so many different plugins and different ways that you can authenticate with a Kubernetes cluster; it gets a little chaotic. So yeah. Yeah, so what you're showing here is the core pieces of, and this is a plug for last week, where we talked about open cluster management: the community has made one homogeneous way to access and authenticate and talk to this incredibly heterogeneous landscape. I can tell you we have an EKS cluster that occasionally has an expired certificate, and the only way we get into it is this exact method, because the original person who made it on AWS has left the organization, their account's no longer there, it's a headache. So that's amazing. Yeah, and the other thing that I see while you were talking: let's say I'm a seasoned Ansible admin. I don't know much about Kube.
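For context on the exec-based kubeconfig being shown, an EKS-style user entry typically looks something like this. The cluster name is a placeholder; the point is that the token comes from exec-ing aws-iam-authenticator, which is why the file is tied to the holder's AWS identity and not portable the way a token- or certificate-based kubeconfig is.

```yaml
# users section of a typical EKS kubeconfig; cluster name is a placeholder
users:
- name: my-eks-cluster
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      command: aws-iam-authenticator          # runs locally, signs with your AWS creds
      args: ["token", "-i", "my-eks-cluster"] # mints a short-lived cluster token
```

Handing this file to someone does nothing unless they also have the AWS credentials behind it, which is exactly the sharing problem Hao describes.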
I can use my same tools now to do things in Kube, and the greatest, the most important thing is that all the security aspects, et cetera, are all being taken care of for me automatically. If I did it myself, I would probably create a trail of mess, you know, a trail of service accounts or whatever, credentials would leak, et cetera. And the other point, of course, Hao, as you said: clusters in different areas, in different clouds, et cetera, you have to handle authn and authz differently, and things like that. That's all being taken care of. So this is way, way powerful. Yeah, donning my SRE hat: if I need to give someone access to some clusters to do some operation, or I need to do some task, I can write an Ansible playbook that does that task, verify it against one cluster, and hand it over to this collection. I haven't linked it yet; I need to link this actual Ansible collection. And then it will just do the heavy lifting for me. We have a question as well, Hao, and now might be a good time to talk about it. Scott asked, let me pop it up: real world example, what if the cluster's busted? Let's say the cluster upgrade fails, never getting to a happy state. Can you use these methods to grab logs and remediate? And I'm going to hazard a guess, Hao, tell me how wrong I am: as long as you can authenticate to the cluster through a service account, you can use this method to communicate with it, capture logs, remediate, do whatever you want. Well, one great thing about the Ansible ecosystem is that it allows you to automate against, I think, pretty darn much anything, right? So let's say the upgrade's busted and my Kube API server is down. Yeah, even then we can do it. Well, if we can SSH into the VMs that are running this, or the hypervisor, right, or AWS, perhaps we will be able to gather related information about how this mess happened.
So this is a potential way of intersecting what's inside of your Kube and what's outside of your Kube, because it's more flexible. GitOps might be able to deliver some content to a cluster; policy might be able to configure some part of a cluster. This can configure the VMs in AWS, and it can go out and SSH into a bastion node and use that, or do a bunch of this extra behavior. And this looks Ansible-native enough that you could just start applying other Ansible collections to it as well. Like, I know there's a k8s collection as well. Right, and one of the use cases I was thinking of, that I didn't really have time to actually write a demo for: for example, let's say I know that there is an impending meteor strike on, I don't know, us-east-1, right? I have a standby cluster in us-east-2. I can label my application and tell this: okay, go gather all the artifacts off the cluster that's labeled this way, now move them to my other cluster, and then test if the new application is working. And once it's working, hook it up to the load balancer to make sure that it routes to the new cluster, or something like that. It allows you to coordinate things that are outside of your Kubernetes environment. Because... You can make changes to your load balancer and point to your us-east-2 location instead of us-east-1 once you're done doing your work. That's some flexibility there. All in, I assume, one playbook, all applied to multiple different clusters. You might have a region that's all US East and you want to replicate it in US West. Well, congratulations: you write something that duplicates it, you're done, and you add those load balancer entries. That's amazing. Oh, by the way, you know that thing that you said earlier about generating a kubeconfig and then giving it to someone for a short duration of time? Yeah. Funny enough, that was one of the debug tools. As I was working on this collection, it was one of the debug tools that I created.
So, I mean, at the end of the day, right, at the end of the day, how this works is essentially: get the information that's needed for a kubeconfig and then call the k8s module with that information. Well, we can also output all that information, right, into a kubeconfig and have it work. So... no, I gotta pause there for a second. So what you're saying is I can use this managed service account tool, through this Ansible playbook, to, for example, have an Ansible playbook trigger off of some condition. Maybe someone opens a ServiceNow ticket against me and says, hey, I need access to this namespace on this cluster, and I approve it, and then this playbook goes and it runs and it makes a service account that lives for 12 hours and gives them a kubeconfig, and I don't have to touch anything. I just need to touch my badge and say, you're allowed to have one, and this'll do the rest. Yeah, pretty much. So I'm gonna go ahead and run this, although I haven't run this in a while. But yeah, essentially this just grabs the information from ACM, templates out a kubeconfig, and then I can send that kubeconfig to Joydeep. I'm gonna go ahead and show the playbook a little bit, and show how you can modify this to give different permissions to this service account and to modify how long you want the service account to live for, right? So here is the playbook. I think right now I have it generating a static one, right, because this is for my own debugging purposes. I trust myself, I think, sometimes, on good days. And right now I'm giving it cluster admin permission, right? So let's take a look. In this directory here, I have the role binding for cluster admin. I also have a role binding for, let's say, namespace management, right; I only want it to be able to manage namespaces. So I'm gonna try to modify this live. If it doesn't work... am I being too brave, Gurney? No, no, not at all. This could save me so much work.
I can give people service accounts that only last for 12 hours. All right, let's see if this works, right? So I am targeting this. By the way, this is definitely not rehearsed, Scout's honor. So yeah, I'm still creating a static one. I could change that to dynamic, but I need to make sure that I spell it correctly: temp access. If I'm using temp access, I need to give another parameter here to specify how long I want this to live for, right? So right here, this is an example. This is the playbook that I ran earlier, right, to create the namespace. In here, I'm getting a temporary managed service account, and I'm asking it to live for 60 seconds. And let's make this short. Do you guys want to see the expiration, or...? Yeah, let's do the expiration. Let's save a lot of work for a lot of admins, I think. Yeah. Talk about giving leases, you know: you have a lease and your access will automatically self-destruct. Yeah. Hopefully this works. And I guess we should say, you can probably run all of this in Ansible Automation Platform as well, so you can run it off in the cloud, triggered by something, or locally. Yeah, I could have uploaded this. All right. Oh, I covered it up, my bad. Please work. Oh darn, it's 10 seconds too short. Oh yeah, no, 10 seconds. I thought, yeah, 30 seconds. 10 seconds was too short. You got it typed up this time, at least. And Gurney, back to the flow that you were mentioning, I think that makes a lot of sense. Somebody creates a ticket in ServiceNow and all of this happens automatically. Yeah, exactly. You just approve. I'm envisioning... I've had so many people be like, hey, I need access to a Kubernetes cluster for three hours. And it's like, well... I can see you making me the guinea pig. As you know, I have been made a cluster admin once in a while. For example, right, Gurney? Let's say we have a CI cluster that runs into a problem. Oh yeah, temporary access. Temporary access for, like, an SRE or someone to go onboard it.
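The temp-access pattern being demonstrated might look something like this as a task. As with the earlier sketches, the module and parameter names are guesses from the on-screen description, not the collection's verified API; the idea being illustrated is a generated name plus a TTL scoped to roughly the playbook's runtime.

```yaml
# hedged sketch of a temporary managed service account; names are assumptions
- name: Get a short-lived service account for this run
  stolostron.core.managed_serviceaccount:       # assumed module name
    hub_kubeconfig: "{{ hub_kubeconfig }}"
    managed_cluster: "{{ inventory_hostname }}"
    generate_name: temp-access-     # dynamic mode: unique names avoid collisions
    ttl_seconds: 120                # long enough for the playbook, not much longer
  register: msa
```

As the conversation shows, picking the TTL is the one judgment call: 10 seconds was too short for even a trivial playbook, so a small buffer over the expected run time is the safer choice.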
Are you telling me that having cluster admin all the time isn't a healthy thing to give your SRE team? Well, I don't know. Like, with ACM, right, if you truly buy into treating your clusters as cattle, maybe it's fine. But I doubt that. I mean, I don't trust myself. So this looks wonderful. I don't trust Gurney. I will tell you from experience that it is safer to log on with lesser credentials. I have seen two of the best people make drastic mistakes, just accidentally. That happens when you're working stuff out there. You got unauthorized in the middle of that. 30 seconds is also an option. Come on, 30 seconds won't work. Give it two minutes or something like that. Well, that playbook ran through within the time. So, well, one thing that this temporary access is meant to do is, like, you know how long your playbook typically runs for, and since it's a system that's running it, you're kind of fine with tightly scoping to that time period, right? Yeah. Did I save before I ran? Okay, I'll save it. All right. Yeah, and also, is this the namespace role? It's supposed to give you namespace access, because you'd have to do 'oc get namespace' to test it. Oh, that's true. That's very true. I actually tightened down the... Gurney, you're right, you're right. It's not because of the time. It's because I didn't have it; it didn't give it permission to access cluster info and pods. I only gave it access to namespaces. Now in 60 seconds, you should run this again and get unauthorized, right? And yeah, let's give it... I'm not gonna watch this for 60 seconds. So if I pan away from this window, will anyone think that I'm futzing with it on the back end, or do we want to just stare at that clock? I think we'll trust you. We are doing it live. Yeah, the clock's on the live stream. We got it on the record. So yeah. No good. Oh, there we go. There we go. Self-destructing. Yay. So, a good way of giving temporary access. That is magnificent.
And I can imagine, in the same way, you could do a lot with that time to live. So yeah, and now you have playbooks. What you've walked me through has given me the thought of: someone requests access to a project on an OpenShift cluster, which is another common one. You write a playbook, you go out, you select the cluster that matches their selector, you create a project, you give them a managed service account that they can access for however many hours, and when that's done, it self-destructs, and another playbook runs and destroys that project. I can imagine that being a workflow; that'd be amazing. Yeah, that's perfectly legit too, right? This is actually one of the examples that you can use. This is the RBAC configuration for allowing access to a specific namespace. This is actually something cool that I got working last night; I want to show a little bit of it. But yeah, you can create a namespace, right? Create a service account that only gets access to that namespace and whatever the user needs, and then have another playbook to reclaim the namespace afterwards. Nice. So the last question on my mind is: what does it look like to actually invoke these steps to configure the proxy to access a cluster and configure the service account? Because you showed us the service account step just a second ago. What does the proxy look like? Do you have to do anything for that, or does that just happen? It's all in the playbook in the demo. Okay. And I can go a little bit into it. By the way, even though I'm working on Ansible right now, I'm still very much a novice at Ansible. I know that my playbooks are not very clean, not very consistent syntactically, right? So please take a look, send a PR, help me learn more idiomatic Ansible practices if you are an Ansible expert, or heck, play around with this playbook, hack it up, and do the cool things that you want to do.
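The "RBAC configuration for allowing access to a specific namespace" Hao mentions is ordinary Kubernetes RBAC. A minimal sketch, assuming the managed service account lands in the `open-cluster-management-agent-addon` namespace on the managed cluster; the resource names and the verb list here are illustrative, not taken from the demo repo:

```yaml
# Sketch: scope the temporary service account to a single namespace.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: temp-project-access
  namespace: demo                  # the project the requester asked for
rules:
  - apiGroups: [""]
    resources: ["pods", "services", "configmaps"]
    verbs: ["get", "list", "watch", "create", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: temp-project-access
  namespace: demo
subjects:
  - kind: ServiceAccount
    name: temp-access              # the managed service account's name (assumed)
    namespace: open-cluster-management-agent-addon
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: temp-project-access
```

A reclaim playbook then just deletes the namespace and the RoleBinding once the lease is up.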
But let's go back and take a look. So this is the playbook I ran earlier, right? There is a task that I created for getting the temporary access. There's a task for doing the business logic, which is just creating a simple namespace here. And then there is a task that always executes, even if the business logic fails, which removes the access, a garbage collection step, although as you saw earlier, since there is a time to live, you don't really need to garbage-collect if you are lazy. But let's go into this task and take a look. Do I really have to zoom in this much? Yeah, I think so. It's very small. Okay. All right. There we go. Okay, I'll keep it at this. Yep. As long as you don't show something at the bottom of the screen, we can get it a little bigger. Okay, awesome. So this is all just modules within the stolostron.core collection. I think Gurney's gonna pop up a link to that collection. Yeah, I sent it earlier, but I'll toss it out again. Awesome. And the collection itself provides a lot more utility than what I have demonstrated here, like some basic functionality for configuring ACM and working with ACM, right? It's very much a work in progress, in its early, early phase. The release version, 0.0.1, is how early we think it is. But it provides the utility for enabling features within ACM, so cluster management add-ons, right? Enabling add-ons for specific clusters, so managed cluster add-ons, and a couple of specific modules to interact with the specific functions that I mentioned earlier, like cluster proxy and managed service account. And that's it. So there's nothing that tricky in this playbook: interact with the hub, enable the add-ons that we need, grab the information that we need, and pass that information on to the playbook that does the business logic. Okay. So basically, I guess this is RHACM out of the box; I assume multicluster engine too, and maybe the open source project, though I can't guarantee that one.
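The acquire/work/always-clean-up shape Hao describes maps naturally onto Ansible's `block`/`always` construct. A minimal sketch, with the task-file names and the `temp_kubeconfig` variable invented for illustration rather than taken from the demo repo:

```yaml
# Sketch: get temporary access, do the work, and always clean up afterwards.
- name: Create a namespace with a short-lived service account
  hosts: localhost
  gather_facts: false
  tasks:
    - block:
        - name: Acquire temporary access (task file name is illustrative)
          ansible.builtin.include_tasks: tasks/get-temporary-access.yml

        - name: Business logic, here just creating a namespace
          kubernetes.core.k8s:
            kubeconfig: "{{ temp_kubeconfig }}"  # assumed variable set by the access task
            state: present
            definition:
              apiVersion: v1
              kind: Namespace
              metadata:
                name: demo
      always:
        - name: Garbage-collect the access, even if the work above failed
          ansible.builtin.include_tasks: tasks/remove-temporary-access.yml
```

As Hao notes, the time to live means the `always` step is belt and braces: even a playbook that dies mid-run leaves nothing long-lived behind.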
But if you configure it similarly, since we're just using that technology, then theoretically you can just walk up with your Ansible playbooks targeted at Kubernetes and use this to access those clusters natively and immediately and securely and temporarily to carry out remediations. And I assume all the eventing and everything is Ansible native. So if you're used to eventing around Ansible, triggering playbooks from ServiceNow or events or alerts (I know, Joydeep, observability alerts are a big piece of it), then responding to alerts and remediating in an automated fashion is probably a big deal. Now, we could talk a little bit later about the Event-Driven Ansible stuff. I think it's a great topic for another live stream. I would definitely love to talk to Joydeep about it. And Gurney, of course. Sorry. Let me push you a little bit; I want to harp on something. If I'm running a kind cluster, okay? Managed by ACM, perhaps. Yeah, it has to be. The kind cluster's API cannot be reached from elsewhere. This can reach it, that's what you're saying? Yeah, this is a reverse proxy, right? So long as I can establish an outbound connection from the managed cluster to the ACM hub, then yeah, I can. I'm about to do this live. This is very neat. Let me delete my old kind cluster. Well, while this thing spins and runs... I don't think this is gonna take long. Let me go ahead and create a kind cluster, and let me go ahead and import that. This is a kind cluster running on my laptop. And I'm absolutely not exposing a kind cluster that's hosted on my laptop to the scary broader internet. You're just stepping up to the kind cluster and telling it to go talk to the hub cluster running in, I guess, AWS. Yeah, I should have probably spun up this cluster beforehand. Okay, there we go. There we go. And with my laptop running a kind cluster, it's running a little slow now. Time for a laptop upgrade. I know, right?
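Enabling the reverse proxy for a newly imported cluster is add-on enablement on the hub. A sketch of the ManagedClusterAddOn resource that would turn on cluster-proxy for the kind cluster; the cluster namespace is a placeholder, and the install namespace reflects the usual open-cluster-management default, so verify against your own hub:

```yaml
# Sketch: enable the cluster-proxy add-on for one managed cluster.
# The add-on agent dials out from the managed cluster to the hub,
# so the kind cluster never has to accept inbound connections.
apiVersion: addon.open-cluster-management.io/v1alpha1
kind: ManagedClusterAddOn
metadata:
  name: cluster-proxy              # managed-serviceaccount is enabled the same way
  namespace: my-kind-cluster       # the managed cluster's namespace on the hub
spec:
  installNamespace: open-cluster-management-agent-addon
```

Because only the outbound tunnel is needed, this is what lets a laptop behind NAT (or an Airstream) be driven from a hub in AWS.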
I heard great things about the M1 laptops. I really want to get one. I have one in the closet if you need to borrow one; I have my Fedora machine up now. If you're offering. I think that one has my work ID on it, though. I'm okay pretending to be Gurney. Okay, so you're gonna import this now? Yeah, the paste just gets a little slow. Oh, okay. So you've hit paste on the incredibly large payload that you need to paste in, okay. Fun fact: I actually designed that incredibly long string payload. I regret that decision. He only has himself to blame. Just kidding. I absolutely regret this decision. Could it be a curl pipe? Just saying. Instead of encoding all the YAML in base64, pasting it into the terminal, and then piping it to a bash script. I'm sorry. All right. Okay, awesome. The kind cluster's up. That's incredible; I'm always shocked. And now you're about to show us an Ansible playbook making a change in the kind cluster that's running on your laptop, but the Ansible playbook's running on a cluster that is running in AWS. Just so we get what you're doing live here. Sure, imagine this laptop is an on-prem cluster, right? No, I think what's more shocking is where the laptop is running, and I'm gonna pull back the curtain here, Hao being a personal friend: he is sitting in an Airstream outside of his house right now. In a pseudo Faraday cage. So this is an edge device demo, is what we're talking about here. Far edge. There we go. I don't know. I think I need Elon Musk's, what do you call it? Starlink, yeah. For this to be far edge. All right, so, okay. Here's my inventory. It syncs; the kind cluster appeared. You know what? Let's just target that one kind cluster instead of running this against all of them. Before you run it, also remember, if you're gonna run the namespace create, go ahead and show us the namespaces before, so we know this is actually happening to your kind cluster. You don't trust me? Okay. All right, so no trick up my sleeve, right?
Let's launch this. This is the wrong playbook. By the way, we need to save some time for this really cool other thing that I wanna show. Okay, we have a few minutes left, so it'll be the next thing we do. All right, awesome. Let's just run this on the kind cluster. It does have to pull down a couple of images, though. Yeah, your kind cluster specifically has to pull down a couple of images. It's running on your desktop, which is in an Airstream that is essentially a Faraday cage. So, like, my download speed is, I don't know, a meg on a good day. Well, you said they cut your fiber yesterday, so... That's a whole other thing. Oh my goodness. There we go. It's actually gotten it done. All right. So, pivoting to this other cool thing that I got working last night with the same technology that we've been demonstrating. I don't know how much you know about container groups; it's safe to assume zero for me. You mean Linux control groups? No, exactly, and that's a problem I have with the naming. In Ansible Automation Controller, you can add instances to increase its execution capacity, right? Imagine, you know, a remote VM sitting somewhere that you get to run the playbook on against everything else that's in your data center. Or whatever; just additional execution capacity. And you can also add a thing called a container group. But essentially a container group in Ansible Automation Controller is just a Kubernetes cluster that you can talk to, right? That you can dispatch your Ansible jobs to. And the way that Ansible Automation Controller connects to the container group is through the Kube API server, with a token and with authorization to create a pod, right? Which is exactly what the technology that I've been demonstrating does. That's exactly what it did. Right, so, hijacking that mechanism, now I can add all the managed clusters.
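For context, a container group in Ansible Automation Controller (AWX upstream) is configured with a cluster credential plus an optional custom pod spec describing the per-job worker pod. The default looks roughly like this; the image and args follow the AWX documentation and the namespace matches the one Hao's playbook pre-creates, so treat the details as approximate:

```yaml
# Sketch: the pod a container group launches on the target cluster for each job.
apiVersion: v1
kind: Pod
metadata:
  namespace: ansible-automation-platform   # the namespace Hao's setup playbook creates
spec:
  serviceAccountName: default
  automountServiceAccountToken: false
  containers:
    - name: worker
      image: quay.io/ansible/awx-ee:latest  # an execution environment image
      args: ["ansible-runner", "worker", "--private-data-dir=/runner"]
```

So "adding a managed cluster as a container group" boils down to handing the controller a token that is allowed to create pods like this one, which is exactly what the temporary managed service account provides.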
So you're about to say that you're going to, in one fell swoop, by some label match, turn every single Kube cluster that matches that label in your fleet into Ansible execution capacity. So, just for example, maybe you have a bunch of different regions that you're going to run Ansible jobs in. But networking latency is lower if you talk from us-east-1 to us-east-1, and us-east-2 to us-east-2. Different regions might have their own execution units, and you can configure all of that with one hub that knows about all of them. Yeah, essentially I'm turning every single cluster right now that's being managed by this instance of ACM into a container group that Ansible Automation Controller can use to execute playbooks, thus providing more capacity to the Ansible Automation Controller itself and also allowing you to be closer to what you are automating. And how are the playbooks actually pushed into the execution engine? Or is it pull-based? So, much like how Ansible Automation Controller runs a playbook on a Kubernetes cluster, it's just creating a pod on that cluster, right? To run the playbook. So this is what I just did: give Ansible Automation Controller access and, well, set up all the managed clusters. So, for example, creating a namespace and configuring all the prerequisites. You will see a new namespace pop up on all the clusters, including my kind cluster, called ansible-automation-platform, right? We create and configure this namespace, and configure RBAC and image pulls for the namespace so that Ansible execution environment containers can run on those clusters. And then I added these clusters into the instance group inventory in Ansible Automation Controller. So let's try to do this on my kind cluster, right? Let's run the demo job real quick. And I'm gonna target this to one of the... Is this a hello world or something like that? Yeah, this is just printing hello world.
But you will see a container being created. This is the job that you're starting to run now on that cluster. Oh, page two, there we go. Yeah, this is just running a hello world playbook. Let me set up the watch. I think it's... now I'm not running a playbook against the managed clusters; I'm running a playbook directly on the managed cluster, against anything that that managed cluster can touch, can connect to. That's amazing. Also, I just noticed we're at the top of the hour, and today we have another show immediately after this. So we're gonna have to call it and leave it on, I think, a bit of a cliffhanger for you, Hao. No problem, let me stop the share. Before we go, I highly encourage everyone that's here to please check out the demo repo. Try to play with it, modify it, and DM me. Like, follow, and subscribe. Awesome. Well, thanks, Hao. Thanks for joining. Sorry to end a little abruptly. We're gonna end today; we've run over into Supply Chain Security Best Practices, so catch them live here in about two minutes. Thanks to everyone, and all the links should be in chat. We'll see everyone next week. No intro as the outro this week; we're gonna head out. Thanks everyone. Thank you.