I got into ops, then got into ArgoCD, and became co-author of the ArgoCD Lovely Plugin, and hence want to talk a bit about how plug-ins work in ArgoCD, and a bit about how I would like them to work, which is as services, to make them easier. Let's move on.

So what are plug-ins? Config management plug-ins, specifically. Config management plug-ins are the component in the ArgoCD repo server that you can plug into to change how it morphs the stuff you have in Git into Kubernetes YAML. If you were in this room about two hours ago, you would have been told that this was a really bad idea: the talk on the rendered manifest pattern is basically anti using config management plug-ins in most circumstances.

So what can we do? You've probably all heard of Helm, and hopefully used it. Helm is a thing that morphs something that's not quite Kubernetes YAML into Kubernetes YAML, and you may well be using that in ArgoCD. That's not actually a config management plug-in in most normal uses, because it's built in to ArgoCD. A completely different type of thing you might use is Hera. If you've been in any of the workflows talks by Pipekit, they've probably mentioned Hera at some point. Hera is a Python SDK for writing Argo Workflows. You could write yourself a plug-in that morphs the Python in your Git repository into Kubernetes YAML; in fact, the ArgoCD Lovely Plugin that I mentioned earlier can do this for you.

When you're using a config management plug-in, you can access it directly by name, so you can specify that this particular ArgoCD application needs to use this particular config management plug-in (there's a sketch of this below). That can be a specific version of the plug-in, or it can just use whichever one it finds. You can also ask the repo server to discover plug-ins, which means each of the plug-ins in the system gets asked: can you handle this particular directory within the Git repository that the application consists of? If it says yes, that plug-in will be asked to render the manifests for you. As part of configuring a plug-in, you write a YAML file called plugin.yaml. I'm mentioning that here because we're going to talk about it a bit later.

So the current mechanism for using plug-ins in ArgoCD is to stick them onto your repo server as sidecars. This means you've got one pod with the repo server in it, and then each of the plug-ins is a sidecar sitting inside the same pod as a separate container. It's nice because they're isolated from each other, and they negotiate how they're going to do this rendering using a network protocol called gRPC. So that already exists. The thought was: we've already got a network protocol, so why can't we take the plug-ins out of the repo server pod, find them by looking them up as a service, and then back that with whatever we like? I'm going to suggest you would mostly want to back it with a Deployment, but you could back it with anything you like. It could be a StatefulSet. It could be an external service completely outside of Kubernetes. I don't know why you would want to be outside of Kubernetes, but that's your call.
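To make the by-name case concrete, here is a minimal sketch of an Application that pins a specific config management plug-in. The repository URL and the plug-in name shown are illustrative placeholders, not from the talk:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/my-repo.git   # placeholder
    path: apps/my-app
    plugin:
      # Sidecar plug-ins are addressed as <metadata.name>-<spec.version>
      # from their plugin.yaml, e.g. "lovely-v1.0" here.
      name: lovely-v1.0
  destination:
    server: https://kubernetes.default.svc
    namespace: my-app
```

If `spec.source.plugin` is present but `name` is left empty, the repo server instead falls back to discovery, asking each registered plug-in whether it can handle the directory.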
So, given that thought, a problem I think moving things out of the repo server will solve is plug-in installation. My agile story is: as a plug-in author, I'd like people to be able to play with my plug-in easily and uninstall it easily. It's not trivial today. The stuff on the right there is the instructions for how to configure a sidecar (roughly sketched below). It's not terribly hard, but it's all bundled into one big YAML manifest for a single pod with lots of containers in it. It's not entirely apparent, especially if you didn't personally install it, how you might uninstall it, how you might upgrade it, or how you might fix problems with it.

Instead, we could have a Helm chart. The ArgoCD Lovely Plugin could have a Helm chart: you deploy it, it deploys the Service and the Deployment, and then you've got Lovely, which is lovely. It could be a Kustomize manifest. It could be whatever you like, but at least it's separated, and it can be installed as a separate ArgoCD application in that case. Hopefully this would improve the understandability of plug-ins within ArgoCD, because manually installing sidecars is not a common pattern within Kubernetes, in my experience. Installing Helm charts and Kustomize things? Yep, we do that all the time.

It could also help with plug-in development. Kubernetes and Argo are sort of Lego ecosystems: building blocks on which you build more great stuff. As Argo in general has gotten bigger and more complex, it's become harder for the maintainers to maintain all of the cool things that people would like to do within the Argo ecosystem. People keep wanting to add more stuff, and plug-ins in general across the ecosystem (plug-ins within Workflows, plug-ins within CD, plug-ins within Rollouts) are all being used to take that out of the core of the system and allow you to develop things that do the particular thing you want to do. It's currently sort of hard: you have to have admin rights in order to put sidecars onto your repo server. That's not necessarily a problem, because you might just do that locally, but then your iteration loop is continually restarting your repo server. Again, you can work around that, but the normal pattern for developing these things becomes restarting a container just to pull in the latest version of what you've written.

I'm going to go over a few downsides of this model. First, you're going to have to transfer your repository from the repo server to the plug-in, and this happens every time you want to render, because the plug-in has to have a copy of the repository. Large repos aren't particularly recommended for ArgoCD anyway, but previously this copy happened between two containers within a pod, which is pretty rapid. This will be slower: it's going across a probably real network between two nodes in your cluster. Discovery is identical: the repo has to be transferred between the repo server and the plug-in for discovery to do its job, so that's going to be slower too. And the repo server is really nice from a Kubernetes security point of view, because it has no access to the Kubernetes API; we are now saying we're going to need to give it read access to Services in order for it to discover stuff. I think the upsides make up for this. We're not going to remove sidecar plug-ins; the two can live alongside each other.

So, the upsides: you can scale these things independently.
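For comparison, the sidecar installation being described is a patch to the argocd-repo-server Deployment along these lines, following the pattern in the upstream ArgoCD docs; the container image and plug-in name here are placeholders:

```yaml
# Excerpt of the argocd-repo-server Deployment with one CMP sidecar added.
# The var-files and plugins volumes already exist in the stock pod spec.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: argocd-repo-server
spec:
  template:
    spec:
      containers:
        - name: my-plugin
          image: ghcr.io/example/my-plugin:latest   # placeholder
          command: [/var/run/argocd/argocd-cmp-server]
          securityContext:
            runAsNonRoot: true
            runAsUser: 999
          volumeMounts:
            # Shares the repo server's Git checkouts and the cmp-server
            # binary with the sidecar over a Unix socket directory.
            - mountPath: /var/run/argocd
              name: var-files
            - mountPath: /home/argocd/cmp-server/plugins
              name: plugins
            - mountPath: /tmp
              name: cmp-tmp
      volumes:
        - name: cmp-tmp
          emptyDir: {}
```

Every plug-in means another container spliced into this one shared manifest, which is exactly why uninstalling or upgrading one is not obvious to someone who didn't install it.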
If you're only using a plug-in for 1% of your applications, you might only need one copy of that plug-in, but if you've got 10 repo servers, it's going to be stuck onto all 10 of them. Maybe you get burst load on that plug-in: you can use a Horizontal Pod Autoscaler for that. If you've got lots of plug-ins in your system (I don't know if this is an actual use case where people have hundreds of plug-ins), you're currently having to jam all of them onto the side of every repo server. So this should help with scaling, and should help in a way with performance, because you could have one small repo server and lots of plug-ins running alongside it. Running plug-ins on spot nodes becomes more viable in most cases, and you've separated out the life cycles of the plug-in and the repo server. If you upgrade the repo server at the moment, it takes everything down and brings everything back up; that's still going to happen. But if you upgrade your plug-in at the moment, you have to take your repo server down to do it, whereas with them separated you can do a nice rolling deployment.

Going back to the understandability point: it's more normal. People understand a Deployment. They understand putting metrics on a Deployment; they understand monitoring it. The Helm chart could therefore have a Prometheus ServiceMonitor as part of it, and then you can monitor it. A problem I've hit in reality is a container registry being down. Argo is hosted on Quay (pronounced "key", or "kway" if you're American enough), but we couldn't restart the repo server because GHCR, where the plug-in images lived, was down. With them separate, that situation might be better.

We've gone over a bit about plug-in development: you've got separation of concerns, and it's easier to understand what's going on. You could give your plug-ins different service accounts; you can't do that within a single pod. You could give one some very naughty service account permissions and do very evil things. You could make yourself a stateful plug-in. Don't come crying to me about all of that stuff, but you can do these things, and it's safer doing them separated from the repo server.

The Lovely Plugin was originally conceived when we got a bit upset about Helm and Kustomize not working very well together. It was originally conceived as: do what you hope would happen with this bunch of stuff in a repository. It's grown to be a bit of a Unix pipe joining together various transformations, so you can pipe the output of Helm into Kustomize. You can then pipe that through sed if you're so inclined, or some other system to pull secrets in, which is again a frowned-upon pattern, but you can do it, so it's great.

This is where not having my laptop is going to pull us down a little bit, because I'm not going to be able to demo some of this stuff, which I was hoping to do today. In the plugin.yaml, you define the name of your plug-in, you define the commands you're going to run, and you define what parameters that plug-in takes. In order to turn a plug-in from a normal plug-in into a plug-in as a service, you just need to tell it that you want to start listening on a port. That says: I'm going to start listening on port 8080. You're running that in a Deployment now, and it's listening on port 8080 for gRPC requests. The plug-in runs as the server side of the gRPC connection, and the repo server will discover your service. The service has a label on it. Again, the demo is broken, so I can't show you.
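Since the demo can't be shown, here is roughly what that plugin.yaml looks like. The ConfigManagementPlugin shape (name, version, generate, discover, parameters) is the existing ArgoCD schema; the listener address field comes from the proposal being described, so treat its exact name and placement as an assumption:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: ConfigManagementPlugin
metadata:
  name: lovely            # example plug-in name
spec:
  version: v1.0
  generate:
    # Whatever command turns your repo contents into Kubernetes YAML
    # on stdout.
    command: [sh, -c, "helm template ."]
  discover:
    # Answers "can you handle this directory?" during discovery.
    fileName: "Chart.yaml"
  parameters:
    static:
      - name: values-files
        title: Helm values files
  # Proposed addition for plug-in-as-a-service (field name per the open
  # proposal, so an assumption here): listen for gRPC on a TCP port
  # instead of a Unix socket shared with the repo server.
  address: ":8080"
```

With that in place, the same plug-in container runs under an ordinary Deployment as a gRPC server instead of as a sidecar.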
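In front of that Deployment sits an ordinary Kubernetes Service carrying a marker label for discovery; the exact label key and value shown here are illustrative, not from the talk:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: lovely-cmp
  namespace: argocd
  labels:
    # Marker label the repo server's discovery would select on.
    # The real key/value comes from the proposal, so this is an
    # assumption for illustration.
    argocd.argoproj.io/cmp-server: "true"
spec:
  selector:
    app: lovely-cmp
  ports:
    - name: grpc
      port: 8080
      targetPort: 8080
```

This is what the earlier downside referred to: the repo server needs RBAC read access on Services to find these.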
Then it can discover all the Services with the label that says "I'm an ArgoCD plug-in", and it will query them about what capabilities they have. It does so using a secret to ensure that they're all authenticated. If you could all cross your fingers: I'm still not going to be able to do the demo. I decided to try this out on somebody else's plug-in, one that I didn't change. Here is a plug-in that does helmfile manipulation. If you haven't come across helmfile, helmfile is a super Helm chart. If you don't like their plug-in, Lovely does it as well. I'm not trying to sell Lovely here, but it is lovely. You add the listener address into the plugin.yaml, and you have to somehow get the config management plug-in server component from ArgoCD into the image that you're going to run. Here I'm copying it in; you could volume mount it, which is how it works in a sidecar system. But that's the entire Dockerfile to convert somebody else's plug-in from a sidecar into one that works as a service (there's a sketch at the end).

This stuff is all in a pull request. That number up there tells you where it is. If you'd like to get involved, shout at me. Tell me how bad my code is. That's where you can find it. We could then go on to separate out Helm, Kustomize, and Jsonnet, the built-in capabilities of the ArgoCD repo server, into separate plug-ins: a core part, but running separately. This feels like a potential future path. Lovely is written in Go, so I would quite like to separate out the CMP server component so that I can just build a single Go binary. Then there would be a Go package, and you could start developing Golang plug-ins for ArgoCD knowing that most of your stuff is going to work more or less out of the box, because there will be a pattern for doing it.

I mentioned before that I work for Pipekit. We're Argo experts and maintainers; we've got contributors to both Workflows and the Helm charts, and now me doing some CD stuff. We make a control plane for workflows, so come and talk to us at booth E34 about anything we've talked about today, or about workflows. We also run some regular office hours where we talk about infrastructure stuff, or sometimes just the weather and chairs. There's a QR code if you'd like to sign up for office hours. Does anybody have any questions?
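For reference, here is a sketch of the Dockerfile described above. The helmfile plug-in image name and the binary path inside the upstream ArgoCD image are assumptions, but the shape is just: start from the existing plug-in image, copy in the CMP server component, and run it.

```dockerfile
# Sketch: convert an existing sidecar-style CMP image into a standalone
# plug-in-as-a-service image. Image tags and paths are assumptions.
FROM quay.io/argoproj/argocd:latest AS argocd

FROM ghcr.io/example/helmfile-cmp:latest
# Bring the config management plug-in server component across from the
# ArgoCD image (in a sidecar setup this arrives via a volume mount
# instead).
COPY --from=argocd /usr/local/bin/argocd /usr/local/bin/argocd-cmp-server
# plugin.yaml now carries the listener address as well.
COPY plugin.yaml /home/argocd/cmp-server/config/plugin.yaml
USER 999
ENTRYPOINT ["/usr/local/bin/argocd-cmp-server"]
```

That whole file, plus the Service and Deployment manifests, is what the pull request mentioned above wires together.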