All right, so let's begin. My name is Alexander Matyushentsev. I'm really happy to see you all here and happy to welcome you to our presentation. We're going to talk about using Argo CD Core as a pure GitOps agent for Kubernetes. I'm a long-time maintainer of the Argo project and currently Chief Architect at Akuity, a company that tries to move Argo to the next level. We have two speakers today, so I will let my co-speaker introduce himself. Hi, everyone. My name is Leonardo, and I usually go by Leo. I'm one of the maintainers of the Argo CD and Argo Rollouts projects, and I work as a staff software developer at Intuit. OK, let me start by presenting the agenda for today. I will start with the problem statement, a list of use cases that Argo CD Core aims to address. But before jumping into what Argo CD Core is, I want to give you a quick overview of the Argo CD component architecture. I believe that with this small piece of knowledge, things are going to be less magical and more logical. Honestly, I prefer it that way; maybe you do too. Then I'm going to give a brief introduction to Argo CD Core, and that's the moment when Alex will join the presentation, showing you the use cases that I listed in the problem statement slide and giving you a demo. One quick note: in one of the later slides, Alex is going to show a QR code with a link to the GitHub repo he's going to use during his demo. So if you want access to that repo for future reference, it may be a good idea to keep your phone ready to scan that QR code. Cool? All right, so let's get started. In the project, we like to hear constructive criticism, and this talk was actually based on a list of it.
It usually goes like this: Argo CD is great, but feature X doesn't fit our use case, to the point that those users will consider alternatives. That's not a problem per se, but let's see what those feature Xs are that some users complain about. The first one is multi-tenancy. There are use cases where, for example, you have a cluster admin who wants to manage the whole cluster and has no intention of sharing that cluster with other teams; they don't need isolation. So for that type of use case, multi-tenancy doesn't make a lot of sense. The next common complaint we hear is the proprietary RBAC model. If you know a little bit about Argo CD, you know that the default installation ships with a proprietary RBAC model, which means that to configure roles and decide what those roles can do in Argo CD, you have to configure a dedicated ConfigMap following the syntax provided by Casbin, a library that we use. Casbin provides great flexibility, but some users are willing to trade that flexibility for a centralized approach: they prefer to centralize all RBAC configuration in Kubernetes, which is a valid use case as well. Another complaint is the proprietary API. Some users prefer not to learn a new API, the one provided by Argo CD. They would rather just rely on GitOps, applying things directly in Git with a commit and a push, and then expect the GitOps tool to take care of that state and apply it in Kubernetes. Moving forward: OIDC-based authentication. Some users work in startups where maybe three or four people work in the whole company. Why bother configuring OIDC-based authentication? They may simply not be interested. And finally, the UI and CLI. The great majority of people like the UI and CLI that Argo CD provides.
But sometimes it gets in the way, because some people think it's too powerful, and they prefer not to make it available to the development team; they would prefer to have it not available at all. These are all valid use cases, and we heard them. But honestly, for us it was a little bit disappointing, because Argo CD already addressed those use cases. Argo CD version 2.1 was released in July 2021, and that's the first version that shipped with Argo CD Core. We recognized one major issue internally: the documentation we provided was very, very minimal, so it wasn't really guiding users on how to configure this feature or what to expect from it. But this is a problem we fixed. Last Friday we merged a PR in the Argo CD repo, so the official Argo CD documentation now has a dedicated page with everything we're going to be talking about here today. Cool. OK. As I said, before I jump into Argo CD Core, I want to give you a quick overview of the Argo CD component architecture, and here I'm presenting a component diagram. This diagram is grouped into four different logical layers. Why do I say logical layers? Mainly because they don't exist in reality; their main purpose is to make the diagram easier to read. The first logical layer on top is the UI, then right below it the application layer, then the core layer, and then the infrastructure layer. The way to read it is that dependencies always go top down, never bottom up. What do I mean by that? Components from the upper layers can depend on any of the components in the layers below them, but the reverse isn't true. An example that is very easy to understand: imagine if Redis, for example, had a dependency on the Argo CD API server. It doesn't make a lot of sense. So the dependency never goes bottom up, always top down. OK.
Real quick, let's go through each of those components and the responsibility they have inside the Argo CD architecture. Starting from the upper layers: in the UI layer we have the web app and the CLI. I don't think I need to say a lot about these; they're self-explanatory. The web UI is the web interface you have probably already interacted with, and the CLI is the Argo CD CLI. Those two components depend on the Argo CD API server, which is the component we have in the application layer. The Argo CD API server's main responsibility is to orchestrate all the functionality exposed by those two components on top: everything in the Argo CD UI and in the CLI is powered by the Argo CD API server. Cool, it orchestrates things. Then right below it we have the core layer, where I'm presenting the application controller, whose main responsibility is to reconcile the Application CRD. It does a little more than that, but to keep things simple I'll just mention the Application CRD. Basically, the Application CRD is what gets things applied in Kubernetes. Then we have the application set controller, which is responsible for reconciling the ApplicationSet resource, which in turn creates Application resources; it's just a way to automate things more nicely. In the same layer, we have the repo server. The repo server's main responsibility is to interact with Git: it extracts the desired state from Git and, in the case of Kustomize or Helm, generates the manifests and sends them back to whoever executed that operation; the application controller and the application set controller both depend on the repo server. At the bottom layer, I'm presenting the components that the default Argo CD installation depends on. These are not components that we develop ourselves; they're just part of the default installation. So at the bottom we have Redis, which provides a level of caching.
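To make the Application CRD less abstract, here is roughly what a minimal Application manifest looks like. This is a sketch for illustration; the repo URL, paths, and names are placeholders, not taken from the talk:

```yaml
# Minimal Argo CD Application: "deploy what's in this Git path to this cluster".
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: guestbook            # placeholder name
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/deployments.git  # placeholder repo
    targetRevision: HEAD
    path: guestbook          # directory in the repo holding the manifests
  destination:
    server: https://kubernetes.default.svc   # the cluster Argo CD runs in
    namespace: guestbook
  syncPolicy:
    automated:
      prune: true            # delete resources removed from Git
      selfHeal: true         # revert manual drift in the cluster
```

The application controller watches resources like this one and continuously reconciles the cluster against the Git state they point to.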
Argo CD relies on Redis to avoid hammering the Kubernetes API server, which is a good idea. Then the Kubernetes API server itself: as I said, we have to apply things in Kubernetes, so we have to rely on it; it's another component we depend on. Git, obviously, because that's where we get our state from. And Dex, which is the component responsible for authentication. Cool. There are two aspects I'd like to mention about this component architecture. The first one is modularity. Someone, for any given reason, could rewrite the repo server if they wanted to. As long as the new repo server exposes the same interface that the current repo server does today, it would be very, very easy to replace it without requiring any change in the components that depend on the repo server. That's the modularity we get from this component architecture. The other aspect I wanted to highlight is this: if I'm not really interested in a group of functionality that a given component exposes, I can just shut it down or not deploy it at all. And that's exactly what Argo CD Core is. Argo CD Core is nothing more than a special Argo CD installation that skips deploying the components whose functionality isn't required for those use cases I explained at the beginning. OK? With this characteristic we obviously have less functionality available, but the components that remain will still be fully operational. OK? All right, so what is it, and how can someone install Argo CD Core? In the Argo CD GitHub repo, we have a folder for the core install. There you can find a Kustomize project, and in this Kustomize project I'd like to highlight the list of resources you're seeing here, which is what this specific kustomization will install at the end of the day.
The CRDs, the RBAC resources required by the controllers, the configurations, and then the four components I showed in the previous slide: the application controller, the application set controller, the repo server, and Redis. Cool? OK, so enough theory. This is the point where Alex joins the presentation and shows how this thing works in real life. Over to you, Alex. Thank you for a great overview of the problem and for explaining how Argo CD Core can help. The next part of the presentation is going to be a bit more practical: we will have a demo. There is one known issue with every Argo CD demo: it's usually fully GitOpsed, and there is nothing to do. Basically, you just apply a set of manifests, and then Argo CD takes over and performs everything that has to be done. This is why I still have two slides to explain what we are going to implement and the idea behind it, and then we can switch to the demo itself. Long story short, there will be two parts. First, I want to show you Argo CD Core in action. The reason is that Argo CD Core is a kind of headless Argo CD installation, but it's not just a YAML file that installs fewer components. There are also features, and those features help you replicate the experience of the full Argo CD without actually running the full Argo CD. I will show you that. Next, we will talk about how to use Argo CD Core practically. The reason for this topic is that Argo CD Core is meant mostly for Argo CD administrators. Administrators use it to bootstrap clusters, and I will try to address the config management problem. By configuration, I mean the applications that need to be configured in Argo CD so that it knows what exactly you want to deploy.
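For reference, consuming that core-install Kustomize project from your own kustomization.yaml can look roughly like this. The version tag below is an example, not the one used in the demo; pin whatever release you actually run:

```yaml
# kustomization.yaml — pulls the Argo CD Core manifests from the official repo.
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: argocd
resources:
  # Example ref; replace v2.6.0 with your target Argo CD version.
  - https://raw.githubusercontent.com/argoproj/argo-cd/v2.6.0/manifests/core-install.yaml
```

Applied with something like `kubectl create namespace argocd && kubectl apply -k . -n argocd`, this brings up only the controllers, repo server, and Redis, with no API server, UI, or Dex.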
Because Argo CD Core is used by administrators to bootstrap clusters, it is typically installed in the managed clusters themselves, which makes things a little difficult for administrators, because now they need to jump back and forth between multiple clusters and use kubectl to configure Argo CD. What makes the problem even worse is that a typical cluster administration team manages way more applications than the average application developer team. Instead of managing five applications, a cluster administrator might manage, let's say, a hundred, which pretty much means using kubectl is not an option. We want to offer a solution to this problem, and in particular, we want to leverage GitOps, as usual. On this slide, you can see a Git repository that proposes a convention allowing administrators to make only Git changes. The administrator would bootstrap Argo CD once and then rely on a component called the Argo CD application set controller to reconfigure Argo CD every time this Git repository changes. To make it a little clearer, we broke the convention we are trying to implement down into two use cases. The first use case is called cluster addons. Cluster addons refers to a homogeneous set of applications that needs to be installed into each and every cluster. That's a super common use case: administrators usually provide a set of critical components installed in every cluster and offer them as a service to developers. Those components usually include things like Grafana, Prometheus, Argo CD itself, and so on. We propose to solve the cluster addons use case using this base directory, and I will try to quickly explain the idea. What we want to do is let the administrator bootstrap a cluster once. In that bootstrap configuration, we have an application set that continuously watches this repository and looks for subdirectories under the base directory.
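The repository convention described here can be pictured roughly like this; the exact folder names are my reading of the talk, and the real demo repo linked from the QR code is the authoritative version:

```
repo-root/
├── base/               # addons installed on every cluster
│   ├── argo-cd/        # Argo CD manages itself, GitOps-style
│   ├── grafana/
│   └── traefik/
└── groups/             # per-group overrides, one subdirectory per cluster group
    └── test/
        └── traefik/    # present only while test clusters run a different Traefik
```

A subdirectory under `base/` becomes an application on every cluster; a matching subdirectory under a group overrides it for clusters in that group.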
Every time it notices a new directory there, it creates an Argo CD application with the matching name and uses the manifests from that folder to serve that application. So imagine you have a hundred clusters: you bootstrap all of them once, and then you can forget about those clusters and simply make changes in one repository, introducing folders or removing them, and the application set will keep reconfiguring itself, adding and deleting applications. I assume this use case is useful for everyone, but we heard from our users that in real life there is a little bit of a complication. Typically, yes, you install the same set of applications everywhere, but those applications rarely have exactly the same configuration. We tried to name this use case; I call it cluster groups. The idea is that we often want to logically group our clusters. One example: you might want to split your clusters into test clusters and production clusters, where the test clusters get new versions and upgrades a little earlier than production. And obviously, we do not want administrators to kubectl into a dozen test clusters, reconfigure them one by one, and then revert the temporary configuration. We want to leverage Git for this as well, and this Git repository has room for this use case too. Here we have a groups directory that represents groups of clusters; each subdirectory is a group. The idea is that administrators can bootstrap all test clusters with the manifests from the test group. Next, let's say the administrator wants to upgrade the Traefik component. To do so, the admin would create this directory, and the name is important here: it has to be named traefik. The application set should then notice it and switch the Traefik application in all test clusters to this folder. And basically, if the administrator does a good job and puts appropriate manifests there, Traefik will be running the new version. So, is that enough talking?
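The "one application per subdirectory" behavior described above is what the ApplicationSet Git directory generator does. A sketch of such an ApplicationSet, with a placeholder repo URL (the demo's actual one is in the linked repo), might look like this:

```yaml
# Hypothetical ApplicationSet: creates one Application per folder under base/.
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: cluster-addons
  namespace: argocd
spec:
  generators:
    - git:
        repoURL: https://github.com/example/cluster-addons.git  # placeholder
        revision: HEAD
        directories:
          - path: base/*        # each subdirectory becomes one Application
  template:
    metadata:
      name: '{{path.basename}}' # application named after the folder
    spec:
      project: default
      source:
        repoURL: https://github.com/example/cluster-addons.git  # placeholder
        targetRevision: HEAD
        path: '{{path}}'
      destination:
        server: https://kubernetes.default.svc
        namespace: '{{path.basename}}'
      syncPolicy:
        automated:
          prune: true           # removing a folder deletes the application's resources
```

Adding or deleting a folder in Git is then all it takes to add or remove an addon everywhere the ApplicationSet is installed.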
We can see it in action. Let me switch to screen mirroring so you can see. Awesome, you can see my screen. Let me stop this and stop this. Okay, I hope you can see my screen well. What I was going to say is, I hope you followed me. If not, don't worry, because we have this awesome Git repository. It describes everything I just said and implements the idea I was proposing. I hope you had a chance to take a screenshot, and the presentation itself will be available on the talk's page. Let's go ahead and open the repo. A few words about the repository: as I said, it's kind of a repeat of the same presentation I described. It has a set of steps explaining what we are going to do, and it has a list of things you need before you run this demo. It's not a very long list: kubectl, kustomize, the argocd CLI, and some Kubernetes cluster. It doesn't really matter what kind of cluster; I'm using k3s, and you could use kind or minikube. That's it; we can go ahead and do the demo. I already gave you a heads-up: this demo is super GitOpsed, and there are not many things to do; it's just to show. I need to run one command. I will run it really quickly, and then I will explain what I just did. So I applied a set of manifests and bootstrapped my k3s cluster with the base set of manifests, meaning it's not part of any group; it uses just the base configuration. This base configuration is represented by the manifests in the base directory of my Git repository. Here we have three folders: one for Argo CD itself, one for Grafana, and one for Traefik. So the expectation is that my cluster should magically get three applications, including Argo CD itself, and there will also be Grafana and Traefik. And as I was describing — oh, let me actually show you what the Argo CD installation is.
The Argo CD installation is described in this kustomization.yaml file. As Leo mentioned, in the official repository we have a set of manifests specifically for Argo CD Core, and this is what I'm using in my kustomization.yaml file. On top of that, I also have an application set that implements the magic logic I was trying to describe during the presentation. I will quickly show you the application set. It has around 51 lines of YAML, so it's a little complex, but trust me, it's not that difficult to understand. We don't have time to learn exactly how it works, but it's a pretty powerful component: if you want to improve this convention-driven setup, you can definitely do it and leverage the application set to do the work. So we've got the idea of what we just installed; let's see if it actually worked. I'm switching back to my cluster; I hope you can see my screen. We installed the base, so this is core Argo CD: there is no UI and no API. The best we can do by default is use kubectl to get the list of applications. We've got three of them, which is good, so the application set did its work and created a bunch of applications. One is broken for some reason: Traefik has an unknown status. Next, we want to figure out why it broke, so we can try to use kubectl again to get the YAML output. It works, and I already know what is broken, but this is not the best way to explore what's happening in Argo CD. Now it's time to teach you about one of the features that supports Argo CD Core. The argocd CLI has a magic option: you can teach the CLI to assume that there is no API server, and that it just needs to use Kubernetes itself to get the metadata from Argo CD. To do it, you run the argocd login command that is probably already familiar to you, but instead of giving an API server URL, you provide this flag: --core.
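The core login flow described here boils down to a couple of commands. This is a sketch based on the Argo CD Core docs; it assumes your current kubeconfig context points at the cluster where Argo CD Core is installed:

```sh
# The CLI in core mode reads Argo CD state from the cluster directly,
# so it needs the right kubeconfig context and namespace.
kubectl config set-context --current --namespace=argocd

# Log in with no API server URL; --core tells the CLI to talk to Kubernetes itself.
argocd login --core

# From here, the usual commands work without any Argo CD API server running:
argocd app list
```

In core mode the CLI is authorized purely by your Kubernetes RBAC, which is exactly the "no proprietary RBAC" property discussed earlier.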
And that's it. The CLI knows it needs to talk directly to Kubernetes, and now I can use a slightly more powerful, CLI-based UI to get information about applications. For example, I can list my applications and get more meaningful output. Traefik was broken, and we can check more details about it; hopefully this makes it a bit easier to understand what's happening with Traefik. But it's still not perfect, because there is also a user interface. Now it's time to show you how to get access to the UI, assuming we have no API server, and there is basically not much to do. You just run the argocd admin dashboard command. Once executed, it starts an API server on your localhost, and that API server is configured to use no RBAC except Kubernetes RBAC. So if you, as an administrator, have access to the cluster that runs Argo CD, you're able to access this user interface, and there is nothing to configure. Here I just opened the UI running on localhost. It has three applications: Argo CD itself, Grafana, and the broken Traefik. I know why it broke, and I knew it would happen before the demo. The reason is that when we bootstrap Argo CD itself along with all those applications, Argo CD needs time to start, and the application will remain in a broken state for about three minutes, or I can just click the refresh button to fast-track it. So I told Argo CD to check the application one more time; it did, and it immediately synced Traefik. That's it: now we have Argo CD with no UI deployed, but I'm able to use it just like a normal Argo CD, through the CLI and the UI. Now it's time to see if we can solve the second use case, which is the grouping of clusters. For the sake of time, I decided not to introduce any new applications or overrides. Instead, I'm going to re-bootstrap my cluster and assign it to the cluster group that I'm calling test.
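The local-dashboard step is a single command. Again a sketch under the same assumptions as above (kubeconfig access to the cluster, argocd namespace):

```sh
# Starts a throwaway API server + web UI on localhost, backed directly by
# the cluster; authorization is just your kubeconfig's Kubernetes RBAC.
argocd admin dashboard --namespace argocd
# The command prints the local URL to open in your browser.
```

Because the server only listens on localhost and dies with the command, there is no exposed endpoint to protect, no SSO, and no Argo CD accounts to manage.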
Here I have an override for the Grafana application, and what that means is that Grafana is now going to use a different set of manifests, specific to the test cluster. To make it a little more self-explanatory: here is my Grafana application. It has a path in my Git repository; currently it points to base, which proves that it's using the base set of manifests. Next, I just need to re-bootstrap my cluster and assign it to test. To do so, I apply the test manifests of Argo CD, and then Argo CD and the application set will know that the cluster belongs to the test group. If we switch to the user interface, you can notice that Grafana was doing something: it was upgraded to a new version, and the path changed to test. And that's pretty much it; we solved the use case. Now, as an administrator, I pretty much never have a reason to access my Kubernetes cluster to reconfigure it and create or delete applications. I can do all my operations in Git, and I no longer need to run the user interface, so I don't need to protect it; I don't have to set up any groups or any SSO authentication and so on. I hope you enjoyed the presentation. We really value your feedback, and this is your chance to give it, so please scan this QR code to access the page that lets you provide feedback, and please ask us any questions if you have any. Thank you. I'm supposed to, I guess, choose who goes first. Yeah, go ahead, please; I think you had a question. Sorry, we don't have a mic, so I will repeat it. The question was: to understand the purpose of Argo CD Core versus the usual Argo CD, would you recommend using Argo CD Core in production to have less surface to secure and protect? Basically, the idea of Argo CD Core is that we recommend using it for cluster bootstrapping.
If you want to serve multiple application developer teams, I would still recommend a centrally managed Argo CD, because that serves application developers better: they will have a single pane of glass showing their applications across multiple clusters. Then, as an admin, you might choose to use Argo CD Core, installed in the managed clusters, to manage system add-ons, the cluster add-ons. Thank you. That's because you mentioned that the advantage is not having the burden of SSO, but otherwise SSO can still be used? It fully depends on the team and whether the target is a bunch of developers or only admins, yes. I think cluster administrators are the best example of a team who already has full access to all the clusters, and this is who we hear from a lot. Those people are basically explaining: hey, we don't need SSO, we have cluster access, but we just want a tool to manage clusters. Yeah. Any more questions? Thanks for the demo. I have a question: how do you solve dependencies between two components when you're installing a cluster? Just to give you context, we have a cluster with, like, 10 to 15 different components, like cert-manager and tons of others, and sometimes there are dependencies between the components that you need to manage. We solved it differently, but I was trying to understand: if you use Argo CD for the installation, how do you do that? Yeah, that's a known problem, and there has been progress. The most recent version, 2.6, has a feature in application sets called the rolling sync strategy. You can literally define dependencies between applications, and the application set will respect them: it will sync the applications in the order you specify, waiting for the first wave to be healthy before moving on to the next one. The only caveat is that it's an alpha feature; it's been made available, and we're waiting for feedback from users.
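The wave ordering described in this answer corresponds to the ApplicationSet progressive-syncs feature. A sketch of the shape of that configuration, based on the 2.6 alpha API as I recall it (the labels and repo URL are placeholders), might look like:

```yaml
# Hypothetical ApplicationSet with ordered syncing: applications labeled
# wave=1 must be healthy before wave=2 applications are synced.
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: addons-ordered
  namespace: argocd
spec:
  strategy:
    type: RollingSync
    rollingSync:
      steps:
        - matchExpressions:          # wave 1: e.g. cert-manager, CRDs
            - key: wave
              operator: In
              values: ["1"]
        - matchExpressions:          # wave 2: everything that depends on wave 1
            - key: wave
              operator: In
              values: ["2"]
  generators:
    - list:
        elements:
          - app: cert-manager
            wave: "1"
          - app: grafana
            wave: "2"
  template:
    metadata:
      name: '{{app}}'
      labels:
        wave: '{{wave}}'             # the label the steps above match on
    spec:
      project: default
      source:
        repoURL: https://github.com/example/cluster-addons.git  # placeholder
        targetRevision: HEAD
        path: 'base/{{app}}'
      destination:
        server: https://kubernetes.default.svc
        namespace: '{{app}}'
```

Since the feature is alpha, treat the exact field names as subject to change and check the ApplicationSet documentation for your version.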
We're going to fix bugs, and once we get enough feedback, we'll mark it as ready for production. And the other — yeah, thanks for that, now we know. The other thing: you said that you use a controlled group; let's say if you want to upgrade Grafana without changing the configuration for all the clusters, then you use a different group and point the application set to choose from that path. Just a suggestion, or maybe I just want to validate my thought: if you have everything in a base, but then on top of the base you have another folder called patch, and in the patch you have, say, a Grafana folder containing just the resource YAML with the one thing you need to override for the upgrade, could you not point the exact path to that patch from the get-go? Maybe I misunderstood the question, but let me try to answer, and correct me if I got it wrong. What I was trying to demonstrate is, I think, exactly that: the application set assumes that everything in base is the base, and then whatever is in the test group is only patches. So if you introduce, let's say, a patch for Traefik, it won't touch anything else; it will only update Traefik. What I didn't implement, but which is implementable, is the deletion of applications. It would require a slightly more complex application set, but it's 100% possible, yeah. Yeah, no worries, thanks. Thank you. Any more questions? Hi, and yeah, thanks for the presentation. A question about multiple clusters: is there a preferred way to manage them? Is it multiple Argo CD installations, a central installation, or anything else, like a control plane? I think there is no officially preferred way in the community, but I can tell you my opinion. From experience, we worked with a team, and it was preferable to have a single cluster that manages multiple clusters.
It worked well for us because we had already solved the authentication problem anyway, and it just fit well into the way we were using Argo CD. At the same time, it was a pain to scale, because that was like a monster Argo CD: it was consuming about 40 gigabytes of memory and managing around 350 clusters. If I were starting from the beginning without that context, I would prefer to install Argo CD Core into each and every cluster and manage it the way I just described. But with the context that we had, the central setup just fit better into the workflow. I guess I didn't really answer the question: both ways work, and the only hint is that if you choose a centrally managed Argo CD, you will have to deal with scalability issues, because it implies that this one instance has to manage a lot if you have many clusters. Yeah, and there is the single-point-of-failure argument as well. Yeah, that's true too. Okay, thank you. Thank you. I don't think we have more time for questions, but we have one plushie to give away. Anyone interested? Okay — you were the first. Thanks everyone.