We're going to get started in a couple of minutes. Thank you for coming. Okay, great. Let's get started. Hello, and welcome to my talk on multi-tenancy for Argo workflows and Argo CD at Adobe. I'm Srinivas Maladi, a software engineer at Adobe who works on the internal developer platform. Before I get started, I do want to give credit by emphasizing that the work I'll be presenting today is not just a product of my own efforts, but a combined product of the efforts of many of my colleagues at Adobe. So with that, let's get started.

This is our agenda for today. We'll start with some background about what my team does, some terminology referenced in this presentation, and an architecture overview, before we dive into how we address multi-tenancy for Argo workflows and Argo CD.

Let's get started with the internal developer platform. Adobe has three main categories of product offerings: Document Cloud, Creative Cloud, and Experience Cloud, which are themselves comprised of different services and products developed by various internal teams at Adobe. These products and services may be deployed and run using different internal platforms that serve their specific needs. And in order to build, deploy, and run successfully, these different platforms leverage certain core mechanisms, infrastructure tooling, and services that are provided by the internal developer platform. As part of its offering, the internal developer platform (IDP) uses and provides access to resources on various cloud providers like AWS, Azure, and Adobe data centers. The internal developer platform standardizes best practices and consolidates engineering efforts across the various internal developer teams at Adobe, while providing a CI/CD experience that remains flexible for the different use cases you see on the screen here.

Next, let's go over some terminology that we'll be referencing for the rest of the presentation. GitOps: GitOps is an architectural paradigm where the desired system state is first defined and tracked in Git by some tooling. That tooling then deploys the defined state to the live state on a running system. The GitOps tooling regularly synchronizes the two states, so the live state automatically picks up any changes made in Git. Argo CD is an example of one such GitOps tooling. It supports tracking Kubernetes manifests in Git, and supports their deployment and synchronization to, for example, a namespace on a cluster. Argo CD is implemented as a Kubernetes controller and uses various CRDs. One of them is called an Argo CD application, which stores information about what defined state to track, where to track it, and where to deploy it. Here's an example of what that looks like: the YAML manifest for an Argo CD application. As you can see, there are sections in the manifest defining the source and the intended deployment destination of the defined state. Argo CD itself supports automated deployment, self-healing, complex rollout strategies, monitoring, and much more. Through the Argo CD UI, which we will discuss more later, Argo CD also visualizes the health and deployment status of a developer's resources wherever they're deployed on the cluster. Argo workflows: Argo workflows is a workflow engine that can run CI/CD pipelines, among other things, on a Kubernetes cluster.
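To make that concrete before going deeper into workflows, here's a minimal sketch of an Argo CD application manifest of the kind described above. This is a hedged illustration, not Adobe's actual configuration; the repository URL, path, and names are all placeholders.

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: example-app                # hypothetical application name
  namespace: argocd                # namespace of the Argo CD installation
spec:
  project: example-project         # every application belongs to an app project
  source:                          # the defined state to track
    repoURL: https://github.com/example-org/example-deploy.git
    targetRevision: main
    path: manifests
  destination:                     # where to deploy the defined state
    server: https://kubernetes.default.svc
    namespace: example-namespace
  syncPolicy:
    automated:
      selfHeal: true               # automated deployment and self-healing
```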
There are many different kinds of workflows, but generally a workflow can be thought of as a set of tasks. Each task runs in its own pod and can be modeled with dependencies on other tasks. The resulting workflow can be templated and defined using YAML files like the one on the screen, which demonstrates a directed acyclic graph (DAG) type template; it's an example I pulled online, and a minimal sketch appears at the end of this section. The use of templating opens the door to sharing a template to be used as a sub-template in other workflows. Since a workflow is implemented as a Kubernetes CRD and each task runs in a pod, workflows natively integrate with existing Kubernetes objects such as volumes, config maps, secrets, and much more.

Now let's get a better understanding of the CI/CD offering that IDP at Adobe provides, and what it looks like with Argo workflows and Argo CD working together. We start with the hub cluster, where we have Argo workflows and Argo CD installed. We also have client namespaces where client workflows run. Additionally, there are remote clusters with remote client namespaces that run client applications, alongside any Kubernetes resources needed to support them. These remote clusters are usually pulling an image from some registry to run as part of the deployment. Following GitOps, the defined state for all of this is stored in Git. This includes, of course, the client repositories: the application repository with the application code, as well as a deploy repository. The deploy repository contains CI/CD resources as well as manifests for the Kubernetes resources deployed to the remote namespace. So you have your service manifests and, of course, your workflow manifests, which are the CI/CD resources themselves. These repositories may also refer to other IDP-owned resources, which we wire in through Helm; we'll talk more about that later as well.

So how do these pieces connect? To start with, Argo CD is always tracking the client deploy repository manifests, constantly deploying the workflows to the hub cluster as well as the actual resources to the remote cluster. With this setup, any change made to the application code triggers a workflow inside the client hub namespace. Specifically, this requests a workflow from the running Argo workflows controller, which executes the workflow in the namespace that requested it. One of the steps in the workflow is a build step, which builds and uploads an image to the image registry we mentioned earlier. A later step takes that tag, or whatever information identifies that build, and enters it into the service manifest so that it's recorded in Git. As mentioned earlier, that state is constantly tracked by Argo CD, so the change is then deployed to the actual running state on the remote cluster. The running state now knows the new image tag and pulls the image from the registry it was pushed to in the first place by the workflow. In this way, Argo workflows and Argo CD are used to allow our clients to run CI/CD pipelines on the cluster, with a focus on GitOps.

So now that we've gone over the general walkthrough and the general architecture, let's take a look at how we make this multi-tenant.
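First, the DAG-style workflow template sketch promised above. Again, this is a minimal, hedged example rather than a real pipeline; the task names and container image are illustrative.

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: dag-example-       # hypothetical workflow name prefix
spec:
  entrypoint: main
  templates:
    - name: main
      dag:
        tasks:                     # each task runs in its own pod
          - name: build
            template: echo
          - name: test
            dependencies: [build]  # runs only after build completes
            template: echo
          - name: deploy
            dependencies: [test]
            template: echo
    - name: echo                   # a shared sub-template reused by every task
      container:
        image: alpine:3.18
        command: [echo, "running task"]
```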
So, multi-tenancy: it's an architecture where multiple clients share a single resource, in this case a running instance of Argo workflows and Argo CD. It can reduce waste by reducing idle time as well as the number of running instances, depending on how load and scaling are handled. It can save developer time, mainly because it allows teams to spin up faster: they don't need to spin up their own instance, because they're using something that already exists. It provides an opportunity to consolidate best practices, fixes, maintenance, and other updates, which reduces the need for domain expertise from developers or IDP clients. It's efficient, because every improvement or feature made to the shared resource is a force multiplier across every single team leveraging it. And because of all this, it scales really well: by reusing previous work on setup and maintenance, we reduce the friction it takes for a client to get up and get started.

Some of the questions we had to answer when addressing multi-tenancy with Argo workflows and Argo CD were: How do we make sure that clients can only access what's theirs? How do we isolate client workflows, processes, and deployments? What's the best way to get updates and fixes out to clients for resources that we don't control? And of course, how do we design for reliability as load increases?

Let's start with access for Argo workflows. This page shows the Argo workflows UI. It's exposed by the Argo server that's part of the workflows installation on the hub cluster, and it visualizes the client workflows that run inside client namespaces in our architecture. We can see here that workflows are being returned for a specific namespace that we entered, and that a workflow is currently in progress. After clicking into a workflow, a client can view workflow tasks and logs, as well as execute actions like suspending, retrying, or even deleting workflows. While this is great functionality to have, how do we make sure that clients are only doing this for workflows that belong in their own namespace on the cluster?

When a client wants to access a workflow resource through the UI, whether they can access that resource depends on two things: the client's group membership, and the presence of a service account in the namespace of the requested resource. To achieve this, we use single sign-on and the workflows namespace RBAC feature. Let's go through this step by step. After receiving a client's request to access a workflow resource, the Argo server first gets the group membership information for that client through the configured identity provider; in our case, that's Azure Active Directory. The Argo server then checks whether a service account is associated with any of those groups inside the namespace of the requested resource. If there is one, and the service account has the necessary permissions to access that resource, access is granted. If not, access is blocked. This access flow allows us to scope client access down to workflow resources at the namespace level. And it's highly scalable, since it leverages Kubernetes RBAC through service accounts. This enables the many-to-many relationship between our clients and namespaces on the hub cluster.
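Here's a rough sketch of the two pieces of that flow: the SSO configuration on the Argo server side, which lives in the workflow-controller ConfigMap per the upstream Argo workflows docs, and an annotated service account in a client namespace, whose annotations are detailed in the next paragraph. This is an assumption-laden illustration, not Adobe's actual setup; the tenant ID, secret names, URLs, and group name are placeholders.

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: workflow-controller-configmap
  namespace: argo
data:
  sso: |
    issuer: https://login.microsoftonline.com/<tenant-id>/v2.0  # Azure AD issuer, illustrative
    clientId:
      name: argo-sso-secret        # hypothetical secret holding OIDC credentials
      key: client-id
    clientSecret:
      name: argo-sso-secret
      key: client-secret
    redirectUrl: https://argo-server.example.com/oauth2/callback
    scopes:
      - groups                     # request group membership from the IdP
    rbac:
      enabled: true                # namespace RBAC via annotated service accounts
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: group-a-access             # hypothetical service account name
  namespace: client-namespace      # the client's hub namespace
  annotations:
    # only clients whose IdP groups include group-a may use this account
    workflows.argoproj.io/rbac-rule: "'group-a' in groups"
    # breaks ties when multiple service accounts match a client
    workflows.argoproj.io/rbac-rule-precedence: "1"
```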
The namespace RBAC feature that comes with Argo workflows requires that the service account is annotated with the authorized groups that are allowed to leverage it for access to resources in the hub cluster. You can see this in the example manifest here, which specifies that only clients in group A can use the service account for access. The annotation takes a list, so you can add more groups. There's also a precedence annotation, mainly used to break ties when multiple service accounts match a client.

The Argo workflows command line interface, or CLI, is also exposed by the Argo server, but the mechanism for scoping access here is slightly different. This time, instead of the identity provider, the Argo server delegates authentication for the CLI to the Kubernetes API server; this holds both in our architecture and in the general Argo workflows architecture. The client includes a bearer token in their request, which may be picked up automatically from a Kube config file exported locally, or passed explicitly as a token on the outgoing CLI request. The token is validated by the Kubernetes API server, and if the token is valid and carries the necessary permissions for the requested resource, the request is granted. Here's an example of a successful CLI request, querying for my own workflows: as you can see, this is my own namespace, and I'm getting back my workflows. And here's an example of what happens when I try to query for workflows that I do not have access to using that flow.

Now that we've covered how we scope access in the workflows UI, let's take a look at the Argo CD UI and how it does the same. The Argo CD UI visualizes applications that track and deploy state defined in Git. The goal here is that clients should only be able to view their own Argo CD applications through the UI. This is because access to an Argo CD application also provides access to the visualizations of the associated deployed resources, and may include the ability to sync that resource or even delete the underlying deployed state. In Argo CD, the UI and CLI use the same mechanism to scope client access to resources like Argo CD applications, which, similar to workflows, are exposed by the Argo CD server. Argo CD also has other resources in addition to Argo CD applications, such as logs, connected repositories, connected clusters, certificates, and app projects. The app project resource is important, as we'll see later, because it can be used to scope or group Argo CD resources, which can then be used to scope access. Some, but not all, Argo CD resources can be grouped together into an app project; access to those resources can then be scoped to clients who have access to the parent app project itself. For resources that cannot be scoped to app projects, access is usually defined using a global config map. We generally don't use global RBAC in our architecture for client access, because it can act as a single point of failure and is prone to typos. The app projects, on the other hand, are really great because, by design, they cannot be used to provide access to resources that don't fall under them, making them ideal from an access-scoping perspective.
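Going back to the CLI flow for a moment: the bearer token usually rides along in an exported Kube config. Here's a minimal, hypothetical sketch of what such a file can look like; the cluster URL, names, and namespace are placeholders, and the token value is elided.

```yaml
apiVersion: v1
kind: Config
clusters:
  - name: hub-cluster
    cluster:
      server: https://hub-cluster.example.com   # illustrative API server URL
contexts:
  - name: hub-cluster
    context:
      cluster: hub-cluster
      user: client-user
      namespace: client-namespace               # scoped to the client's namespace
current-context: hub-cluster
users:
  - name: client-user
    user:
      token: <bearer-token>                     # validated by the Kubernetes API server
```

With this exported, a plain `argo list` from the CLI authenticates transparently, which matches the client experience described in the Q&A at the end of this talk.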
After setting up project scoping, a client can request access to a resource, and the Argo CD server gets the client's group information from the configured identity provider, which again, for Argo CD, is Azure Active Directory. The Argo CD server then checks if any of the client's groups are authorized by the parent project of the requested resource. If so, the request is granted; if not, the request is blocked. This is an example of what an Argo CD app project that we use looks like. You can see that the project definition includes the authorized groups and their associated role permissions over the resources in this project. When setting up scoped client app projects, we usually set up three roles by default out of the box: an admin role, a read-only role, and an automation role for external access to Argo CD applications, for example from Argo workflows.

So now that we've finished covering how access is configured for Argo workflows and Argo CD, let's go over how isolation is achieved for the respective tools, beginning with Argo workflows. As mentioned, each Argo workflow runs in a pod, and they run in client hub namespaces. So far, we've ensured that clients can only access their own resources, but how do we make sure that they can't access other resources through their workflows, and how do we protect their workflows from other clients?

First, we require that all clients bring their own secrets to the namespace. This scopes workflow access automatically, through scoped Git tokens, artifact credentials, Argo CD automation tokens, and any other secrets that we pull from Vault. Next, we leverage the lack of shared secrets to configure individual client artifact repositories, and this adds another layer of isolation for the artifacts that are passed between steps in a workflow. With regards to the executor that actually runs the containers inside the workflow pods, we allow the recommended Emissary executor by default, as well as the Kubernetes API executor. The latter requires some network policies, but we lock that down by using default network policies to control ingress and egress. When triggering workflows, we specify a service account, one that is not the default service account, to run under. This defines the permissions the entire workflow, or rather each task, runs under, and also helps us be intentional about what permissions we're giving to the workflow itself. We also have cluster-level pod security policies that limit pod capabilities, enforce default seccomp profiles, and prevent privileged pods from even being spun up in client namespaces. To accommodate this, we use Kaniko to build, because Kaniko does not require any special privileges in order to build our client images, and we also use multi-stage builds in addition to that. For clients that do require privileges, for example Docker-in-Docker, we offer off-cluster builds through CodeBuild. This gives them those privileges while maintaining the kind of isolation we want in our workflows, and it's directly integrated with the Argo workflow step that we use. The result of these layers is that client workflows and the underlying pods are isolated to the client hub namespace in which they're running, which prevents their resources from being accessed by other clients, and also prevents them from accessing other clients' resources.
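On the network policy point above, here's a minimal sketch of the kind of default-deny policy that locks down ingress and egress in a client namespace. This is an assumption on my part about the shape of such a policy, and the namespace name is illustrative.

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny
  namespace: client-namespace   # applied per client hub namespace
spec:
  podSelector: {}               # selects every pod in the namespace
  policyTypes:
    - Ingress
    - Egress
  # no ingress/egress rules listed: all traffic is denied unless
  # another, more specific policy explicitly allows it
```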
With Argo CD, isolation has to happen at the Argo CD application level first. As mentioned previously, an Argo CD application is a CRD that can be used to deploy to specific namespaces on specific remote clusters. The Argo CD application is also configured to track Kubernetes manifests from a Git repository, in this case the deploy repository owned by the client. The deploy repository might itself refer to other repositories, for example Helm charts or Helm dependencies in Artifactory. So how do we restrict what a client, through their Argo CD application, can track and deploy to? The answer here, again, is Argo CD app projects. The Argo CD app project has to specifically allow tracking of the Git locations of its child Argo CD applications in order for those applications to be allowed to track them. Similarly, it has to specifically allow deployment to namespaces on clusters in order for the child Argo CD applications to be able to deploy to them. If an Argo CD application does not follow these rules, an error is thrown when attempting to create the application or put it inside the project. And by default, all Argo CD applications have to be inside a project. Finally, we also restrict Argo CD applications from creating any cluster-level resources, like creating or deleting namespaces, because that's not good when you're sharing a cluster with other clients. A sketch of these project restrictions follows below.

The other Argo CD component that we look at with regards to isolation is the Argo CD operator itself, running on the hub cluster. The operator connects to multiple remote clusters and deploys to them through a hub-and-spoke model, which means the operator running on the hub cluster is connected to, and deploys to, all of these possible remote namespaces on different remote clusters. So how do we ensure that we're appropriately limiting the permissions that the Argo CD operator runs with? We need to secure it so that, in case of a vulnerability, and in general, we limit the extent to which a client or a malicious actor can take advantage of Argo CD to interfere with other clients' resources deployed on the remote clusters. When registering a cluster, Argo CD first needs a service account on that cluster. By default, the registration process creates this service account in the kube-system namespace with admin cluster-level permissions. Argo CD is then given the token associated with that service account and uses it to manage and deploy to that cluster. Now, there are two things that could be wrong with this picture. The first is that, by default, the out-of-the-box registration process puts the service account in the kube-system namespace. As we mentioned earlier, you may be using pod security policies to prevent privileged pods from being spun up on your cluster, and a common practice is to exempt service accounts in the kube-system namespace so that they aren't restricted by those pod security policies. If that's true for you, then this default setup will allow the Argo CD service account to spin up privileged pods on your cluster, which means that clients, through Argo CD, might be able to spin up privileged pods under the default setup.
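As referenced above, here's a minimal sketch of how an app project constrains what its child applications can track and deploy to, combined with the default-deny behavior for cluster-scoped resources. The roles mirror the three defaults described earlier; all names, URLs, and group identifiers are illustrative, and the exact policies are assumptions rather than Adobe's real configuration.

```yaml
apiVersion: argoproj.io/v1alpha1
kind: AppProject
metadata:
  name: client-a                   # hypothetical per-client project
  namespace: argocd
spec:
  sourceRepos:                     # Git locations child applications may track
    - https://github.com/example-org/client-a-deploy.git
  destinations:                    # namespaces/clusters child applications may deploy to
    - server: https://remote-cluster.example.com
      namespace: client-a-namespace
  # with no clusterResourceWhitelist, cluster-scoped resources
  # (such as creating or deleting namespaces) are denied
  roles:
    - name: admin
      groups: [client-a-admins]    # authorized IdP groups
      policies:
        - p, proj:client-a:admin, applications, *, client-a/*, allow
    - name: read-only
      groups: [client-a-readers]
      policies:
        - p, proj:client-a:read-only, applications, get, client-a/*, allow
    - name: automation             # external access, e.g. from Argo workflows
      policies:
        - p, proj:client-a:automation, applications, sync, client-a/*, allow
```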
The second thing that could be wrong with the default registration setup is that Argo CD, by default, has admin cluster-level access, which isn't necessarily good: there may be namespaces on the cluster that have nothing to do with Argo CD deployments, and Argo CD shouldn't necessarily be able to deploy to, edit, or even view those namespaces and the resources inside them.

So in order to begin fixing these issues, let's start with the kube-system namespace issue. If you just change the namespace the service account is created in, then the pod security policies that restrict privileged pods on your cluster will apply as expected, and we've restricted Argo CD's ability to deploy privileged resources, and by extension the clients that use Argo CD. Now let's go ahead and change that admin cluster role to a read-only cluster role. Cluster-wide read access is still required by Argo CD, because Argo CD does a bulk read of the API whenever it deploys or syncs to a remote cluster; this check will error out if it does not have cluster-wide read access. Argo CD is generally fine with cluster read access, but it still needs admin access to the individual namespaces it deploys resources to. We achieve this through individual role bindings in each of those namespaces that give the service account write access, which limits the blast radius to just the namespaces that are supposed to be managed by Argo CD.

We can actually do better. We can remove secrets from the cluster-level read access, especially if you use some sort of Vault integration to provision Kubernetes secrets onto namespaces separately from Argo CD, so that Argo CD does not need to read secrets at the cluster level. We can also remove secrets from the write access it needs in each namespace, because again, Argo CD is not deploying secrets. But as mentioned earlier, how do we prevent the cluster bulk read from erroring out when it's trying to sync? We do that through exclusions: we tell Argo CD to ignore secrets on this cluster, or on all remote clusters, and it will, so you won't have any issues when syncing or any errors when deploying to a remote cluster. This setup is sketched below. The result of this isolation of the Argo CD operator is that we end up with a more secure hub-and-spoke model, and by extension, clients can't do as much on the connected remote clusters that are shared between them.

So we've explored access and isolation for Argo workflows and Argo CD, and we've seen that clients have various repositories involved, in line with the GitOps model that we follow. Clients have an application repository with the application code. They also have a deploy repository that contains service manifests for the actual running service and Kubernetes resources on the remote cluster, as well as workflow manifests for the CI/CD pipelines that run on the hub cluster. Unlike the shared instances of Argo workflows and Argo CD, these resources are client-owned. So how does IDP help distribute fixes, updates, and maintenance to these client-owned repositories, which can drift over time? Well, the answer is that we try to centralize what we can. For the application code, for example, we provide generation templates that can be used to help clients spin up more quickly.
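As referenced above, here's a rough sketch of the tightened operator permissions: namespace-scoped write access for the registration service account, plus the documented resource.exclusions setting in the argocd-cm ConfigMap so the bulk read skips secrets. This is a hedged sketch, not Adobe's actual manifests; in practice you'd also replace the cluster-admin role with an enumerated read-only ClusterRole that leaves secrets out, and all names and namespaces here are illustrative.

```yaml
# per-namespace write access limits the blast radius to managed namespaces
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: argocd-manage
  namespace: client-a-namespace    # one binding per Argo CD-managed namespace
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: admin                      # or a narrower role with secrets removed
subjects:
  - kind: ServiceAccount
    name: argocd-manager           # the registration service account...
    namespace: argocd-system       # ...moved out of kube-system
---
# argocd-cm: tell Argo CD to ignore secrets so the bulk read doesn't error
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cm
  namespace: argocd
data:
  resource.exclusions: |
    - apiGroups: [""]
      kinds: ["Secret"]
      clusters: ["*"]              # on all remote clusters
```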
For the CI/CD resources and the service manifests, we use our own repositories, and because everything is a Helm chart, we can connect them: Helm dependencies automatically pull in any changes that we publish to our centrally managed repositories. And we actually use semantic versioning to achieve this in a more seamless way. Helm supports semantic versioning, which is a system where version numbers convey meaning about things like backward compatibility and breaking changes. Through semantic versioning, client Helm charts are configured to automatically pull in minor or patch releases made on the IDP-owned repositories. Of course, minor and patch releases have to be backward compatible; that's what makes them minor or patch releases. Here are some examples of what such a release looks like. The first is a patch release that moves from 1.2.3 to 1.2.4, and the second is a minor release that moves from 1.2.3 to 1.3.0. Both releases, made to the IDP Helm charts on the right, would automatically be pulled into the client Helm charts on the left thanks to the semantic versioning integration. This is powerful because it allows us to get critical security updates, fixes, and best practices out to clients immediately, provided that these are all backward compatible.

Now, what do we do if a change that we would like to push out to clients is not backward compatible and contains breaking changes? Semantic versioning allows us to handle this through major version releases. These require clients to manually bump the version of the IDP Helm dependency they're using inside their own repositories. We usually try to include release notes on the breaking change, and once clients make the necessary adjustments, if any, to accommodate those changes, they can immediately pull in all the work and fixes we've made inside the dependency we control. Here's an example of what a major release change looks like, moving from 1.1.0 to 2.0.0. And here's an example of a client's Chart.yaml that shows how we use Helm dependencies with semantic versioning. We can see the name, version, and repository that the IDP-owned Helm dependency corresponds to. We can also see a caret to the left of the version number. In semantic versioning, this means "keep me updated to the latest compatible version," which roughly translates to: automatically pull in all minor and patch versions, but don't automatically pull in major versions; let me do that. This is what it looks like in a client's repository.

So as we build out a multi-tenant architecture, how do we prepare for the heavy load that may appear as we scale up? For high availability, the Argo workflows documentation is actually quite great, and it details improvements such as pod disruption budgets and minimum replica counts for the Argo server, which generally help harden the components inside the Argo workflows installation. As we onboarded more clients running workflows in different namespaces on the hub cluster, we noticed that it helped to mirror the common images used across workflows in the hub cluster. If all the workflows pull images at the same time, you quickly hit Docker Hub rate limits, or rate limits in general, so mirroring those images to our own internal repositories was quite useful for us.
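Going back to the Chart.yaml example described a moment ago, here's a minimal sketch of what such a client chart could look like. The chart and dependency names, version, and repository URL are placeholders, not the real IDP chart coordinates.

```yaml
apiVersion: v2
name: client-service               # hypothetical client chart
version: 0.1.0
dependencies:
  - name: idp-cicd                 # illustrative name for the IDP-owned chart
    version: ^1.2.3                # caret: pulls in 1.2.4, 1.3.0, ... but never 2.0.0
    repository: https://artifactory.example.com/helm
```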
We also increased the workflow executor's default resource allocations for CPU and memory, since these were limiting the size of the artifacts that clients were passing between workflow tasks, and we increased the default main container resource allocation to accommodate more resource-intensive workflows. The main container is the one that actually runs the task. On the other hand, having removed or increased those limits, we also needed to be able to limit what a client workflow itself could consume on the cluster. We did this through namespace-level quotas (sketched below) that limit the impact a single client can have on the resources shared by all the clients running workflows on the hub cluster. Argo workflows provides Prometheus metrics relating to workflow controller state and also supports custom metrics at the workflow or template level. Documentation on the default exposed metrics and how to create custom ones, along with example Grafana dashboards, is available online, and it's great.

We encountered similar challenges when scaling Argo CD, but for different reasons. The Argo CD documentation has well-documented high-availability configuration settings for each of the Argo CD components, and there's also a conveniently packaged HA release, which we actually use. As we onboarded more clients, we switched to using GitHub apps to register repositories. This was both because GitHub apps have higher rate limits than generic users or personal access tokens, which matters when Argo CD is cloning repositories to track state in Git, and because it reduces client friction: clients don't need to do anything except install the GitHub app on their repository, and since we already have the GitHub app credentials, we can use those to clone it. So that's one less step for clients as we scale up. We also looked into issues like limiting the ability of an individual client to overwhelm Argo CD resources, for example with a very large repository hitting the Argo CD repo server. And we looked into sharding Argo CD operators across different hub clusters to help distribute the load as we start scaling with more clients. Argo CD likewise exposes various well-documented Prometheus metrics for its different components, such as the API server and the application controller operations. These are great for monitoring, and examples of Grafana dashboards are available online. Here you can see our own Grafana dashboards that draw on the metrics available for Argo workflows and Argo CD; these really do help as you scale up, because you can catch problems early and debug them.

So to recap: multi-tenancy in Argo workflows and Argo CD poses similar challenges, but they're addressed differently, because these are different tools with different features. We saw how these existing features are leveraged today at Adobe, with GitOps, for a multi-tenant architecture that scales well with more clients, keeps clients isolated from each other, provides them with scoped access to their own resources, and helps us maintain a distributed support framework that lets clients continue to accelerate.
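As referenced earlier, here's a minimal sketch of the kind of namespace-level quota that caps what a single client's workflows can consume. The numbers are purely illustrative; real limits would depend on the workload.

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: client-quota
  namespace: client-namespace     # one quota per client hub namespace
spec:
  hard:
    requests.cpu: "8"             # total CPU requested across all pods
    requests.memory: 16Gi
    limits.cpu: "16"
    limits.memory: 32Gi
    pods: "50"                    # caps concurrent workflow task pods
```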
And this generally supports the ability of the internal developer platform at Adobe to create a CI/CD experience that remains flexible for different use cases and represents a force multiplier for all of our developer teams. Thank you for your time. Please provide feedback, thank you. Any questions? Feel free, yeah.

Hi, thanks for the great talk. Just a quick question about the authentication and Azure AD. Yep. I think in your diagram, I caught that you showed the user presenting a bearer token to workflows when they log in. I was curious, how does that authentication process work? Where did they get that token? How is the single sign-on done?

Yeah, let me actually go ahead and find that real quick. Yeah, so the token itself, that's for the CLI access. Oh, the question was: where do users get the token in order to access the Argo workflows CLI? Because this is the UI, but we showed it from a client's perspective. So where did they get that token? Two ways. They get it from their Kube config, which the internal developer platform provides; that's one way. They have this Kube config that's exported locally, so the client isn't even aware of it. They just go ahead and run Argo commands, which automatically use the exported Kube config, and it just works for them. That's one. The other way is, if they have any bearer token, a service account token, any Kubernetes token, they can include it in the CLI request as an additional flag. But usually the first case is the more common one, because most of our clients have Kube config YAML files exported locally that allow them to access the cluster, scoped to their namespaces. Yeah, great question. Thank you.

You mentioned that Argo CD doesn't create the namespaces. What creates the namespaces, and how do you manage those? That is a great question. It does not. We have a tool, as well as another team, that helps us standardize namespace creation. It's a provisioner tool that integrates with Argo CD and the GitOps model to create and manage those namespaces so that they're isolated from clients. That's a great question, thank you.

So I've seen people use the Argo CD operator as well as a management cluster pattern, where you're using one Argo CD instance to deploy to multiple clusters. I'm kind of wondering what your options analysis was, and maybe why you use the Argo CD operator, because I think it's a little bit less common. And if it's performance-related, can you maybe say roughly where you hit that performance bottleneck and made the switch? Yeah, so just to clarify, you're referring to having an Argo CD operator in the hub cluster in its own namespace versus a separate one for each of the clients, right? That's what you're talking about. Yeah, exactly, a single Argo CD instance doing the deployments versus multiple. Yeah, so I guess the thinking here was that we would evaluate that as we scale and see if it's necessary. So first we wanted to start out with seeing if we could shard across multiple hub clusters, which is one step below what you were mentioning, and see if that could actually scale well.
But I think you were probably also bringing that up because of the security concerns they raise on the security blog, about RBAC not being a silver bullet and things like that. Generally, when we were building this out, the main consideration we had was: how do we help consolidate maintenance and best practices and simplify things for clients, while providing a secure experience? Because we had the RBAC in place, it provided a secure experience and simplified things. So that's a great question.

If I can extend just a little bit. One thing I've seen is people trying to get a single pane of glass for deployments across an SDLC. If your setup's not like that, I've seen setups where you have a dev cluster and a prod cluster with the same versions of applications deployed to both. So if you have one instance, then you have that. But I'm wondering, are you tracking applications through a set of clusters, or are they kind of orthogonal? Sorry, what was that last part, or are they kind of what? Orthogonal, completely separate. Yeah, so I'm not sure if I caught the entirety of the first part, sorry about that, but all the Argo CD applications are technically living inside the installation namespace on the hub cluster. So they're being tracked by the Argo CD operator on the hub cluster, but those applications are of course deploying out to the remote clusters themselves. We can also talk offline, because I feel like I might have missed something important in the first sentence or so, and I'm sorry about that, yeah. Yeah.

Great presentation, Srinivas. So I have a question: you create the namespaces with different tools. I think you could use Crossplane, but right now we use Terraform, and we have resource quotas on top of the namespace. When you deploy using Argo workflows, the tasks basically, have you ever seen any problems where you hit the quotas? Yeah, that's true, we have. That was part of why we had to modify the out-of-the-box resource allocations for the workflow executor, both for the main containers and for the executor itself. When you modify the resource allocation for the executor, that automatically trickles down to the init container as well as the wait containers, which sometimes affect whether or not the workflow task will run properly. And the main container itself is just where the task runs. But that's part of why we did that. The namespace-level quotas that we enforce are configurable, so teams that think they're running very heavy workloads can come to us; there's a process in place for them to increase the quota. But we have seen issues with the workflow-level resources themselves, and that's why we had to change those limits to accommodate.

Okay, I'm not yet an Argo customer, but is there any specific reason that you are installing it yourselves rather than utilizing the SES platform for Argo? Sorry, the last few words, I think I just missed them. I'm saying, you are installing Argo CD in your cluster. Yep. I mean, are there any limitations in the SES environment for Argo? Well, I think I touched on the provisioner earlier. It just gives us more granular control over how we manage certain parts of our architecture, I think. Again, as we scale up, I think we're still looking to grow.
We have a good multi-tenant architecture right now that we're satisfied with, but depending on the needs of our clients, and as you can see we have a lot of different use cases, we might very well change based on where that direction goes. But for now, I think we really wanted flexibility. That was the main point, yeah. Okay, yeah, okay. I'm getting the signal to wrap up, so feel free to come up; I'll be in the hallway. It's lunchtime. Thanks for attending.