Welcome everybody, thank you for joining us. You're the Friday crowd. You stuck it out, you didn't go to the airport, so give yourselves a round of applause. Great job. So we're gonna talk about how Adobe planned for scale with Argo CD, Cluster API, and vcluster. Joseph, you wanna introduce yourself?

Yeah, I'm Joseph Sandoval. I'm with Adobe, as it says in the title. I'm now a product manager in our Dev Experience group, which really means I'm still working with the underlying Kubernetes infrastructure. And in my alternate life, I get to hang out with the Kubernetes SIG Release team — I've been part of about five release cycles as a branch release manager associate, and I'm still trying to retain that, because it's something I highly value.

Yeah, and I am Dan Garfield. I'm the co-founder and chief open source officer of Codefresh, and I also feel like I live a dual life, because I'm also an Argo maintainer, and I helped start the GitOps Working Group and OpenGitOps, the GitOps standards — a CNCF project that I've been very pleased to see a lot of pickup on at KubeCon and in lots of people's slides. So we're gonna talk about Adobe and how you're scaling.

Yeah, so a lot of us are familiar with the brand, but that's just one aspect of what we do at Adobe. We have an analytics area. We're very big in AI and ML and improving our products with those kinds of developments. There are also other areas like marketing, so there's quite a portfolio that Adobe has. If you've hit up any of the talks this week — there are a couple of people here who have been speaking about what's been happening — you may have heard the name Ethos. Ethos is our cloud native platform. If you're gonna do anything cloud native at Adobe, that's the spot it's gonna happen. This platform has existed for about five years, so a lot of change has happened, a lot of development. The interesting thing is that we've scaled. The cluster counts have grown; it's dynamic. We have about 230 clusters, roughly 18,000 compute nodes, 1.8 petabytes of memory, and about 500,000 CPUs. Along with that, the project is growing — I think the last count was 2.3 million. And we're a company of about 26,000, and you can imagine a lot of those are developers.

So the great thing is we're scaling, change is happening, and it's exciting to see. We're pushing things as far as they go. But that also presents some challenges, as a lot of us who run platforms know. When I refer to the platform — we have quite a few teams doing a lot of different things there — the team I'm on is really foundational. We really care about providing the bedrock, so that the teams building the developer experience, or the paved path, or the teams running large-scale dedicated clusters, have the space to do the things they need to do. So we get a lot of requests. Some of them are: hey, we've got an environment where we wanna be able to test our changes; we want ephemeral clusters; we want these things to spin up on demand. And a lot of us on our side are just like, that's challenging — how do we do this? Because we have some pressures: we wanna keep an eye on cost. Some of the other things I'm starting to see emerge are requests to really push out more clusters.
Push compute further out to the edge. Find ways to support a lot of the use cases that are gonna take our products even further. As well as just getting out of the way — really enabling teams. Some teams wanna go fast; they wanna be on current release versions. Other teams wanna be more guarded. And we're managing all of this while we have dynamic times of year when things have to scale, like holiday periods. So we really have to create this layer, and this is where the challenge comes in for the team I'm working with: how do we create that separation, that sensible layer? Because we have challenges of our own. We're trying to keep the fleet current, trying to keep the fleet consistent. We have components and all these things that make up the underlying platform — it's not just Kubernetes. We have tooling around security and cost efficiency. At our scale, we have to keep an eye on cost.

All right, let's back up and give a little backstory, because I mentioned earlier that we've been around for five years, and with that there's been a lot of change. This lays some of the context for our current challenges. Thanks for your patience with this. As I was saying, we've been around for five years, and the setup was very reflective of that start. The platform started off with Mesosphere, which then evolved into Kubernetes. A lot of the way they built and deployed on Mesosphere influenced what happened afterwards, meaning they had a Git repo that mirrored the structure of the org at the time, and the way it worked was very tightly coupled. And that was great — it scaled us up. We were able to keep a cadence of weekly releases. But then as growth started to happen, things got a little slower. There were obviously some challenges, because not everybody's on the same release schedule or cadence.

Over time we pivoted to Kubernetes — scale was a little different from Mesosphere in how you design things, but the team was able to keep the train going, and things were working. But over a few more years, things started to develop a bit more impedance, and this is when we had to start really thinking about where we were headed. Because of the tightly coupled way we were building and deploying releases, all changes needed to be reviewed by an all-knowing maintainer. Even if you had a service way over here that you knew very well, because it touched the core, we needed those individuals to make sure it wasn't gonna cause any implosion, any friction, or cause a release to have a problem. That system had really served its purpose, but it was time to evolve.

So we had this idea. At the time — it was still early on — a few of us were looking at the Argo community, and we saw some areas where it was like, hey, there are some challenges there, but it looks interesting. But we thought maybe we could iterate on what we had at the time. Maybe that was the fast way to unblock ourselves — to decouple releases.
And that was our next attempt. I think, Dan, this is about the time I met you — I think we were talking about this. We overlaid Argo Workflows, and to do that we really had to create a workflow of workflows, which made things a bit more complex, not as easy, but it did unblock the trains of releases to individual clusters. So that's a good thing. But the challenge of maintainership was still there, and with a lot of the features we had to start thinking about, it was kind of like, wait a minute — are we re-implementing something that's already out there, which is Argo CD? But we went with it. It was kind of a lift-and-shift approach, and this is what we ended up with: Argo Workflows with the traditional CI/CD approach.

I think, Dan, I went to you — I don't know if you made a face or not, but I did say, hey Dan, what do you think of this idea? Mainly I was just asking about scale: how far can we scale? Because I was looking at the existing fleet we had running, and I was looking at where we were going to market and where we needed to be, and I could see that same wall down the line. I know you had some thoughts on the traditional approach.

You know, we're very used to — is my mic out? No, you're good. Okay. So we're very used to using CI/CD as DevOps teams. We've been using it a long time. We build a pipeline — how many of us have built those pipelines literally 10,000 times? I'm gonna run a build, I'm gonna run some tests, I'm gonna deploy something, I'm gonna run some additional tests. Maybe I built some logic to do a rollback if I'm getting fancy. And for the use case you needed to support — you outlined so many diverse use cases — you needed a really complex workflow. So you had a workflow of workflows. It was intelligently designed, but that also meant that at the end of the day it was very complex. And we all know that CI/CD is an imperative operation, and creating idempotency inside a CI/CD pipeline is really difficult. I said deploy — did it work? Do I need to add a retry step? And you get into doing retries and those kinds of things, and it's not event driven, just because implementing idempotency is really hard. So this approach you had built got you from point A to point B, but you were also looking at point X down the line and thinking, how are we gonna scale this? You've got Mike Tougeron on the team who can manage this, but you don't have 15 or 100 of him. So how's that gonna scale up? And so this is when we started talking about GitOps, and I tried so hard not to make a face, because I was like, this is cool, this is great — maybe some things to think about.

So the approach was, well, we're doing CI/CD, let's just extend it, right? We talked about some of the issues with that, and how you realized you were re-implementing a lot of the features that are actually built into Argo CD. There's a lot of logic in the Argo CD reconciliation flow and those controllers, in how they operate to create idempotency and simplify the operation. Having to rebuild all that logic yourself just didn't make any sense. And because of the way workflow-of-workflows and CI/CD work, it's fairly rigid, and it's more difficult to create these self-service operations. So this is the point when we decided: okay, let's get into a GitOps model, let's get into Argo CD.
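To make that shift concrete, here's a minimal sketch of what a declarative Argo CD Application looks like — the repo URL, path, and names are hypothetical placeholders, not Adobe's actual setup. Instead of scripting deploy-and-retry steps, you declare the desired state and let the controller reconcile it; `selfHeal` reverts out-of-band drift and `prune` removes resources deleted from Git:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: platform-addons            # hypothetical app name
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example-org/gitops.git  # placeholder repo
    targetRevision: main
    path: addons/base
  destination:
    server: https://kubernetes.default.svc
    namespace: platform
  syncPolicy:
    automated:
      prune: true      # delete resources that were removed from Git
      selfHeal: true   # revert changes made outside of Git (drift)
    syncOptions:
      - CreateNamespace=true
```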
Now, I think most people at this point are familiar with OpenGitOps — opengitops.dev — the GitOps principles. Nobody? It's like a handful of people. Okay, so there's a couple, all right. Go to opengitops.dev to pull these up — go ahead and do it on your phone. There are four principles that go into GitOps, and they involve how we manage state: our actual state and our desired state. I'm not gonna go through all the principles right now, but you should read them and potentially memorize them, because basically what we're trying to do is keep track of our actual state and make sure that there's no drift. Drift is when something changes in our state that was not defined in our source of truth — it was not defined in Git. And I talk to a lot of people who are like, hey man, I've been using Terraform for five years, I check it into Git, I apply my plan, I feel like I'm doing GitOps. Smells like it. Smells kinda like GitOps. And you're at least halfway there. I actually talked with somebody recently who told me they went to check all their Terraform and found that over half of their old Terraform plans had drifted, and they couldn't easily reconcile. That means things were getting changed in production that they weren't aware of. And that's shockingly common.

So the GitOps style is that you have a source of truth, and that source of truth is Git. In years past, when we were doing CI/CD pipelines, a lot of the time our source of truth was sort of inferred from the operation happening in a pipeline. But what happens when a step fails? What happens when a webhook fails? What happens when the state has changed and you can't detect it, so you're trying to push changes onto something that's not what you expect? All of these situations are the reason GitOps was created in the first place. So in your case, one of the things we wanted to do was handle pull request generation. That's right, yep. And that's because with GitOps, the source of truth is Git. So generating pull requests is important, and then generating things off of pull requests is important — it makes idempotency easier. And then you also needed to handle isolation, because you needed to be able to test operators. Yes — we can't give everybody full permissions on the cluster. That was a big request: hey, I have a CRD, I want to test these things — but we want to give that to them in a safe manner. Yeah, when you have an operator and you go to upgrade it, it's tricky to test, right? And in your case, you're supporting not only multiple clouds, but multiple Kubernetes versions. So having all of that operate together in a GitOps fashion would be very valuable. It is, yep.

So, a couple of key tools for this. Yeah — I named them all off in the title, but I'll start with vcluster. This one came on my radar last year at KubeCon in LA. Lukas gave this great talk, and I was just like, wow, this is kind of neat, because all of a sudden we can spin up a virtual cluster that runs in a namespace. As far as our developers are concerned, they feel like they have full API access; they can do whatever they need to do. But we can make sure that the host's security is enforced. And it's really easy for rapid provisioning. So it was just a natural fit for that: hey, I want this quick ephemeral thing, I want to test this thing.
And we want to be able to just let people test as often as they want without breaking each other. Yeah — now those vclusters, in this case, we're going to be defining using Cluster API. If you're not familiar with it, Cluster API is an operator that allows us to take manifests describing vclusters or other cluster resources, and Cluster API will make sure those are spun up and reconciled. And because all of these things are essentially just Kubernetes manifests — they're just Kubernetes resources — there is a tool that is designed perfectly well for synchronizing them, and it's called Argo CD. Hands up, who's familiar with Argo CD? Okay, most people. So on the GitOps thing people were like, I don't know what that is — but Argo CD, everybody's got it.

Within the Argo project, there are four main tools. There's Argo CD, which all of you are familiar with, which is great at reconciling resources: you define a Git source of truth, you define a destination, and the reconciliation engine takes care of the rest. There's Argo Workflows, which is a general-purpose workflow engine for Kubernetes that lets you run any kind of workflow, with every step operating in its own pod. It's very popular with data pipelines and ETLs, but it's also incredibly popular for scripted events that need to happen. Then there's Argo Events, which is for eventing and triggering these things. And there's Argo Rollouts, for doing progressive delivery.

Yeah, so all of these things are great. I'd add one note about Cluster API as well. We're in this great time — for a long while we didn't have that kind of API for infrastructure. It's always been a cloud-native experience for developers, but as operators we were always kind of trailing behind. Having that, and then seeing the momentum of adoption over the last year, was the confidence builder for me. Any time I see these projects reach that inflection point, it's a great time to jump on board.

Oops, go back. So this is where we landed. This is the architecture — right slide? Yeah, you're good. Okay. This is the architecture we landed on for this use case, where we wanted to target that pull-request environment: a developer opens a PR and wants to test an operator or application that we'd need to give full access for. The ApplicationSet there in the upper left is where the magic of building something happens. We use the PR generator to generate a vcluster — in this example, with the support of Cluster API. And the workflow is pretty basic: the event source is looking for the new Cluster API resource, the Argo Events sensor picks up the event and triggers a new Argo Workflow, and then the workflow waits for the new cluster — the vcluster — to be fully created, and gets the config, and everything else that needs to be in that cluster, into Argo CD. I think I may be in the wrong spot — yeah. From there, Argo CD syncs and generates all the resources into it: the Prometheus operator, the ingress controller, all the things that make it that unique platform. And all of a sudden you're bootstrapped — you've got this new vcluster. For us that was the real inflection point, and the great thing about this approach is that it could also be leveraged for regular clusters, which we also wanted to do.
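For a rough sense of the Cluster API side — this is a sketch based on the cluster-api-provider-vcluster project, with names, namespaces, and versions as assumptions rather than exactly what Adobe runs — a vcluster is just a pair of manifests that Argo CD can sync like anything else:

```yaml
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: pr-1234            # hypothetical: one cluster per pull request
  namespace: vclusters
spec:
  controlPlaneRef:
    apiVersion: infrastructure.cluster.x-k8s.io/v1alpha1
    kind: VCluster
    name: pr-1234
  infrastructureRef:
    apiVersion: infrastructure.cluster.x-k8s.io/v1alpha1
    kind: VCluster
    name: pr-1234
---
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha1
kind: VCluster
metadata:
  name: pr-1234
  namespace: vclusters
spec:
  kubernetesVersion: "1.25.3"   # assumption: pin the Kubernetes version under test
```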
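And the event-driven glue can look something like this — a hedged sketch using Argo Events' resource event source to watch for new Cluster API objects and fire a bootstrap workflow. The names, namespaces, and the referenced WorkflowTemplate are hypothetical:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: EventSource
metadata:
  name: capi-clusters
spec:
  resource:
    cluster-created:
      namespace: vclusters        # watch where the Cluster objects land
      group: cluster.x-k8s.io
      version: v1beta1
      resource: clusters
      eventTypes:
        - ADD                     # fire when a new Cluster resource appears
---
apiVersion: argoproj.io/v1alpha1
kind: Sensor
metadata:
  name: bootstrap-on-cluster
spec:
  dependencies:
    - name: new-cluster
      eventSourceName: capi-clusters
      eventName: cluster-created
  triggers:
    - template:
        name: bootstrap-workflow
        k8s:
          operation: create       # create a Workflow for each new cluster
          source:
            resource:
              apiVersion: argoproj.io/v1alpha1
              kind: Workflow
              metadata:
                generateName: bootstrap-vcluster-
              spec:
                workflowTemplateRef:
                  name: bootstrap-vcluster   # hypothetical template: waits for
                                             # readiness, registers the cluster in Argo CD
```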
So this was kind of the stepping stone. I don't know, Dan — you probably run something similar. Yeah, this approach is just so impressive and strong, because it means you can test those operators against multiple Kubernetes versions, and you can do it in self-contained pull-request environments that are generated via ApplicationSets. A lot of people are familiar with ApplicationSets as a way to generate applications in Argo CD, but within ApplicationSets we have this concept of generators. Generators include things like a pull request generator, so all your pull requests can generate applications. All of your Git repos can generate applications. A folder structure, a list, any of these kinds of things — and they can be used in combination with one another in what's called a matrix generator; there's a sketch of one below. This environment also allows you to save a lot of money, because running a vcluster is ultimately incredibly cheap compared to running an entire cluster.

There's a great preview — a prequel to this talk, if you will. There is, there is. Yeah, it was done the other day by Mike Tougeron, who's in the audience. If you really want a deeper dive, it's not posted yet — I expect those talks to come up in the next few weeks — but it's hundreds of clusters sitting in a tree with Argo CD. It goes into a lot more detail, and you can find the patterns there that you may be able to leverage yourself. I think it also has a repo you can pull from to see how all this magic happened.

But this talk — yes, it is about the things in my title, but the key thing I always have my eye on is scale. I have an existing fleet, I know how big this fleet is gonna be and where it's going in one to three years, and I'm trying to figure out: all right, we built this thing, we've hit scale issues in the past — this is where I'm gonna set us up to hit it out of the park. So I came to Dan again — and this is true, seriously — I was like, what are the scale points? Does anybody have any baselines out there that I can pull from? Because I was looking around, trying to figure out how to communicate our plan for scaling to the team. And I think that's when you were like, hey, let me look at this — how can we scale this?

Yeah — who's read the official benchmark report from Argo that tells you when you need to consider scale components? Nobody? Oh, because it doesn't exist, right? So that was a call to action — hopefully I'll write one coming out of this. Henrik raised his hand because he's like, dude, we wrote the book on scale. He knows what he's talking about. So, scaling with Argo CD and with GitOps: the approach Adobe is taking takes advantage of a lot of the structure that GitOps provides — directory structure, pull request generators from Argo CD, those kinds of things. So it has really great primitives for scale. But when is Argo CD going to run into performance challenges? What are the security considerations we need to make? And are there any organizational-structure considerations we also need to take into account? There's a blog I put together about this called Scaling Argo CD Securely in 2022. It's a blog I try to keep evergreen with what's happening in the community, any time we learn something new. And it talks a little bit about some of the approaches we're going to cover today.
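Before getting into the scale numbers, here's roughly what that pull request generator looks like in practice — a sketch assuming GitHub, with hypothetical org, repo, and label names. Each open PR carrying the label stamps out an Application pointed at the PR's commit (and, in the pattern above, the vcluster definition that goes with it):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: pr-environments
  namespace: argocd
spec:
  generators:
    - pullRequest:
        github:
          owner: example-org          # hypothetical org
          repo: platform-operators    # hypothetical repo
          labels:
            - preview                 # only PRs that opt in via this label
        requeueAfterSeconds: 120      # how often to poll for new/closed PRs
  template:
    metadata:
      name: "pr-{{number}}"
    spec:
      project: default
      source:
        repoURL: https://github.com/example-org/platform-operators.git
        targetRevision: "{{head_sha}}"   # deploy exactly the PR's commit
        path: deploy/
      destination:
        name: "pr-{{number}}"         # the vcluster registered under this name
        namespace: default
      syncPolicy:
        automated: {}
```

A matrix generator would wrap this together with, say, a list generator of Kubernetes versions to get one environment per PR per version — which is how the multi-version operator testing described above falls out of the same pattern.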
So one of the main considerations, as people plan to manage a fleet of clusters, is whether you should go with the hub-and-spoke model — that's one Argo CD instance managing many clusters — or a per-cluster model: should I have an Argo CD instance in every cluster? The per-cluster model is especially popular with edge deployments, where a cluster needs to be able to operate independently under potentially bad networking conditions, where things might get pulled offline. A per-cluster approach is also favored if you're doing an air-gapped instance — if you have somebody who needs to walk a repo update on a thumb drive into some secure site or something like that. Both of these approaches have their pros and cons; they both have their use cases. Sometimes I hear people debate them in terms of which one is ultimately better, which doesn't really make sense, because it's really based on the situation. But at what point would you need to split up a hub and spoke, because you might need multiple? It's not just a matter of having one, because in Adobe's use case you've got thousands of developers, you're onboarding more, and you have these additional partner development teams to support. So there's not gonna be a situation where you say, we're just gonna run one Argo CD instance for the entire company, right? That's not gonna happen. (More on registering those spoke clusters in a moment.)

So, to benchmark Argo CD: first of all, there is really excellent documentation in the docs, folks. It's in the operator manual under High Availability, and it goes through and explains all of the components of Argo CD and how they scale, and why you would use a number of these things. What we want to do is look at these use cases and then compare them against real benchmarking data. To do this, there's actually a tool hidden in plain sight in the Argo CD repository. There is a folder called hack, and under hack there are a number of tools we use to do benchmarking and performance testing of Argo CD. When we make changes to Argo CD, as part of the automation we actually spin up resources and check the reconciliation times, so we can make sure performance isn't impacted by a particular change, or so we can understand that change. Under the hack folder there is a tool called generate resources, and you can see an example of it running here. You can specify: generate X number of applications, X number of clusters, X number of repositories, X number of projects. Each of these hits a different component of Argo CD's scale. In this case, this would be set up to spin up 200 vclusters and then assign 7,500 applications randomly across those clusters.

There's another tool within hack called simulator. Simulator is essentially a chaos engine — it simulates a really bad set of developers who go around deleting stuff. So you can spin up simulator and it will go around and mess with applications and trigger reconciliation and synchronization. You can see an example of me running it here on the side. For planning for Adobe, we actually wanted to scale up and find the breaking points: up to 10,000 applications and up to about 200 clusters. You can see the plan here. Now, with Argo CD, the main thing isn't the application count — it's actually the number of Kubernetes objects being managed.
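On that hub-and-spoke point from earlier: registering a spoke is itself just a Kubernetes Secret that can live in Git alongside everything else. A sketch, with a placeholder cluster name, endpoint, and credentials:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: prod-us-west               # hypothetical spoke cluster
  namespace: argocd
  labels:
    argocd.argoproj.io/secret-type: cluster   # marks this Secret as a cluster registration
type: Opaque
stringData:
  name: prod-us-west
  server: https://10.20.30.40:6443            # placeholder API server endpoint
  config: |
    {
      "bearerToken": "<service-account-token>",
      "tlsClientConfig": {
        "caData": "<base64-encoded-ca-cert>"
      }
    }
```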
For each of those objects, Argo CD has to reconcile it, and that takes memory; it also hits the Kubernetes API, so it takes additional overhead. So this was the testing plan, and here's a screenshot of Prometheus reporting over 10,000 applications. Going through this exercise, we were able to look at all the different breaking points we might encounter. I'm not gonna go through all of them. Now, when you install Argo CD, there are two flavors: there's Argo CD and there's Argo CD HA. There are actually more flavors than that, but Argo CD HA replicates Redis and provides the replication and scale capability we're gonna want to look at.

Some of the things I ran into, for example: once I got up around 7,000 applications, I started to have issues just with my browser loading the dashboards properly, because there were so many objects in memory that my browser was having a hard time keeping track of them. As I moved up from 1,000 to 2,500 applications, I noticed that my mean reconciliation time started to creep up and my reconciliation queues started to grow. Now, this is very solvable — there are settings in Argo CD that you can use to solve all of these problems. My reconciliation queues increasing means it might take a long time to get back to an application to synchronize it. So as I increased, you might get into a situation where you're waiting 10 or 20 minutes for an application to synchronize, if you haven't tweaked or managed these settings properly. So this gave us a good baseline of some of the breaking points we might hit.

Now, this is an incredibly conservative slide. I would view it as a potentially controversial slide — I feel like some of the other Argo maintainers are gonna come and yell at me about it afterwards, so we'll see. It's very conservative because, if you're planning to support 40,000 developers or something, you just wanna have a benchmark in mind: what should I expect where I just don't have to touch anything and I know it's gonna work, and at what point do I need to start thinking about the scale aspect? Yeah, so that's a pretty good starting point, I guess, if you're out of the box. Because like I said, I'm trying to scale, but cost is one of the things I'm thinking about as well. So from that angle, I think it's always a better approach to take that conservativeness and grow into it as you get some metrics and modeling — like, hey, this is what our patterns look like. Yeah, the 1,500-app line here is really based on the fact that most people have more than seven objects in an app — they mostly have 100 or 200 objects — so the object count is really the driver there. At 15,000 objects you're gonna be very safe, very comfortable; you're not really gonna run into any issues. At 50 clusters, you're not gonna have any issues. If 200 developers are running around doing stuff all the time, you're not gonna have any issues. So this is a very safe benchmark.

Now, with tweaking we can go far beyond this. And when I talk about going beyond this, I'm not talking about catastrophic failure. When I was running 10,000 applications, they were synchronizing. Sometimes it would be slow; sometimes it might crash and restart, but it would pick up and run again, so it was fine. It's doable — it's just a matter of tweaking it so you actually have a really good, reliable experience.
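For reference, the kinds of settings being alluded to live in the argocd-cmd-params-cm ConfigMap. A sketch — the numbers here are illustrative guesses, not recommendations; the defaults are 20 status processors and 10 operation processors:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cmd-params-cm
  namespace: argocd
data:
  controller.status.processors: "50"      # more workers draining the reconciliation queue
  controller.operation.processors: "25"   # more parallel sync operations
  controller.repo.server.timeout.seconds: "120"  # headroom for slow manifest generation
```

Beyond that, sharding the application controller across clusters is a matter of scaling its StatefulSet and telling each replica how many peers exist via the ARGOCD_CONTROLLER_REPLICAS environment variable — which is the controller replication that comes up again a bit later.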
So this is the part I get kind of curious about, because it's an area where I'm challenged. I have workloads that require isolation, but I'm thinking about how many Argo CD instances I'm gonna need to support these environments. What did you find on the security side of it?

Yeah, one of the interesting parts of your use case is that you need to support multiple VPCs, which means you're probably gonna have multiple Argo CD instances, because you don't wanna punch through firewalls — in most of those cases, they're VPCs for a reason. You also have on-prem instances, you also have edge clusters — those potentially mean additional instances — and then you're also supporting partners. Now, the partner one is interesting, and it's one most people aren't aware of. Argo CD multi-tenancy is very good, and we have done an incredible amount of work — we started SIG Security about 18 months ago, and we've been focused really heavily on security within the Argo project. We were actually one of the first projects to pass 100% on the CNCF Sonatype metrics; that was accomplished last week, and they came and gave us a big badge at our booth and there were lots of high fives. But the multi-tenancy should be understood in the context of an organization. With Argo CD, you're telling people: add a repository that contains arbitrary code, and execute manifest generation. That could include Kustomize, which can reference additional external repos, and it also includes things like Jsonnet, which is a full programming language. So you're saying: add arbitrary code, execute arbitrary code, in our space. Now, we've done a lot of work to make it very difficult to do things like grab secrets from the cluster or output them, and there are a lot of guardrails to protect that, so it's considered very safe to use within an organization. However, we would not put an Argo CD instance out on the internet and just say, here's multi-tenancy, everybody can go deploy to their namespace.

This is actually something that came up for us at Codefresh, because part of our offering includes a hosted Argo CD instance — every account gets one. So we actually also took the vcluster route: we spin up a vcluster for every new account and we put Argo CD on it, because while the RBAC is excellent, as a project we don't consider Argo CD safe to use in a public multi-tenant environment like that. So where you have partners, that's gonna be an impact.

Conway's law generally prevents a lot of people from running into these issues early, because one team spins up an Argo CD instance, another team spins up another, and as many organizations mature, they start thinking: okay, how do I make sense of all the instances that are running? This is a good time to start thinking about something like a control plane. In the interest of full disclosure, this is something that Codefresh sells, so I'm not gonna make a big sales pitch, but it allows you to manage and track versions of all your instances. You can have a single UI for all your applications and all your clusters. You get better security isolation, because the cost of splitting up instances is lower, and you get better resiliency and a lower blast radius. So it's something to think about as you expand.
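The in-organization guardrails being described are mostly expressed through AppProjects. A minimal sketch for a hypothetical tenant team, restricting which repo they can deploy from, where they can deploy to, and what kinds of resources they can touch:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: AppProject
metadata:
  name: team-analytics              # hypothetical tenant
  namespace: argocd
spec:
  description: Analytics team's applications
  sourceRepos:
    - https://github.com/example-org/analytics-gitops.git  # only this repo
  destinations:
    - server: https://kubernetes.default.svc
      namespace: "analytics-*"      # only namespaces matching their prefix
  clusterResourceWhitelist: []      # no cluster-scoped resources (CRDs, ClusterRoles, ...)
  namespaceResourceBlacklist:
    - group: ""
      kind: ResourceQuota           # can't loosen their own quotas
```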
Some other considerations: most people move to a two-repo model once they use Argo CD — that's something we recommend — where you have application repos over here, and then you have your GitOps repo, or GitOps repos, defining what should be deployed. That often means multiple teams are operating in the same Git repo, which is not something organizations are used to, and that's when something like a CODEOWNERS file becomes very valuable. CODEOWNERS files allow you to set permissions within a Git repository based on folders: you can say this team can add stuff to that folder, and this team can add stuff to that folder. Most organizations aren't familiar with it; open source organizations are, because we've been using it for a long time. But it's something to consider — there's a sketch of one below. And then there's getting GitOps certified. There's a training for this, and you actually got certified. I did — your first one and the second one; I think I got 93%. Excellent. Most people have to retake the second one. I didn't tell you how many times I took it. Four or five times. So this is currently free at learning.codefresh.io, so it's worth grabbing — it won't be free forever.

So what does all this mean for Adobe? If every pull request is generating a cluster, it means we're gonna need to replicate the Argo CD app controller. This is a built-in feature of Argo CD to help you scale — it's basically a StatefulSet that divides up and shards how we manage all of the clusters and the reconciliation. For the scale we're looking at over probably the next year, this is gonna be very sufficient. As we go beyond that, supporting potentially thousands — getting up into the 10,000-different-environments range — we might wanna think about actually splitting up into multiple instances at that point. If every PR generates a cluster and we also have an app per cluster, we need to keep an eye on object counts. If you try to run a thousand objects in one application, you're probably gonna have some reconciliation challenges. To solve this, it's good to use an app-of-apps pattern, where you basically split the application up into multiple components. And this has other advantages, because you can do things like say: this app's synchronization needs to be complete before that app's synchronization starts. So you can actually do some dependency management if you need to — there's a sketch of that after this section too. Ideally with GitOps we try to avoid those, but this is the real world, so who am I kidding? The other thing is that supporting these partner orgs means you're gonna have to split up Argo CD instances, because you're gonna want that better isolation, to prevent any noisiness.

Yeah, I think Conway's law is helping me right now, but I do see some opportunities for us to rethink where we're going with it. Yeah — and that might be a point to consider something like a control plane. Yeah. So this really means we kind of have a plan. We have this fleet in motion, and looking at these numbers, we know there are some opportunities and a lot of strategies that I think a lot of us can leverage and take from here. As far as where we're going with Argo CD and that infrastructure, at least there's a plan for how to scale it — and we're trying to do it, like I said, with cost efficiency always in mind, and security. So keep those in mind.
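A CODEOWNERS file in a shared GitOps repo might look like this sketch — the team and folder names are hypothetical. On GitHub, the last matching pattern wins, so the catch-all fallback goes first and the per-team folders override it:

```
# CODEOWNERS for a shared GitOps repo (hypothetical teams).
# The last matching pattern wins, so the fallback comes first.
*                         @example-org/platform-team
/clusters/prod-us-west/   @example-org/platform-team
/apps/analytics/          @example-org/analytics-team
/apps/marketing/          @example-org/marketing-team
```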
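And for the app-of-apps dependency ordering mentioned above, a hedged sketch: a parent app's child Applications carry sync-wave annotations, so that, say, operators and CRDs land and turn healthy before the workloads that need them. Names, paths, and the destination cluster are placeholders:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: cluster-operators
  namespace: argocd
  annotations:
    argocd.argoproj.io/sync-wave: "0"   # synced first
spec:
  project: default
  source:
    repoURL: https://github.com/example-org/gitops.git
    targetRevision: main
    path: clusters/pr-1234/operators
  destination:
    name: pr-1234                       # hypothetical registered cluster
    namespace: argocd
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: cluster-workloads
  namespace: argocd
  annotations:
    argocd.argoproj.io/sync-wave: "1"   # synced once wave 0 is healthy
spec:
  project: default
  source:
    repoURL: https://github.com/example-org/gitops.git
    targetRevision: main
    path: clusters/pr-1234/workloads
  destination:
    name: pr-1234
    namespace: default
```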
You may know this if you've been paying attention: Adobe has been speaking at this conference for most of the conference. This talk is actually part of the greatest KubeCon quadrilogy of all time — there is an Adobe Cinematic Universe happening at KubeCon. Is the GitOpsCon talk the prequel? That's the prequel — well, there's a whole prequel series from GitOpsCon, so there's even a prequel to that. So check out the extended universe of Adobe talks. We've got a whole bunch of resources, and I'll put these slides up — these will be uploaded into the schedule by the end of the day, and I'll tweet a link. You can follow me, @todaywasawesome. Yep, and me as well. And if you have any questions, we'll be in the hall. Thank you so much. Appreciate it. Thank you.