All right, we are live. Everyone, welcome to GitOps Guide to the Galaxy. My name is Christian Hernandez. As you can see, we are missing someone today — we're missing Chris Short. So unfortunately I don't have the cool intro that Chris Short does. He's having internet issues, so we hope to have him back soon. Hopefully Comcast can get out there at some point this week to help him out. But we are here, and I am really happy to be joined by two people from Intuit: Henrik and Alexander. I won't butcher your last names — I'll let you introduce yourselves. We're here to talk about Argo: Argo CD, and the Argo project in general, and to get some cool background about how Intuit is using Argo CD and how they brought it into the open source community. So Henrik, I'll let you introduce yourself first, and then we'll go with Alexander.

OK, thanks, Christian. It's a pleasure to be here. My name is Henrik Blixt. I'm a product manager at Intuit — I think my official title is product manager of platform and open source — so I'm the product manager responsible for Argo. I deal with a lot of regular product management issues around how we use Argo internally and what features our internal customers need. But I'm also working a lot outbound with the CNCF and with partners like Red Hat, figuring out how we take Argo and the Argo projects to the next level and how we work with the community to do that. I come from a background in OpenStack. Please don't hold that against me.

Sorry, we'll protect you. There are some mixed feelings when you're in the cloud native world and you say OpenStack — some very mixed feelings. But I had some good years in OpenStack. Cool, cool. So Alex, introduce yourself. What do you do?

Hi, everyone. I usually prefer to go just by Alex M., because my last name is not easily pronounceable.

I was going to try, but I thought, ah, I better not say it. But I can try.
Yeah, it's Matyushentsev, which is a lot to pronounce. I'm a principal software engineer at Intuit, and I'm also a maintainer of Argo. As you might know, it's a set of projects, and I've been involved with many of them, but most recently I'm working on Argo CD specifically. I'm developing it, and I'm responsible for running it at Intuit. That's why I know a lot about scalability challenges, and I'm happy to share what we've learned so far and help you use Argo CD.

Cool. Yeah, actually, one of the things a lot of people don't know is that Intuit was a customer first. Alexander maybe has a bit more background about the acquisition, and maybe I'm getting this wrong, so I'll just give an overview: Argo CD came from a company named Applatix, and Intuit liked that product so much that they acquired it. So can you go over a little bit, Henrik or Alexander, how that all played out — kind of the birth of Argo CD?

Yeah, and maybe I would give Intuit a bit more credit. Yes, we got acquired, and we brought some Kubernetes knowledge into Intuit, but Argo CD was actually created at Intuit. It was born at Intuit.

Yeah, it actually started a bit earlier. Intuit embarked on a modernization journey. Intuit has been around for a while, and there was a lot of legacy stuff — traditional data centers. Engineers were moving too slowly; there was too much red tape, too many things that needed to happen to do new releases, and too much time spent managing hardware, which is not really business critical to a financial software company like us. Managing hardware is not our core business. So a modernization journey started quite a bit before that, and as part of it they started looking at how to increase developer velocity.
How do we modernize our whole technology stack? As part of that modernization they started looking at Applatix, which at that point, as Alex said, had Argo Workflows — one of the other Argo projects. Shortly thereafter, I think about three and a half years ago, Applatix was acquired. And I think just a few months after that, as the Applatix team and more people at Intuit started building out the new platform — this modernization platform that we very thoughtfully named the Modern SaaS platform — Argo CD was born. So Argo CD came out of that in early-to-mid 2018, somewhere around there.

I can tell you a little bit about why it was created and what our goals were. I think it's important that we were not trying to create a new project just for the sake of project creation. We actually had a big challenge: we were supposed to onboard potentially thousands of developers — without much Kubernetes knowledge — onto Kubernetes. So we were trying to create a tool that would simplify that. That's it. And I think that really explains a lot of the design decisions we made with Argo CD. We knew there was no way we could ask every team at Intuit to run a Kubernetes application and manage Kubernetes configuration themselves. We knew it had to be the simplest possible experience. That's why Argo CD is like a GitOps operator as a service: you don't have to run anything to use it. You just open the URL and that's it — you can consume it. It also explains why Argo CD has a rich user interface: it's a way to save the support team from a ton of questions. Argo CD specifically tries to highlight problems in your application.
If your pods aren't starting, say, we show a big red icon — and it may eventually get even bigger. This is so developers can just see the problem and immediately find the answer for why it's failing, without having to bother anyone. Basically it's meant to scale the platform team: it replaces part of the work the platform team would otherwise do to support application engineering teams.

And just to give you an idea of scale: when this started, Intuit had about 3,000 developers, thereabouts, and like Alex said, virtually no Kubernetes experience. Today we have about 5,000 developers on this platform. It's fully on Kubernetes, all cloud native, and we closed our last data center about two years ago. So this is fully public cloud, fully cloud native, with 5,000 engineers. It's been a very quick shift to using this new platform and migrating off of — closing — our old data center. That's why it was extremely important, just like Alex said, to make sure this could be picked up without needing a year or two of training.

Yeah — what's the saying, necessity is the mother of invention, right? You had this need, so: let's just build a tool that is very easy to use. That's actually one of the things I really, really loved about Argo when I first started working with Argo CD — the learning path. The learning curve is not steep at all. It's easy to use, easy to get started, but then it's kind of like learning the guitar, like you can see back here: getting started is actually pretty easy — hey, I can play a few chords — but if you want that mastery, it's there for you as well. It's flexible.
So I kind of equate that to why I liked Argo so much: getting started was really, really simple. And it sounds like that was by design. To onboard thousands and thousands of engineers, you didn't want to bog them down with all the infrastructure stuff or all the delivery stuff. You just wanted them to be able to code rapidly.

Yeah, that's right. We took a lot of care about an easy getting-started experience, because we learned a few lessons the hard way when we were at Applatix. It's just crucial: unless you make it easy to use, it's extremely difficult to promote a project in open source, because there is a lot of competition.

Yeah. The point of entry is important, definitely.

That's definitely true. Forcing engineers to use something they don't want to use is not a good recipe for success. Even internally — even when you can twist people's arms, like they did when I was in the army — even if you can do that and tell people and force them to use something, that's not going to bring you a lot of happy engineers. And as we open sourced this and wanted to get it out in the community, it's even more important, because then people have a lot more choice, and if they don't like it, they're going to go somewhere else.

Yeah, exactly. Go ahead, Alex.

Since we're talking about the history, I think it's nice to mention that, yes, we started this at Intuit and we open sourced it. Obviously a project that was just created doesn't necessarily have a ton of users, and Intuit was kind of the driver of features for a long time.
But after maybe one year of development, it flipped, and by now I think the open source community is way ahead — the users in open source are actually moving even faster, and they bring more ideas than Intuit does. It helps us a lot. I guess that's where open sourcing pays off.

Yeah, it's funny as engineers, right? You write something — I have a couple of pet projects — you write something and you think it's useful, and you're like, okay, it's cool. Then you show it to other people and they have ideas, and you're like, I would never have thought of that. Even internally, your team has its own needs, but when you open source something and show it to other people, those ideas start coming in: you know what, that's a really good idea, I never would have thought of that myself, it's a great use case.

There's actually a question here, if you guys are open to taking questions — and it's about the history. This is from Washari, one of our regular viewers — welcome. He asks: since we're getting a history lesson here, what would you do differently today? In other words, what are you struggling with in Argo CD today, and what are the current pain areas that you see?

Yes, I think one thing I would change is that we were too focused on this multi-tenancy use case. So here's what we did wrong in the beginning — and we still try to improve it, and we keep doing it: Argo CD was a tool that helped manage applications declaratively, but it was itself managed only imperatively. That was something the community pointed us to. They asked: how come you have to click buttons to configure Argo CD, as opposed to using Argo CD to manage itself? I think we've improved a lot.
And we keep doing it — getting to a perfect state where all you can do is make file changes in Git, and that's how you configure Argo CD. Yeah, and this has improved a lot recently. But obviously it would have been much easier to get to that state if we had started with this mindset.

Nice, yeah. It's funny that you mention that, Alex — a lot of people have been solving that problem with Helm. I've written Helm charts to deploy Argo CD, but essentially you need that day-zero thing: something instantiates Argo CD, and then I apply the manifests that Argo CD itself uses. That's my entry point. But it's good to hear you're making those incremental changes to make it more declarative.

Now, before I get to my next question — this next question is actually for me, but I'll say it out loud. It has nothing to do with you guys, since you made this integration easy: why doesn't Red Hat support Dex out of the box?

I'll let you handle that one.

Yeah, exactly. So the Dex integration is actually there — you can use it. It's just that we can't support the Dex image itself. You can't pick up the phone and call Red Hat if something with Dex breaks; you have to go upstream. So we can't officially say we support it, but it actually works — Argo CD a hundred percent works with Dex. I think Argo CD also has an OIDC connector directly, but you can use Dex to broker that as well. So that's the answer: we can't officially support it, but it's there for you to use. You just can't call Red Hat for support, that's all.

Just going back to the last question — what we would have done differently.
One thing we've noticed is that we were almost a little bit a victim of our own success, because Argo, across the four projects, has been growing very quickly since its inception. We spent a lot of time with the community, helping the community and the many companies that reached out to us — a lot of one-on-ones with various companies, helping them with various things. And Red Hat — we're really excited about partnering with Red Hat, and about Red Hat stepping up, helping out, and doing a lot of good work with Argo. One of the things we maybe should have done earlier is more partnering: getting more companies to come in and help out as well. It was starting to strain our resources, just with all the support and all the external work we did — not necessarily feature building, but answering questions and doing random things like that. That's a really important part of growing the community; you just can't do it by yourself. So that's something we maybe could have started even earlier. But other than that, I think we've had a good run so far.

Yeah, it's funny how open source has changed — so now I'm going to talk about open source, because, you know, Red Hat. I'm kind of a big proponent. Open source in the past was seen as more of a hobbyist thing, more educational — the universities were using it. Now it's more of a strategic advantage. You see companies like Microsoft, Apple, Intuit going into the open source space and saying: hey, this is a valuable way to improve our software, and making those contributions upstream is now seen as a strategic move for our company.
So I think that's a good point you bring up, Henrik.

It's an excellent way of developing software. I've been a Linux user since the mid-nineties, and I've seen how that community and that software have evolved. I'm a big believer in open source, and it's really good to be at Intuit and see how much energy there is and how positive the whole management chain is about open source. And we don't monetize it. We do it, like I said, to build a good product and to work with the community. So I think it's a really good way of developing software.

Yeah, just to second what Henrik said: yes, I wish we had started trying to partner earlier — I think we would be moving much quicker right now. What we're trying to do is remove bottlenecks, share responsibilities, and give more freedom to contributors. We don't want them to wait several days for pull requests to be merged; we want separate owners who completely own a part of the project. This way we can move in parallel instead of going from PR to PR sequentially.

Cool, yeah. And as far as you guys using Argo CD — it's advantageous to software development; you have Argo CD, you have all these internal users. Henrik, I know you may have a few slides for us, but can you talk a little bit about how the Argo project — not just Argo CD — is being used at Intuit?

Yeah. I don't know if you want to talk about the details of the dev portal in Modern SaaS better than I do, Alex — I'm not sure if you want to start off with that, and I can fill in with some more numbers and data.

Yeah, sure, I can. I didn't prepare slides, but I think I can talk about it.
I guess I would have to start with Kubernetes itself — I guess it's even more important.

Well, it always starts with Kubernetes, right? In cloud native it's like: well, tell us your story — well, it started with Kubernetes. That's the seed for everyone.

Yeah. What really benefited Argo at Intuit is that we have a solid platform, and we do not ask teams to spin up their own Kubernetes clusters. As part of MSaaS we have a team — the name is IKS, which stands for Intuit Kubernetes Service — and the idea is that we let users create clusters by just clicking a button. And we have a strategy for how many clusters we're supposed to have. Through experiments we learned that Kubernetes is not yet ready to serve the whole company; you must have several clusters. I think the average cluster at Intuit is around 400 nodes, and this is working well — if you go beyond that, you may start facing some scalability challenges. That means in a company like Intuit you must have a lot of clusters, and you need some way to manage those clusters. We have an internal product that lets you create clusters, and then the product takes care of the maintenance of those clusters: it takes care of upgrades, it rotates AMIs if there are security fixes, and so on. That was a huge advantage for all users, because if you use just the cloud — just AWS — you're responsible for security upgrades and so on yourself. So IKS takes away at least this complexity. And the next thing at Intuit is that we don't even force developers to create namespaces. The same product that creates clusters also creates namespaces.
So to summarize, we kind of have a namespaces-as-a-service thing. You can click a button, and you don't give the name of your namespace — you just explain what you want to do. You're supposed to say: hey, I'm going to deploy a web service, I'm in business unit ABC, and I'm part of this subgroup of that unit. And you get a namespace with a meaningful, predictable name, in the right cluster. So for end users — for developers — it's not quite like Heroku, but kind of: they speak business terms. They explain to the platform that they want to run an application serving some business role, and the platform creates the namespace in the right place. Hopefully that makes sense.

Yeah. And on top of that, the developer portal Alex mentioned, where you click, click, click to start something — in addition to the namespaces, it also creates a Jenkins pipeline, creates a folder in Artifactory, creates three repositories in Git, basically sets up your whole environment, and deploys an example service. So it's super easy to get started. It also links in Jira and Wavefront and the other things we're using. Basically, in that portal, you click and you get everything set up for you — the namespaces and everything else you need to start writing code right away.

Yeah. So I would say we have kind of two — okay, three — user-facing applications. One is the dev portal, which tries to integrate all the tools in one place; this is what I think developers use the most. The next tool is Argo CD, and this is the tool to troubleshoot your application in Kubernetes. And the third one is the tool that manages namespace creation and cluster creation.
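As a rough illustration of that namespaces-as-a-service idea — deriving a predictable namespace name from business terms instead of letting the developer pick one — here is a minimal sketch. Intuit's actual naming scheme isn't described here, so the fields and format below are made up:

```python
def namespace_name(business_unit: str, team: str, app: str, env: str = "prd") -> str:
    """Derive a predictable, DNS-safe namespace name from business metadata.

    Hypothetical scheme for illustration only -- the real Intuit format
    is not public.
    """
    parts = [business_unit, team, app, env]
    # Normalize each part to a lowercase, dash-separated label.
    slug = "-".join(p.strip().lower().replace(" ", "-") for p in parts)
    # Kubernetes namespace names are limited to 63 characters.
    return slug[:63]

print(namespace_name("Small Business", "payments", "invoice-api"))
# small-business-payments-invoice-api-prd
```

The point is that the developer supplies business context ("I'm in business unit X, deploying a web service") and the platform deterministically decides the name and placement.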
And that tool, I think, is mostly for the platform team — I'm pretty sure the platform team mostly uses it to manage the roughly 300 to 400 clusters at Intuit. And developers — we basically want developers to never use any of these tools, except that you create the project once in the dev portal and hopefully never need it again. But if something doesn't go right — let's say your application is crashing in production — then they go to Argo CD and try to troubleshoot it and figure out what's going on. And as I mentioned, they don't actually have to configure anything; they don't manage Argo CD configuration. It's all abstracted away by the portal.

Yeah, I can share some numbers on roughly what scale we're running at. Since every application basically gets its own namespace, we're looking at about 11,000 applications, and about as many namespaces, at Intuit. I think we're probably getting closer to 400 clusters in total. Some of the production ones, like Alex said, are getting up there — hundreds of nodes — and we have somewhere between 15,000 and 16,000 nodes in total, though there are a lot of smaller clusters as well. So in total we have somewhere between 375 and 400 clusters serving our 5,000 developers.

One thing we measure is release velocity: how many releases the engineers do per week. That's a measure of how fast they can push out new releases. We don't use it as a strict goal in itself, but if you look at it historically you can see the release velocity going up, and you can use it as a comparative number — and adding things like Argo, or Rollouts with canary analysis, can help us increase that release velocity.
So that's why we think that's an important number. And you can see here, the number of applications in three years has basically gone from zero up to 11,000. It's accelerating more and more.

Yeah, exactly — both because we're onboarding more developers, and because the developers we have are also releasing faster, since we're increasing release velocity. More developers, higher velocity per developer — it's a double whammy, which means everything is growing faster and faster.

You know what I really like about that — by the way, that's really impressive, that trajectory going up and to the right really rapidly — what I really like about that slide is that it's something I'm going to show internally to Red Hatters. Some of our consultants and solution architects — you know, we're Red Hat, we're a reseller, we sell software, we have customers — our field asks whether customers should be worried about whether Argo can scale. I think that slide shows it absolutely does scale, and I don't think your five clusters are going to be a problem for Argo CD. So I love that slide. I love seeing how quickly you onboarded developers and engineers just by introducing tools that are meant to get the platform out of their way. So what I'm hearing is that all these tools being built — your Kubernetes-as-a-service sort of thing, Argo CD, the automation you're building around it — are really just about getting the platform out of the way of developers.

Yeah — automating all the things.

Yeah. And I'm happy to talk about scalability, because I feel it's still not fair to just say that Argo CD can scale. Definitely, yes, we do improve it.
It's a primary focus of pretty much every release: in every release we improve things. So don't worry that Argo CD is in maintenance mode and not getting new features — that's not true at all. Scalability and performance are maybe the most important topics in every Argo CD release.

Actually, speaking of scale, a few questions came in — two questions that I'll glom together and ask as one. They're asking about Argo CD HA: in terms of HA, but more specifically, how many instances of Argo CD does it take to run at that scale? A lot of people talk about Argo CD HA, and from my experience of using Argo CD, it's less about HA and more about running multiple instances of Argo CD — but I'll let you describe the scale at which you're using it.

Yeah. So at Intuit we have an instance of Argo CD per segment, where a segment is a part of a BU. It was not done this way for performance reasons; it was mostly to limit the blast radius. That was basically a business requirement — the VPs of the company wanted to make sure the BUs were isolated from each other, and I think that makes sense. That's how we ended up with 36 Argo CD instances — some of them bigger, some smaller. I think the biggest one manages close to 3,000 applications. It's one of the biggest instances we have. Plus we have one instance that we use internally to manage add-ons in all Intuit clusters — it manages a slice of a cluster, but across the whole of Intuit: basically a small slice in each and every cluster.
And we pretty much use that instance to challenge Argo CD — it keeps getting more and more pressure as new clusters get created. I feel bad every time I have to bump resources, and it forces the team to work on optimization instead of just adding resources. And when we get to that, I'll share what we did in the latest release — I think we had a couple of good improvements for performance and scalability.

Yeah, and like Alex said, the largest instance — if you compare it to the numbers I just showed — has about a quarter of all the applications on it. In terms of scale, that's fairly sizable. Another thing about scale that we get asked about quite a lot, when you talk about GitOps — and Argo CD per se — is the number of repositories: you get too many repositories, and managing that in Git becomes a hassle. Like I said, we give developers three repositories per application, so you can do the math on how many repositories we have. It's a fairly large number, and we don't really have any major headaches or concerns about managing that, because it's all automated in our platform. Alex is closer to managing it and might have a slightly different opinion.

Actually, Argo CD takes advantage of a large number of repositories, because it's a natural way for us to share the load. Argo CD had problems with monorepos, and that was a challenge because we didn't have a monorepo case at first. Then recently we got to the same situation — we basically have one now — and I think in the last two releases we had to rework some parts of Argo CD to support the monorepo case. And just to clarify, a monorepo is the scenario where you have one repository with hundreds of applications in it.
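One reason monorepos are expensive for a GitOps tool is that, naively, every commit forces it to re-render manifests for every application in the repository. A common optimization — sketched below as the general idea, not Argo CD's actual implementation — is to map each application to its directory and only refresh the apps whose files changed in a given commit:

```python
def apps_affected(changed_files: list[str], app_dirs: dict[str, str]) -> set[str]:
    """Return the apps whose manifest directory contains a changed file.

    app_dirs maps an application name to its directory inside the monorepo.
    Illustrative only -- real tools also handle shared libraries, symlinks, etc.
    """
    affected = set()
    for app, directory in app_dirs.items():
        prefix = directory.rstrip("/") + "/"
        if any(f.startswith(prefix) for f in changed_files):
            affected.add(app)
    return affected

apps = {"billing": "apps/billing", "auth": "apps/auth", "web": "apps/web"}
print(apps_affected(["apps/auth/deployment.yaml", "README.md"], apps))
# {'auth'}
```

With path filtering like this, a commit touching one app's directory triggers one refresh instead of hundreds.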
Yeah, and the support wasn't perfect before. Now it's much better.

Yeah, yeah. I'm actually a fan of the polyrepo approach — having a few repos. But that's not necessarily the best way, it's just a way of doing it.

Yeah, two completely different use cases.

So it's good to hear you're adding more support — or better support — for that, at least in terms of scale.

It's one of those things you run into when you start getting the project out there in the community. We don't use a monorepo, so we didn't really look much into that use case. Then you get it out to a larger audience and suddenly — hey, these guys are doing something different that we're not doing. And all of that helps improve the project and the product. So it's all good.

So you had this need and you built all these tools. Let me ask this question a different way — as I was asking it, I found a better way to ask it. How into GitOps is Intuit? Was GitOps something where you went: oh, interesting, the users are now using Argo CD for GitOps? Was it surprising, or was it always an end goal to have Argo CD be a delivery mechanism for the whole GitOps practice? Or is it just a byproduct?

It was GitOps from the beginning. We realized — I think the reason we chose it is that we didn't want to build things that you get for free from GitOps. For example, we didn't want to introduce a database with a history of changes, because Git already gives you that and we'd have to develop and run it ourselves. And we learned from what came before: before we started building Argo CD, yes, we looked at what already existed. We had two other candidates.
One was Spinnaker and one was Flux. Spinnaker is not GitOps, and — I shouldn't say we didn't like it — we just didn't think it was the best fit, because Spinnaker was proposing to introduce an abstraction on top of Kubernetes, which is itself an abstraction. We didn't want abstraction on top of abstraction, and we had learned that that's really difficult, because Kubernetes is changing a lot. So we didn't want to abstract Kubernetes away from developers. That was one thing. On the other hand, we really liked the approach implemented by Flux, but we couldn't just go ahead and use it, because Flux didn't have a model to serve a lot of teams without asking them to already know Kubernetes. That's why we had to combine the two, and Argo CD was created. So yes, it was GitOps from the beginning.

Okay — so essentially you built the tool with GitOps in mind.

Yeah, pretty much. We consciously decided that we wanted to build GitOps as a service.

Yeah, all interesting. I'm not sure if we're going to talk about GitOps itself, but — and maybe I'm simplifying it — in my mind GitOps is tightly coupled with Kubernetes. It's maybe not the most popular opinion, but the idea of storing manifests, or some declarative definition, in Git, with an automated way to apply it, has existed forever. It's just extremely hard to do — easy to say, difficult to implement. Then Kubernetes got created and made that problem almost trivial. With Kubernetes, without any other tools, you just have the kubectl commands diff and apply, and they do exactly the same work — except it's not super convenient. That's what we realized, and we decided it makes sense to build an application that does kubectl apply and kubectl diff for you, but makes it convenient and easy to use.
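Conceptually, the "kubectl diff plus kubectl apply" loop Alex describes boils down to comparing the desired state rendered from Git with the live state in the cluster. Here is a toy sketch of that comparison, with plain dicts standing in for Kubernetes objects — this is the idea, not Argo CD's actual code:

```python
def diff_states(desired: dict, live: dict) -> dict:
    """Classify objects, keyed by (kind, namespace, name), the way a GitOps
    engine conceptually would before syncing: create what exists only in Git,
    update what differs, prune what exists only in the cluster."""
    return {
        "create": sorted(k for k in desired if k not in live),
        "update": sorted(k for k in desired if k in live and desired[k] != live[k]),
        "prune": sorted(k for k in live if k not in desired),
    }

desired = {
    ("Deployment", "demo", "api"): {"replicas": 3},
    ("Service", "demo", "api"): {"port": 80},
}
live = {
    ("Deployment", "demo", "api"): {"replicas": 2},  # drifted from Git
    ("ConfigMap", "demo", "old"): {},                # removed from Git
}
print(diff_states(desired, live))
```

A GitOps engine then runs this comparison continuously and applies the resulting plan — that's the "automated way to apply" part that is hard to build by hand but nearly free on Kubernetes.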
Right? And that's, I know that our product managers do not like the definition, but we kind of sometimes call Argo CD a glorified kubectl apply. Don't say it. Don't say it. And, I mean, as I said, if you try to repeat it manually, you will have to go on a long journey. So it does bring some advantages to the table. Yeah, and also what's really cool about Argo, you guys were talking about the UI. I think you guys inadvertently created a great developer UI without even trying with Argo CD, right? You needed a way to visualize it, and it's actually a good UI, from the application side at least, for Kubernetes. So that was another thing. I actually like using the Argo CD UI versus something like Octant or even the OpenShift UI, right? I'll flip back and forth, because I'm like, oh, I want to see a representation of my app, not necessarily its individual components. And so that's just another thing that was really cool about Argo CD. I can share some other thoughts about the UI. It's kind of a slippery slope. We didn't want to transform Argo CD into a Kubernetes dashboard, because that's not the main focus. Yeah, yeah. What we think about when adding a new feature is: if we already have to collect the data to do GitOps, then it's worth showing it, because it's so cheap. We just need to build the UI to show it. If, let's say, we'd need to build something new just to start collecting the data, then maybe no. One example: we could start collecting metrics from Kubernetes and show the user information about CPU usage and memory usage in the UI. But I think that's too much. Maybe we should let Grafana and Prometheus do that instead. Yeah. Well, yeah, because Argo CD for the most part is stateless, right?
You just use etcd, essentially, on the backend. When you deploy Argo CD, it's just, you know, Redis as a cache and then the controllers, really. Because the idea is everything should be stored in Git anyway. So I should be able to spin up another Argo CD, apply those manifests, and I just get my app back. So there's not really anything to store, which makes it lightweight, right? And then there's something that a lot of users we talk to like about all four Argo projects: they're not trying to be everything. You know, they're building blocks. You can take Argo CD, Argo Rollouts, Argo Workflows, Argo Events, use them individually or put them together, and build whatever platform your needs happen to call for. But we're not trying to combine all four projects into one monolith. Like Alex said about not trying to do too much monitoring, it's doing what makes sense without getting too big and bloated, keeping them as building blocks. Yeah, we actually got some questions here. Let me see if I can combine a few of them that came in. So someone asked, you know, there are things like sync waves and hooks. Are there any plans for other modular tasks, for example pipeline abstractions or other edge use cases? So, for example, sync waves are there because, you know, stateful applications sometimes just need an order, right? You can't just apply things all at once. So are there any other edge use cases that you guys are looking at to incorporate into Argo CD? I think we keep trying to solve the problem of managing Argo CD itself. Basically, I think we're more or less feature-complete in terms of syncing resources within applications.
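For ordering within a single application, sync waves boil down to sorting resources by an annotation before applying them. A minimal sketch, assuming nothing beyond that sort (the `argocd.argoproj.io/sync-wave` annotation name is real; the resources here are made up):

```python
SYNC_WAVE = "argocd.argoproj.io/sync-wave"

def order_by_wave(resources):
    """Sort resources into sync order: ascending wave number,
    where a missing annotation means wave 0."""
    def wave(res):
        annotations = res.get("metadata", {}).get("annotations", {})
        return int(annotations.get(SYNC_WAVE, "0"))
    return sorted(resources, key=wave)

resources = [
    {"kind": "Deployment", "metadata": {"annotations": {SYNC_WAVE: "1"}}},
    {"kind": "ConfigMap", "metadata": {}},   # no annotation: wave 0, goes first
    {"kind": "Job", "metadata": {"annotations": {SYNC_WAVE: "2"}}},
]
print([r["kind"] for r in order_by_wave(resources)])
# → ['ConfigMap', 'Deployment', 'Job']
```

That ordering is exactly what a stateful app needs: config first, then the workload, then anything that depends on it.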
But we keep getting questions about how you orchestrate syncing of multiple applications at the same time. What if you have such a big application that it consists of independent blocks? And there, we don't have a good answer. We came up with a kind of pattern: you can create an Argo CD application that creates applications, and then try to use sync waves to orchestrate syncing of those applications in a particular order. And that's just too complex. We feel first-class support is needed for that use case. So we started, together with Red Hat, to build the ApplicationSet project. ApplicationSet helps you create applications. To give a concrete example, let's say you need to create the exact same set of resources in each and every cluster. In the Argo CD world, you would have to create an application per cluster, and that's a lot of manual work. With ApplicationSet, you can create a single resource that describes what kind of application you want in each and every cluster, and it will do it for you. And I guess the next step is to add more, either into Argo CD itself or into ApplicationSet: features that let you define how you want to sync all these applications. You might split them into waves. You might say, I want to sync all of my staging applications first, then wait, and then sync all production applications, but with some kind of, you know, a form of canary deployment of your changes. And you can split your production clusters into waves as well. Yeah, actually I was one of those voices, right? I know that Red Hat engineers have worked on ApplicationSet. And the first thing I asked was, it'd be really cool if I could do waves with ApplicationSets. So treat applications almost like components, right? Like how you treat objects. So in an Argo CD application, there are objects that sit under the application.
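The "one template, one Application per cluster" idea Alex describes can be sketched as a simple template expansion. This is not the ApplicationSet controller's code, and the repo URL, cluster names, and field subset here are illustrative only; it just shows the shape of the expansion:

```python
def expand_appset(template, clusters):
    """Toy ApplicationSet-style generator: render one Argo CD
    Application resource per target cluster from a single template."""
    apps = []
    for cluster in clusters:
        apps.append({
            "apiVersion": "argoproj.io/v1alpha1",
            "kind": "Application",
            "metadata": {"name": f"{template['name']}-{cluster['name']}"},
            "spec": {
                "source": template["source"],
                "destination": {
                    "server": cluster["server"],
                    "namespace": template["namespace"],
                },
            },
        })
    return apps

apps = expand_appset(
    {"name": "guestbook", "namespace": "default",
     "source": {"repoURL": "https://example.com/apps.git", "path": "guestbook"}},
    [{"name": "staging", "server": "https://staging.example.com"},
     {"name": "prod", "server": "https://prod.example.com"}],
)
print([a["metadata"]["name"] for a in apps])
# → ['guestbook-staging', 'guestbook-prod']
```

The orchestration feature discussed next would then decide in what order those generated Applications get synced.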
Now I almost want to treat the application the same way I would a deployment, right? I want to be able to deploy the first application and then, using ApplicationSets, deploy the second one, and have kind of a wave, or a canary, like you said. That's really cool. And that's something, again, you release something and all these people have all these ideas. Like, I wish I could do this, I wish I could do that with ApplicationSets. It's just another one of those cases where you had a simple use case and all of a sudden there are all these ideas that pop out of it. Yeah. And it's on the roadmap. We're moving step by step, and eventually I'm pretty sure Argo CD will get that feature. Really cool. Let's see here. So someone actually asked about patching deployments, but I'll replace that question with my own question, Alex. I asked this right before we went live. You know, applying manifests using kubectl, there are some ramifications of just using apply. There are actually some weird cases that happen with apply versus replace, if people don't know about them. So can you first explain what the three-way diff in Argo CD is, for those who don't know, and then how it's used to keep the desired state and the running state in sync? And just to clarify, we did not invent the three-way diff. It's in Kubernetes. This is the way Kubernetes offers developers to manage resources. The idea is that you can have a file, doesn't matter where, in Git or just on your laptop, and that file has the YAML or JSON that explains what you want to have in a cluster. You can also have that same resource already existing in the cluster.
And at the same time, as part of that resource, you can have a third state, which represents what you wanted to have last time. So just to clarify, let's say today you want to create a deployment. You would create a file and then kubectl apply that file, and it will create the resource. And that resource will have an annotation that records what you meant to have in the cluster. Then tomorrow some developer goes and changes an image, for example, without you knowing. You can run kubectl diff and feed it the file stored on your disk. Basically, kubectl will compare what you meant to have last time, what you want to have now, and what you really have, and it will show you the difference. Yeah, I'm trying to explain why it's so complex, why you don't just compare two states. There are edge cases. For example, let's say you don't care how many replicas your deployment has, because you have an HPA and the HPA manages it for you. In this case, you can simply not specify the number of replicas in your file stored in Git. That basically adds some flexibility: that particular field can be managed imperatively. But let's say you change your mind, and you introduce that field in a new version of your application. Then Argo CD or kubectl will notice that you actually want to manage that field, and it will show a diff if the field is not what you want. Or maybe another example: let's say you used to manage the number of replicas in Git and you want to switch to an HPA. If you delete that field from Git, it is still stored in the annotation that records your previous intention. I know it's complex, but basically- Yeah, it is, it is. Kubernetes basically stores the set of fields that you wanted to specify last time.
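The HPA example can be made concrete with a toy three-way merge over flat dicts. Real kubectl uses a strategic merge patch and is far more involved; this only illustrates how the last-applied record decides which fields count as "managed":

```python
def three_way_merge(last_applied, desired, live):
    """Toy three-way merge over flat dicts: fields in `desired` win;
    fields that were in `last_applied` but dropped from `desired`
    are removed; everything else keeps its live (cluster) value."""
    result = dict(live)
    for field in last_applied:
        if field not in desired:
            result.pop(field, None)   # we stopped managing this field
    result.update(desired)            # managed fields take desired values
    return result

live = {"image": "app:v1", "replicas": 5}   # an HPA set replicas to 5

# replicas was never declared in Git, so the autoscaler's value survives:
print(three_way_merge({"image": "app:v1"}, {"image": "app:v2"}, live))
# → {'image': 'app:v2', 'replicas': 5}

# replicas used to be in Git; dropping it removes our claim on the
# field (in this toy model), handing it back to the autoscaler:
print(three_way_merge({"image": "app:v1", "replicas": 3},
                      {"image": "app:v2"}, live))
# → {'image': 'app:v2'}
```

Comparing only two states couldn't distinguish "I never managed replicas" from "I stopped managing replicas", which is exactly the edge case Alex describes.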
And if you change your mind, it will compare what you want to have today, what you really have, and what you wanted last time. That gives enough information about the change, like what exactly you want to change. And that change can be replacing an old value with a new value, or removing something that used to be managed. It's complex enough that I had to wrap my head around it. Basically, Argo CD uses the exact same logic as kubectl. And that's important, because if suddenly Argo CD, I don't know, crashes and doesn't work, you can just keep managing your application using kubectl. We're literally trying to make it compatible with kubectl. kubectl has some features, for example, you can apply a folder of manifests, plus you can specify a label. You could say, okay, I want to apply all the manifests, and I know all the manifests of my application are labeled with some label. This way you can detect resources that were deleted from Git, that are no longer managed in Git. That's a feature of kubectl, and Argo CD is trying to make it as easy as possible to switch back to kubectl; it also applies similar labels. Yeah. Again, you're saying something that Hendrik doesn't want you to say: it's like a glorified kubectl. Well, I think also a perfect example of the three-way diff is, let's say you have a deployment and on the command line you do a kubectl patch of the containers section, which is an array, by the way, to change the image. When you run an Argo CD sync, it'll take into account that last-applied state, right? You changed the container image to something else, and then with your desired state, all of a sudden you could have two containers in the deployment, and it would always be out of sync, only because an apply can append to that array of containers rather than replacing it, right?
So it takes that into account, because it's an apply, not a replace. Argo CD can do a replace, by the way. You just have to be careful with it, because if you have stateful applications with PVs, it'll delete the PV when it deletes and recreates the resource, so you have to be a little careful. But I think if you want to learn more about declarative management of resources, as with Argo CD, you can just read the official Kubernetes documentation, and it's basically a one-to-one mapping. Argo CD uses the exact same tools. We even used to just fork and exec kubectl under the hood to get the diff and to apply changes. And just for performance reasons, we switched to a library which is part of kubectl. The Kubernetes community refactored kubectl and converted parts of it into a library, and that's why we could take advantage of it. We import code which is used by kubectl and execute it, to make things quicker. Which is actually part of the reason why Argo CD is so easy to use. It stays true to the platform, right? It stays really close to the Kubernetes platform. So we have about 10 minutes left. I want to give the floor to either you, Alex, or Hendrik. Whatever you guys want to talk about for the last 10 minutes. Hendrik, I don't know if you want to talk about the roadmap, or what you guys are doing. It's free talk from here on out for the last few moments of the hour. Yeah, we can talk about anything, really. I mean, that's why we're here. Yeah. So on the roadmap side, I guess there are two parts to that, really. One, we're working with the community, you know, partners like Red Hat, to build out features for the Argo projects. And like Alex said, in some ways we're now covering our use cases at Intuit for the most part.
So the more input we can get from y'all that are listening to this about what your use cases are, that would help tremendously in shaping the future roadmap of Argo CD. Internally, we're also focusing a lot on Argo Rollouts. That's a little bit of a missing key still in our progressive delivery strategy. How we do progressive delivery at Intuit is getting to a point where it's all fully using Argo Rollouts with canary analysis, making sure the full progressive delivery process is automated end to end. So we're looking at Rollouts as well. There are some cool things being worked on there, and we know a lot of people use Rollouts together with Argo CD. So there's work being done on how to bring them closer together without building that monolith like I said before. There's an extension framework being worked on where we can basically pull anything in and show it in Argo CD, and we're using Rollouts as one of the first examples of that. Just getting the parts together without breaking that building-block strategy. And I remember we did the external user survey for Argo CD a few months ago, and a lot of the people who responded are using or looking at Argo Rollouts. So that's definitely one of the things we're looking at, how we bring those two more closely together. Internally, we also have some other projects we're working on that are still being validated, both around GitOps and Argo CD and on the workflow side, that we're looking at open sourcing later this year. So there's a lot of interesting additional Argo-related stuff coming, but it's still a little bit too early to go into details. Hopefully, as we get closer to KubeCon, we're hoping to share some more of that as well.
And since we're talking about features, yeah, go ahead, sorry, go ahead. No, no, go ahead. I was just gonna talk about KubeCon, so I'll let you talk first and then I'll talk about it. I won't take all five minutes, but I just wanted to use the chance to market a little bit one new feature that I'm passionate about. So as we mentioned, Argo CD kind of shines when you need to serve the needs of multiple teams, and you have a platform team that manages Argo CD and everyone else uses it. At the same time, we heard feedback from users who have no such need. They don't need to serve many other teams. They just want a tool that manages resources in the cluster, and they basically just want to run it and use it. And for these users, some features of Argo CD, like Dex, SSO, RBAC, they view as an unnecessary set of features. Like, why do you have to configure an admin? If you are already the admin, you have Kubernetes RBAC that protects your cluster; why set up one more layer of protection? So this is what we're trying to improve. Very likely in this release, we're going to have a new way, like a new distribution of Argo CD. You can install it, and it won't have any of the multi-tenancy features. Basically, it will have just the backend part and no API server. And that's why the code name of the feature is headless Argo CD. The name might change; we don't really like it, but yeah. Basically, you can run headless Argo CD, and we took all the existing client-facing features, the web UI and the CLI part, and packaged them into the client-facing CLI. So it's the same experience, but you don't have the API server. All you need is Kubernetes access. Basically, your kubectl should work locally on your laptop. And you can run the Argo CD dashboard command, for example, and it starts the UI locally.
Or you can just execute all the CLI commands as usual, but it won't try to talk to an API server; it will talk directly to Kubernetes. And I'm really excited about that feature. I think it will be interesting for Kubernetes admins, for infrastructure Kubernetes clusters. And coming from a telco background, I can kind of see how this lighter-weight Argo, that's more isolated, for a single small cluster, could be very useful for IoT or telco use cases. Obviously at Intuit, we're not gonna roll out 5G. But it would be really interesting to hear from the user community as well, and if anyone has IoT use cases or telco use cases, ping us. I think this, under air quotes, headless Argo has some really good promise, as well as opening up some new use cases that we haven't been able to solve with Argo before. Yeah, well, we'll definitely talk about a new name eventually, I guess. I guess we'll talk about helmetless Argo. I don't know how you would draw the logo, the same Argo but without the helmet, I'm not sure. Cool. Yeah, so we're just about done with time. I am gonna talk about KubeCon a little bit, right? So KubeCon is just around the corner. We, the GitOps working group, are actually putting on GitOpsCon again, right? And so armless Argo, that's just funny, someone wrote that in chat. We're running GitOpsCon, right? The CFP is open. This is gonna be a hybrid event, right? So if you can't come to Los Angeles, if you are in need of a hotel, I have a room here since I'm in LA, but if you can't come to LA, there will be a virtual part of it. So don't think you can't submit a CFP. I put the CFP link in the chat here on Twitch, so go ahead and check that out. CFPs close not this Sunday, but next Sunday. So get those in. So with that, I just wanna thank you, Hendrik, Alex. Thank you for sitting here chatting with us, putting up with all my Argo CD questions.
And I haven't made reservations yet for KubeCon. So do you take reservations for that spare room? Oh, there you go. Yeah, exactly. It includes breakfast too. So I'll even have a continental breakfast there. It's just like a Marriott. And also, if you have a few seconds, I'd like to plug that there's actually gonna be an ArgoCon in December. Yep. So I just wanna get that out as well. It's gonna be a single-day event in San Francisco on December 8th. The CFP is open and the program is still being put together, but for those who are in the area or can travel for a single-day event, we'll have that. The website is up and registration is open. So if you wanna learn more about Argo, that'll be a great time to come hang out with us. Definitely, I'll be there, right? I'm in LA, it's a short flight for me, and I love the Bay Area, great food. So yeah, thank you. If I'm driving, you'll probably get there faster than I will. Oh, okay, all right. All right, cool, cool. So yeah, thank you everyone for joining. Again, thank you, Hendrik and Alexander. And actually, congratulations. Bobby, by the way, our intern, ran this whole show for us solo. He's behind the scenes, so everyone wave at Bobby. And yeah, you can see us out, Bobby. Thanks, thank you everyone. Thanks everyone, thanks for having us. It's a pleasure.