Hello, and welcome everybody. Welcome to Cloud Native Live, where we dive into the code behind cloud native. I'm Taylor Dolezal, head of ecosystem at the CNCF, where I work closely with teams as they navigate their cloud native journeys. Every week we bring a new set of presenters to show how to work with cloud native technologies. They will build things, they will break things, and they will answer your questions. In today's session, we have several folks from the Argo community. Howdy, everybody. They will be covering Argo's vibrant ecosystem and some community and maintainer Q&A. This is an official live stream of the CNCF, and as such is subject to the CNCF Code of Conduct. Please don't add anything to the chat or questions that would be in violation of that Code of Conduct. Basically, please be excellent to one another and respectful of all fellow participants and presenters. With that, I would love to hand it over to the team to kick off today's presentation. Folks, take it away. It'd be great to hear from you. Thank you. Maybe we could do a brief introduction and go around. I've got the slides up so you can see who we are. But my name is Dan Garfield. I'm a co-founder and chief open source officer at Codefresh and very proudly an Argo maintainer, though a very minor one, if that. The real guy to talk to is Hong, because you've been around since the beginning. Thank you, Dan. So my name is Hong Wang. I'm the co-founder and CEO at Akuity. My teammates and I started the Argo project about six, seven years ago, and it's been such an amazing journey. I'm really happy to be here and happy to share our current progress. Thank you. I'll hand over to Julie. Hi, I'm Julie. I'm a staff software engineer at Intuit, and I am working on Argo Workflows and Argo Events. I'm the least senior Argo person here, but I will do my best. I've got some support from my colleague here, Bala. Hey, I'm Bala. I'm an Argo Workflows maintainer.
I'm working at Intuit as a staff engineer. I'm a core maintainer on Argo Workflows and Argo Events. Yeah, perfect. I'll give you support, Julie. Thanks, Bala. Well, you do a lot of the heavy lifting on the project, that's for sure. So we're going to talk about some project updates that are going on today. We're going to do some demos today. But this being the CNCF livestream, there are maybe some people that don't know what the Argo project is. So we'll do a very brief introduction for those people. The Argo project consists of four main tools, four projects, essentially. Argo Workflows, which is a general-purpose workflow engine for Kubernetes, and it's used very popularly for a lot of data science. We'll talk a little bit more about that project and who is using it. Argo Events, which works primarily with Argo Workflows, triggering events and the actions that take place, and it works really well as sort of an event bus for Kubernetes. Argo CD, which is probably the world's most popular GitOps tool and is really effective for deploying and managing deployments on Kubernetes. And then Argo Rollouts, which is for doing progressive delivery: blue-green deployments and canary releases using Kubernetes. All of these projects can be used completely by themselves, or they can be used together, with some integration between Argo CD and Argo Rollouts and some integration between Argo Workflows and Argo Events. But if you're not using Argo CD, you can still do canary releases using Argo Rollouts; you don't need to use Argo CD. If you're using Argo CD, you don't have to be doing progressive delivery. You don't have to be doing the workflows. If you're using Workflows, you don't have to be using Argo Events to trigger your workflows. You can be using cron or other things like that. So these are really architected in such a way to be very flexible, and you can pick them up and use them for the purpose they're designed for. We joined the CNCF as a project in 2018.
And we moved into Sandbox. We've migrated now into, what's the next level after Sandbox? It just escaped my head. But incubation. Incubation, thank you. And then we're moving forward towards graduation. The project has been incredibly popular, and we are actually now one of the fastest growing and most popular projects in the CNCF. There was a momentum report that just came out of the CNCF that showed how much development activity there is, and Argo was just behind two projects: Kubernetes, and I think it was OpenTelemetry. Oh yeah, it was OpenTelemetry. Yeah, it was very, very close. So, a very popular project. When we look at GitHub stars, over 20,000 GitHub stars and growing very rapidly. The project is maintained primarily by five corporate sponsors: Akuity, which Hong represents; BlackRock, we don't have a representative from them today; Codefresh, who I represent; Intuit, and we have Bala and Julie joining from Intuit; and then Red Hat. And of course, there is a huge sea of community maintainers that have been independent and operating on the project for a very long time, and so they're critically important. So we have this wonderful partnership between these companies, as well as all these independent contributors. And Argo is trusted by Google, NVIDIA, Capital One, Tesla. I mean, the largest companies in the world are using Argo. And not just the largest companies, but also the smallest companies are using Argo effectively. You don't need to be huge. We've got a really exciting event coming up, ArgoCon, and I think pretty much all my slides have it sitting up in the top right corner. It's going to be next month, September 19th through the 21st. And we have a huge number of speakers that are coming to speak at that event and share their use cases, share what they've been doing. And you don't want to miss it. So while the live stream is going on, go Google ArgoCon, go register. If you're attending online, it's a free event.
If you're attending in person, there's a small fee. We're all going to be there in person. We would love to see you there and meet you in person if you're able to make it. But that's the reason for the season. That's why we're getting together to talk today: because we do have ArgoCon coming up. It's very exciting. One other thing I was just going to mention about the Argo project: many people are familiar with Net Promoter Score. Basically, it's a measurement of how much your users like or dislike your product. And when we ran the survey for Argo Workflows and Argo CD, they were both over 60. And over 20 is considered a very good NPS score; anything over 30 is considered excellent. Over 60 is like you could start a religion off of the love for the product. That's the level that Argo is at. So the community is just incredibly passionate. They love using Argo, whether they're using Argo Workflows for doing data science or other kinds of general-purpose workflows, or using Argo CD to deploy their software. It really simplifies day-to-day operations with Kubernetes and makes life a lot easier. So in summation, go register for ArgoCon. We want to see you there. There's going to be cool swag, there'll be cool speakers, and there are going to be some interesting announcements. We'll be talking about the project, and it's going to be really, really exciting. So with that brief intro out of the way, I think we wanted to maybe start with Argo Workflows. So Julie and Bala, I'll pass it over to you. Thank you, Dan. Let's see. Do you want to stop sharing your screen and I'll share mine? Sure. So can you guys see my screen? Looking good. All right. Yeah, so I'm just going to talk a little bit about Argo Workflows and some of the new things we're releasing as part of version 3.4, which we are targeting for ArgoCon itself. So I think that's September 19th. Just some GitHub stats here.
Argo Workflows now has 2,400 forks and 11,600 stars, just for a little more bragging. I think probably most of the audience is pretty familiar with Workflows, but I just want to start with a simple example here. Basically, it's a pipeline of steps where each step is a different pod running in Kubernetes. And you could have either a linear pipeline of steps, or it could be a directed acyclic graph like is shown here. And you can have pods that pass outputs to other pods to use as inputs. And then you can have the whole thing triggered on a cron schedule or by a webhook. Or if you integrate Argo Events, then you actually have access to 21 different potential triggers, like Kafka. And I think that's something that sets us apart from our competitors. OK, so some of the new features that are coming out as part of version 3.4 involve artifacts, and I'm going to talk a little bit about those. Artifacts are basically files produced by the steps of your workflow that you're putting in cloud storage, generally. Those could be temporary files, where maybe one pod produces a file and it gets consumed by other pods, or maybe it's some final artifact produced by your workflow that you want to hold on to. And this can be all sorts of different types of files, including database dumps or Spark files, in addition to text files, et cetera. OK, so some of the new artifact-related features coming out in 3.4 are the ability to visualize artifacts in the UI and to automatically garbage collect your artifacts as well. So let me just show you some of the new visualization capabilities. Basically, this is showing in the UI a workflow that's been run. And what's been added to this graph is all these artifacts that were produced by this particular step in the workflow. And you can do a Command-plus just to make it a little bit easier to see. Thank you. Perfect.
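As a concrete sketch of the pipeline shape Julie describes, here is a minimal Workflow where one step's output artifact feeds another step in a DAG. The names, image, and paths are illustrative, not taken from the demo:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: dag-demo-       # each run gets a unique name
spec:
  entrypoint: main
  templates:
    - name: main
      dag:                      # directed acyclic graph of tasks
        tasks:
          - name: generate
            template: produce-file
          - name: consume
            depends: generate   # runs only after generate finishes
            template: print-file
            arguments:
              artifacts:
                - name: data
                  from: "{{tasks.generate.outputs.artifacts.data}}"
    - name: produce-file
      container:
        image: alpine:3.18
        command: [sh, -c]
        args: ["echo hello > /tmp/data.txt"]
      outputs:
        artifacts:
          - name: data
            path: /tmp/data.txt   # uploaded to the configured artifact repository
    - name: print-file
      inputs:
        artifacts:
          - name: data
            path: /tmp/data.txt   # downloaded before the container starts
      container:
        image: alpine:3.18
        command: [cat, /tmp/data.txt]
```

Each task becomes its own pod, and the artifact passes through the workflow's configured artifact repository (S3, GCS, etc.) rather than a shared volume.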
Yeah, OK, so now if you've got a file produced with a well-known extension, like a text file, for example, there's an iframe that opens up where you can see what that file looks like. A JSON file, a PNG file. And then also, if your workflow step is actually producing a web directory, that's fully navigable over here. So in this example, the step actually produced this graph image and then embedded it into an HTML page, and we're showing that here. And this HTML page actually has sublinks, so you can navigate through those over here too. Yeah, so that's basically the new visualization capability. Let's see. And then the other feature related to artifacts that's coming out as part of 3.4 is artifact garbage collection. The issue we were trying to address is that users had to manually delete their own artifacts by going directly to S3 or whatever their storage engine is. And if they don't delete them right away, then they're basically paying for storage that they don't need. So now we've enabled you to specify in your workflow that you actually want these artifacts garbage collected automatically, either when the workflow is completed or when the workflow is deleted. And you can define that either on the workflow level up here or on the artifact level down here. In this example, let's say your workflow is producing a whole bunch of artifacts that maybe you don't need to hold on to long term, but it produces some final artifact that you do want to keep. So you can set your overall strategy to delete the artifacts when the workflow is deleted, but then you can override it on the individual artifact level to say, actually, I want to hold on to this one. Currently, this is just implemented for S3 and Azure, but we're hoping that there will be community contributions to augment that.
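A minimal sketch of the pattern Julie describes: a workflow-level garbage-collection strategy with a per-artifact override to keep one final output. Names and the image are illustrative:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: artifact-gc-
spec:
  entrypoint: main
  artifactGC:
    strategy: OnWorkflowDeletion   # workflow-level default for every artifact
  templates:
    - name: main
      container:
        image: alpine:3.18
        command: [sh, -c]
        args: ["echo scratch > /tmp/temp.txt && echo result > /tmp/final.txt"]
      outputs:
        artifacts:
          - name: temp
            path: /tmp/temp.txt    # inherits OnWorkflowDeletion from above
          - name: final
            path: /tmp/final.txt
            artifactGC:
              strategy: Never      # per-artifact override: keep this one
```

When the Workflow object is deleted, `temp` is removed from the artifact store while `final` survives.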
And that's all that I have. Yeah, awesome. How far out do you think we are from the 3.4 release? Well, our plan is to release it at ArgoCon, or the same day, I guess. That's an extra good reason to come to ArgoCon: get the latest 3.4. Yeah, and the release candidate is already out. So if anyone wants to try it out and log some bugs, feel free. I know we'll do some Q&A, so people that are watching, feel free to throw questions in, and we'll try to take those as we go along. But then I think, Hong, you were planning to do a demo of the upcoming Argo CD release. Can you guys hear me? Yeah, we hear you great. All right, cool. So yeah, what an exciting moment. We have 2.4 already released for a while, but 2.5 is on the horizon. So today I will give a little bit of a demo of both. And here is the plan. For the people who don't know about Argo yet, which I think is highly unlikely, I'm still going to do a very basic example showing the core functionality and what that looks like. We'll go through the web terminal feature, which already shipped in 2.4. And there are three more brand-new features coming in 2.5, which we'll talk about, and we'll also do a quick demo of one of them. So first, let's go through the quick demo of the core functionality of Argo CD. This is a brand-new, empty Argo CD. So what you do is create a new application. That's easy. You can see: demo. Name the application. This is a new feature, and I'm not going to go deep into this concept; it's called application namespace. I will mention it a little with the 2.5 features soon. Project, we go with the default. And we want auto-create namespace. We will pick a repo URL; I will just use my personal one, which is on GitHub. And you can use the guestbook path. We want to deploy it. Cluster, we just deploy into the same cluster, the in-cluster one. We're using the namespace demo. That's it.
We check the boxes and create. Sorry, we created the application here. I think it's just trying to pull the image. OK, cool. So we got the application. Right now it is out of sync, meaning it has not been deployed yet. So you can always sync it. And after you sync it, you see this visualization of all the objects belonging to this application. It has a service object, it has a deployment object, and you can see the hidden objects also: the replica set, the endpoints, and those things. One thing we want to demo is the GitOps concept, quickly. I think a lot of people already know about it, but I will just quickly do it. So here, from the repository, you can tell the configured replica count for this deployment is three. I want to change it, so I will just edit it from GitHub. I will change that to five and save it. Normally, you should be using a PR. I think that's the better way: create a PR so you have a second pair of eyes on what you've changed, and when the PR is merged, it will be rolled out to the cluster. So from here, we can do a quick refresh, just to see the change. And you can tell it's out of sync, and we can quickly see what the target state is. Replica five is our target state; replica three will be overridden. This gives you confidence about what is going to be changed, and that is very important, especially when you handle a production system. And you can always do auto-sync by enabling it here, which means anything changed in Git will be automatically detected and applied. But here, we'll still do the manual thing. So right now, it has three. When we sync it again to the latest change, then we'll see five. And when you have five, but maybe you feel, OK, this is the wrong configuration, from Argo CD it's super easy to roll back. You can pick the previous version and click on the rollback, and we do the time travel.
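For reference, the application Hong builds through the UI can equivalently be declared as a manifest, which is how the GitOps patterns discussed later consume it. The repo URL here is a placeholder, and the `automated` block corresponds to the auto-sync option he mentions (the demo itself uses manual sync):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: demo
  namespace: argocd              # the Argo CD control-plane namespace
spec:
  project: default
  source:
    repoURL: https://github.com/your-user/example-apps.git  # placeholder
    path: guestbook
    targetRevision: HEAD
  destination:
    server: https://kubernetes.default.svc   # the same ("in-cluster") cluster
    namespace: demo
  syncPolicy:
    automated:
      selfHeal: true             # auto-sync: drift and Git changes are reconciled
    syncOptions:
      - CreateNamespace=true     # the auto-create-namespace checkbox
```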
So this is what we had before we deployed the new version. This is the before. So that's GitOps. Basically, the single source of truth is actually in Git, and Argo CD bridges the gap, taking the source of truth from Git and deploying it to your target cluster. And I want to go to the next demo, talking about the web terminal. This is an optional feature; you can enable it in 2.4 and later. What makes life easier is, every time you look at the system, you have the big overview about what's going on with the application, and you want to do quick troubleshooting on your pod. Now, you can click on the pod, and you will see this new tab called terminal. And you can directly go into the pod and do basically anything a terminal can do. You can run commands here; you can even run a lot of things to understand the container status. So that's very handy, especially when you want to quickly figure out what's going on there. This is, I would say, the flagship feature in 2.4, which is already available everywhere, but you need to enable it following the steps. Cool. That's the demo part. So I want to go back to talking about additional features in 2.5. Server-side apply as a sync option. This feature has been supported in Kubernetes since, I think, the 1.20 version, and now in Argo CD 2.5 we are going to make it available to users. Why do we have this server-side apply? For three big reasons. One is we want better interoperability with other controllers: when several controllers are managing the same object, how do you figure out the ownership of who is touching which field? The second thing is better resource conflict management, as I just mentioned. And especially the last bullet on the list: better CRD support.
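A sketch of the opt-in configuration for the web terminal Hong mentions, assuming the 2.4 setup: enable exec in `argocd-cm` and grant the `exec, create` RBAC action. The role and user names are illustrative:

```yaml
# argocd-cm: turn the terminal feature on
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cm
  namespace: argocd
  labels:
    app.kubernetes.io/part-of: argocd
data:
  exec.enabled: "true"
---
# argocd-rbac-cm: grant exec/create to the roles allowed to open a terminal
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-rbac-cm
  namespace: argocd
  labels:
    app.kubernetes.io/part-of: argocd
data:
  policy.csv: |
    p, role:ops, exec, create, */*, allow     # "ops" is an illustrative role
    g, some-user@example.com, role:ops        # illustrative user mapping
```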
And we want to do a quick demo of what that looks like and what that means. So we already created one application. If you look at the detail, this application was actually created without enabling server-side apply. Let's just quickly create another one and compare them. So we'll name it demo-ssa, with the default project. We will do auto-create namespace and enable server-side apply this time. And we are deploying exactly the same version of the app: copy from my repository, go to the guestbook. So, demo-ssa, we want to create it in a new namespace. Create it. And I just want to sync it. When you sync it, you can see the server-side apply checkbox is checked. Basically, this is an application-level setting, but you can change it if you want. So when we synchronize, it's the same thing: five pods show up. Let me copy this field, and let's compare what the difference is here. This is the old one. Let's also bring it up to the latest version; it's also five. And this is the one with server-side apply. Let's go to the live manifest of the deployment in both. So one huge difference you can tell between with and without server-side apply is: do we have the last-applied-configuration annotation? Here, without server-side apply, you will see a big, lengthy string basically containing the whole spec of the deployment file. But when we're using server-side apply, that disappears. I think that's the symptom, sort of an additional benefit you get when server-side apply is working. But it's not, I would say, the true benefit; the true benefit is about how you are addressing ownership and conflicts. But I think for most of the people, the reason they get exposed to server-side apply as a sync option is because Kubernetes has this one-megabyte limit on every single Kubernetes object. One megabyte.
So when you really have a huge CRD or a huge config map, for example, because you have this last-applied-configuration annotation, the usable space for the object is actually just half most of the time, just like 500 kilobytes. Server-side apply addresses a lot of the corner cases where you have a much bigger object and you have to have it. With server-side apply, that is feasible. So that is a feature coming in 2.5 as part of the new Argo CD release. Now I will talk about the next two features. We don't have a demo yet, because the two features are in very active development right now; there is a lot of polish that has to be done before we can release them. So this is a very interesting feature. We call it applications outside the Argo CD namespace. It sounds very weird, but I can tell you the reason. Nowadays, a lot of Argo CD users are using very advanced patterns, which we call the App of Apps pattern or the ApplicationSet pattern. For those patterns to work, you have to deploy the object to the control-plane namespace. Basically, you are doing what I just did through the UI: you are creating applications yourself, in a way, using the CRD to create applications. So the target namespace is actually the Argo CD namespace itself. This worked great initially. However, as more and more people adopt this pattern, for the same Argo CD, you may have two different teams using the same Argo CD to manage their infrastructure or their applications. Just think about it: I'm using the App of Apps pattern, and you are also using the App of Apps pattern. That means there is a chance we collide with each other. Basically, I name my application with the same name as yours, like demo. So when I deploy it, I override your application, which is pretty bad. So this is an advanced multi-tenancy problem that we are trying to address.
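In manifest form, turning on the sync option Hong demonstrated is a one-line addition to the Application's `syncPolicy`. Everything else here is a placeholder:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: demo-ssa
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/your-user/example-apps.git  # placeholder
    path: guestbook
    targetRevision: HEAD
  destination:
    server: https://kubernetes.default.svc
    namespace: demo-ssa
  syncPolicy:
    syncOptions:
      # patch via server-side apply: field ownership is tracked by the API
      # server, and no last-applied-configuration annotation is written
      - ServerSideApply=true
```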
So what we do is, instead of only being able to create Application objects in the one namespace, Argo CD will allow you to configure multiple namespaces to monitor as the control plane. That means each team can configure their App of Apps pattern or their ApplicationSet objects in their own dedicated namespace, so no collision is possible. Additionally, there will be changes in the project concept. I know people understand using the AppProject for isolation in multi-tenancy, and that will also be able to map to the managed namespaces. So basically, everyone has their dedicated lane to use this advanced pattern. So this is very... I would like to add just a little bit to that, because I think you said it really well. The common pattern that we see people doing, when we talk about this App of Apps for people that aren't super familiar with it, is basically: anybody that adds files to this repo, this directory, gets to deploy to this target namespace. And what that means is, for all your users, you basically just configure the parent app for them, and then all they have to do is drop a file into GitHub or their Git provider, and it will be automatically synced. This is a pattern that's very popular, that people use all the time, and it works really well. So adding this multi-tenancy so that they don't have collisions in the namespace really simplifies things. It is a very, very highly requested feature. Yeah, this feature is not easy, I would say, because we had a lot of strong assumptions when we started the project. So if you guys really need it, maybe try it, but this will take time; we will make it very mature. This is a feature definitely for very advanced users, and you guys can consider it. Cool.
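A rough sketch of how this dedicated-lane setup is shaping up, assuming the 2.5 configuration keys; the namespaces, project, and repo details are illustrative and the syntax may change before release:

```yaml
# argocd-cmd-params-cm: tell Argo CD which extra namespaces to watch
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cmd-params-cm
  namespace: argocd
data:
  application.namespaces: team-a,team-b
---
# The AppProject maps the team to its allowed source namespace
apiVersion: argoproj.io/v1alpha1
kind: AppProject
metadata:
  name: team-a-project
  namespace: argocd
spec:
  sourceNamespaces:
    - team-a                 # Applications may live here instead of argocd
  sourceRepos:
    - "*"
  destinations:
    - server: https://kubernetes.default.svc
      namespace: "team-a-*"
---
# An Application living in the team's own namespace, not Argo CD's
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: demo                 # no longer collides with another team's "demo"
  namespace: team-a
spec:
  project: team-a-project
  source:
    repoURL: https://github.com/team-a/apps.git  # placeholder
    path: demo
    targetRevision: HEAD
  destination:
    server: https://kubernetes.default.svc
    namespace: team-a-demo
```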
Moving to the next feature. I think this is a feature that has been asked for a lot, because there are a lot of GitHub discussions talking about this use case. The feature name is multiple sources for applications, and it's being implemented right now. The whole idea is that for one application, you can get your source of truth from multiple sources, as the name implies. So why do we need that? I think one very solid use case is for Helm chart users. When you have a Helm chart, it's like a library, but you may have value files. Basically, you're trying to instantiate the Helm chart into manifests, but you want the chart to be more like a library, and you may have multiple value files here and there that you want to keep separate. Why? One use case is that a lot of Helm charts are published officially; like Elasticsearch or Redis, they just publish the official Helm chart. You can keep your value files in your personal repository, reference the Helm chart as a library, and combine them, and then it can get deployed by Argo CD. But Argo CD has to monitor both sides: whether the official Helm chart changed, and whether your value files have changed. The current workaround is a concept called an umbrella Helm chart. It's workable, it's a workaround, but it does come with additional overhead: you need a repository, you need to create this umbrella Helm chart, and you have an additional repository to be tracked and maintained by yourself. I think that's the overhead. So this is a feature that has been asked for by the community for quite a while, and we are taking the steps to get there; it will be coming in 2.5.
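A sketch of the multi-source Application shape being described, assuming the proposed 2.5 `sources` / `ref` syntax; the chart, versions, and repo URLs are illustrative, and the final syntax may differ at release:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: elasticsearch
  namespace: argocd
spec:
  project: default
  destination:
    server: https://kubernetes.default.svc
    namespace: elasticsearch
  sources:
    # Source 1: the official chart, consumed as a library
    - repoURL: https://helm.elastic.co           # illustrative chart repo
      chart: elasticsearch
      targetRevision: 8.5.1
      helm:
        valueFiles:
          - $values/elasticsearch/values.yaml    # file from the second source
    # Source 2: your own repo holding only the value files
    - repoURL: https://github.com/your-user/my-values.git  # placeholder
      targetRevision: main
      ref: values                                # referenced above as $values
```

Argo CD would watch both sources, so either a new chart version or an edit to your values file marks the app out of sync, with no umbrella chart repo to maintain.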
And I think the main reason we are going to be cautious is because, by the original design when we started Argo CD, we had this strong assumption of one single source and one destination. Breaking that, we have to make a lot of changes around the UI, the backend, and also how we pull the information and how we merge the information. But after trying to digest all the feedback from the community, we came to a proposal, and we are doing the implementation now. So look forward to this feature; I think this will definitely help a lot of Helm users in your day-to-day life. I think that's my topic today, and thank you for listening. Yeah, thanks, Hong. That was something that Sher was asking about right when you started the update. He was like, is that Helm values thing coming to 2.5? Good news: got you covered. Yeah. I actually used the Kustomize workaround a lot, where you can use Kustomize to reference a Helm chart and then just override the values that way, or specify a local file. But you're right. I mean, it requires me to set up my repo and set up this additional tooling, instead of just saying, look, I just want to use this Helm chart, I want to override a couple of values locally, and do it that way. This has probably been the most upvoted feature that people have been asking for for the last eight months, nine months, maybe the whole last year. So I appreciate you guys taking the lead on getting that implemented, because it's very, very desired. Yeah, credit to the community. We really appreciate all the feedback. I think that's the whole reason we're making the whole Argo ecosystem better: most of the features are driven by the community. We didn't come up with those ourselves; for example, auto-sync and self-heal, it's all from real use cases. Those are the practical problems people are trying to address.
We are listening to that feedback, and we are incorporating it step by step into our features. So I'm actually personally proud that after three, four years of development, Argo is still such a vibrant community. I mean, believe it or not, I would say a lot of open source communities kind of slow down, stabilize, in terms of contributions, but the reality for Argo is that it is still accelerating. That is an amazing situation for us, and we are truly grateful that all the community members participate, give us feedback, and even contribute PRs. That is amazing. Thank you, thank you, everyone. Yeah, anybody listening, if you want to get involved, we're in the CNCF Slack: there's an Argo CD channel, there's an Argo Workflows channel, there's an Argo Events channel, an Argo Rollouts channel. We also do contributor experience meetings bi-weekly, and then we do maintainer meetings bi-weekly, and the contributor experience meetings are very open. So people come from the community and say, hey, I have an idea, could we solve a problem in this way? And the maintainers are all there, so you can sit and bounce ideas off them if you need to and do a little brainstorm. We have a lot of people come with, hey, here's a proposal I want to talk about, here's how I want to implement this. They collect feedback, we get through the architecting, and then people can get onto the implementation and help out with it. And that focus on contributor experience is something that, to Hong's point, I think has been really successful in this project, and it's one of the reasons that we are very close with Kubernetes for the most maintainers of any CNCF project.
And from a contributor standpoint, we're up there at number two, number three for volume of new contributors, people coming into the project, and we have so many independent contributors from all over the world that do so much work. So with that, I mean, those are the project updates, that's the intro to the project, and I think we can kind of move into the discussion portion, right? We can just kind of chat about the project, and we'd love to take questions. You've got a couple of maintainers on the call, so we're happy to take any discussion. And I guess, Taylor, do you want to lead us out, maybe? How do you want us to arrange ourselves? Yeah, so like Dan said, if anyone has any questions, please feel free to surface those. I just had a couple before closing some things out, to give people time to ask questions. I used to work at HashiCorp, and so I came with that perspective of people trying to instrument things or set up infrastructure as code. For folks that have already set up workloads within Kubernetes or other cloud native workflows, do you have any recommendations on how to adjust or start to implement GitOps within those solutions? Unfortunately, there's no automatic "fix this for me" or "do this for me" yet. Yet. But on the path to get there, are there any tips or tricks that you have for folks to make that a little bit easier for them? I'm sure everybody does. I'll speak quickly. I've noticed that when most people start moving into a GitOps mindset, they're used to having kind of a manual deploy button of some kind, or a manual deployment trigger, and maybe they're doing the deployment manually, or maybe it's running a pipeline or something like that.
And so what I see a lot of people do early on is they'll create their applications in Argo CD and they'll leave their sync policy on manual so that they can manually push the button, and that feels great because then they feel like they have the control. I really like to move towards full automation. So it's, hey, once I've committed it to Git, the gates are all on Git. If I'm merging the pull request, I'm doing it. And the biggest thing to make that happen is moving to a two-repo structure. You have a repo for your application code: your Go code, your Java code, whatever you're rocking and rolling with. You're releasing new versions, you're committing. And then you have your GitOps repo, which defines what version should be deployed. So when you're ready to deploy, you just make a version change in this repo over here. And that makes it really easy to control and delineate between changes. And if you have several dependent applications or things like that, you can kind of version them together; you can release together if you want to. And that takes a lot of the pressure off, because I think some people, when they first come in, they say, well, OK, I'm going to have a deployment folder in this repo, but then I'm making changes, and I feel like I don't have control, and maybe somebody bumps this thing on accident, and I don't have the permission structure I want. So going with the two-repo structure, I highly recommend that. I don't know, what would you say? I mean, Hong, Bala, you're also Argo CD users. What would you say? From my perspective, yeah, it's basically a very good question. I mean, we are in the CNCF landscape, so the infrastructure-as-code concept is getting prevalent. Terraform is a good example. I think it's still a very solid tool, but there are other tools also on the horizon.
Like I think Crossplane is actually a pretty good tool; we use that in our company also. The whole concept, I feel on the philosophy side, is that everything ties back to Kubernetes having this declarative API. So you can leverage that declarative API for whatever use case you want: you want to manage the infrastructure, manage the application. In the end, it's a very strong contract. Like you said, the expectation is: I give you this YAML, and some controller or Kubernetes delivers the result to me. As simple as that. The implementation is not simple, but the contract is very powerful in making that promise. So if you are still evaluating Kubernetes, or thinking there are a lot of other things that feel complicated, I would say go back to the bare minimum and understand what Kubernetes means to your company. And you have to rethink: do you want declarative? If the answer is yes, I think that will make a lot of decisions much easier. Argo CD, Crossplane, and even other tools are, I would say, great solutions built on top of that promise that make the promise much easier to achieve. But for a lot of people, just go back to the fundamentals of what Kubernetes means to you, and start your journey there. Gotcha. And Intuit is a heavy user of Argo CD. All the Intuit services are using Argo CD to deploy to all the different environments. And it's exactly the same pattern Dan described that Intuit is also using: there is a code repo and a GitOps repo. And the GitOps repo has different folders for different environments, so the user can control what version needs to go to the E2E environment, the quality environment, the stage environment, and the production environment.
So each environment has its own YAML files to deploy into it. That lets the user validate every release in the different environments and gain the confidence to promote it into production. I like that. And I've even had folks point out that you don't have to do everything all at once; it doesn't always seem obvious, but you don't. Maybe some other things in life happen that way, but you don't have to do that with your infrastructure. You can develop that confidence and take those learning lessons as you go, whether it's just a simple app or a service, something like that, just to build up that confidence and check yourself to make sure everything is working out the way you expect it to. So, cool. So one suggestion coming from the community: someone shared that they're, and I don't want to call them out, but they're currently using a branch for production, a different branch for staging, and a different branch for testing. While I think Argo as a project is ambivalent about that, and I don't think we have an opinion as a project, I would say personally I really try to avoid that model. I really like to have everything in my main branch, my production branch, and then I have a folder for production, a folder for staging, and a folder for test environments. One of the big issues is that we typically have some kinds of differences between staging and production. And to model those differences, if you're using multiple branches, you then have to have a policy around, like, don't merge this but do merge that. And that makes it really confusing and it gets very messy.
When you're using folders, you have a folder for production and a folder for staging. And I use Kustomize pretty heavily, I mean you can do this with Helm, but I'll have, for example, a kustomization that's labeled staging and a kustomization that's labeled production. I know there's never going to be movement between those folders, it's very obvious what's happening, and then the changes that you're staging out and then moving to production, that progression happens very straightforwardly. And the other thing I was thinking about as we were talking is that most people, when they first start using Argo, install Argo and don't realize they can use Argo CD to manage itself. So you can add Argo CD as an app, and then you can actually stage out which version of Argo CD you're going to be using, and that way everything is fully GitOps, so you can bootstrap off of that, and that's a lovely pattern. Yeah. That's the app-of-apps setup. I think that's been one of the great developments in the Argo community. I really liked the app-of-apps pattern and all of those architectures, and really the thought patterns behind how those got set up. Seeing those evolve has been really validating and helpful in figuring out and unlocking the best ways to use this workflow. So, much appreciated. Yeah. And most of the time, people will ask: what is the best use case for Argo Workflows? A good use case for Argo Workflows is replacing AWS Step Functions or Kubernetes Jobs, with everything orchestrated on the Kubernetes side. Argo Workflows provides very good orchestration for your jobs: retry mechanisms, failure notifications, all of these things come built in with Argo Workflows. Thank you for the insight. I saw a couple of comments and things like that.
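The folder-per-environment approach described above can be sketched with Kustomize overlays. The repo layout, image name, and tag below are illustrative assumptions, not taken from the discussion:

```yaml
# Hypothetical GitOps repo layout:
#   base/             shared manifests plus a base kustomization.yaml
#   envs/staging/     overlay labeled staging
#   envs/production/  overlay labeled production
#
# envs/production/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base          # everything shared lives in base/
commonLabels:
  env: production       # makes it obvious which overlay rendered a resource
images:
  - name: example-org/my-service
    newTag: "1.4.2"     # promoting a release = bumping this tag in Git
patches:
  - path: replica-count.yaml   # production-only differences live in the overlay
```

Because the staging and production differences live in separate overlay folders on the same branch, promotion is a visible diff in one file rather than a branch merge with a do-not-merge policy.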
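The job-orchestration features mentioned for Argo Workflows, such as retries, can be sketched in a minimal Workflow manifest; the name, image, and retry settings here are just illustrative:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: nightly-batch-   # Argo appends a random suffix per run
spec:
  entrypoint: main
  templates:
    - name: main
      retryStrategy:
        limit: "3"               # retry a failed step up to 3 times
        retryPolicy: OnFailure   # only retry on failure, not on error
        backoff:
          duration: "30s"        # wait 30s, then 60s, then 120s
          factor: "2"
      container:
        image: alpine:3.18
        command: [sh, -c]
        args: ["echo processing batch"]
```

The retry and backoff behavior comes from the workflow controller itself, which is the kind of thing you would otherwise hand-build around a plain Kubernetes Job or buy into with AWS Step Functions.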
I don't see any new questions, but if there aren't any, then I'm more than happy to get things wrapped up. Just to reiterate, where are some of the best places that folks can find you if they have questions? I know you had mentioned, take a look at the Kubernetes Slack, take a look at the CNCF Slack. Are there any other places, GitHub, Twitter, et cetera, that would be good to contact you at? Yeah, I think all of us are very active in the CNCF Slack channels, so it should be easy to use our names to find us, especially since we are also the team answering a lot of the community questions. So feel free to just find us there; I think that's the best place. Now, according to popular conception, can't I just say Argo, Argo, Argo three times and then you appear? Okay, we'll put that to the next person. If you say it on Twitter, yeah, we'll probably appear. If you say Argo, Argo, Argo on Twitter, we'll probably appear. But I do have the experience of people in the Argo CD Slack channel adding my name three times, and okay, you've got my attention, so I have to show up to answer the question, but in the best of ways. And when I'm wearing my Argo t-shirt in the airport, sometimes somebody will tap me on the shoulder. But yeah, on Twitter, I'm @todaywasawesome. I know Julie and Hong, you're on Twitter. Bala, I couldn't find you on Twitter. Bala, are you on Twitter? Yeah, I'm on Twitter. I can give the handle. Let's get that handle out there. You've got a handle. Awesome. I'm looking forward to meeting with all the Argo users, so please join us at ArgoCon. And if for some reason you cannot join ArgoCon, I want to see you in Detroit at KubeCon; I'm looking forward to meeting you in person. I want to learn about your Argo story and hear your feedback, even if it's criticism. We welcome criticism about, hey, what don't you like?
We ultimately want to keep making the product better and better. So thank you, everyone. I'd like to give one last note: if you are new to Argo and would like to explore the Argo projects, please come to the ArgoCon day zero. We have a bunch of workshops which start from scratch, covering Kubernetes basics, Argo CD 101, Argo Workflows 101, and Argo Events 101. We'll walk through very basic use cases where you can use all of these tools. Yeah, thank you, guys. Awesome. Well, thank you so much. A few more things before we go today. Thank you, everyone, for joining the latest episode of Cloud Native Live. We really enjoyed the interaction and questions that we got from you. Like everyone said, please check out ArgoCon if you can, either in person or virtually; we would love to see you there, or just knowing that your packets are floating around in the ether would be good too. Thank you so much for joining us today. We hope to see you again soon, and have a good one, everybody. Thank you. Later. Thank you.