Hi everybody, welcome to GitOpsCon. I'm excited to share my session with you. Before we start, I just wanted to say: if you ever plan on using a green screen, don't get a green cast. All right, the topic I'm going to be covering today is keeping progressive delivery in concert across multiple microservices using Argo Rollouts. So the topic is really about how you do progressive delivery with multiple applications working together at the same time. Does this require a complex service mesh or anything like that? That's what I'll be covering. We're going to do it using Argo Rollouts in a declarative way, and I'll show you how the GitOps aspect comes in. You can do all of this declaratively, and we're going to get into it. To introduce myself, if you don't know who I am, my name is Dan Garfield. I'm the co-founder and chief open source officer of an amazing company called Codefresh, where we do software delivery. We're Argo maintainers; I'm on the Argo project myself and work as a maintainer primarily on Argo CD and a little bit on Argo Rollouts. You can follow me on Twitter at @todaywasawesome; I'd love to hear from you. I'm also part of the GitOps working group, helped co-found that group, and co-authored the GitOps principles. So this is a conference I love to see a lot of people at, because it's one that's near and dear to my heart. With the introductions out of the way, and hopefully my bona fides somewhat established, let's get into the topic. If you're not familiar with progressive delivery and Argo Rollouts, I'll explain them very briefly. Argo Rollouts is part of the Argo project, and it's basically a controller. That controller sits on a Kubernetes cluster and manages what's called a Rollout object. A Rollout object is essentially just a Kubernetes Deployment object with a few extra values that allow it to function for progressive delivery.
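To make that concrete, here's a minimal Rollout sketch. The field names come from the Argo Rollouts API, but the app name, image, and service names are illustrative, not from the talk's repo:

```yaml
# A Deployment-shaped spec plus a "strategy" block the rollouts
# controller understands. (Illustrative names throughout.)
apiVersion: argoproj.io/v1alpha1
kind: Rollout
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
        - name: frontend
          image: example/frontend:1.0
  strategy:
    blueGreen:
      activeService: frontend-active    # receives live traffic
      previewService: frontend-preview  # exposes the new version for testing
      autoPromotionEnabled: false       # wait for a manual or automated promote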
And the way the rollouts controller works is that the rollout contains the replica sets necessary for canary or blue-green deployments. Those serve traffic through a service and then an ingress. You can also run analysis steps, defined in an analysis template in Argo Rollouts, to check the health of the different replica sets you're deploying. This makes it so you can do progressive delivery using almost anything. You can use just an NGINX front end. You can use an Amazon ALB. You can use Istio if you want to get more advanced. But you can do progressive delivery today if you just have a deployment, using Argo Rollouts. It's dead simple to use, and it's become incredibly popular because of this. So let's step through a simple blue-green deployment, and I'm gonna show you how we can start orchestrating multiple of these together. It's going to be a similar story for canary. Everything we do today will focus on blue-green because it makes it easier to demo, but from the perspective of how all this works, it's the same if you're doing a canary release. A canary is when you deliver a little bit of traffic to your audience: basically you expose your new version to a small portion of your audience at a time and then increase it. In a blue-green, your audience is currently getting the prod version, right? That's the current deployed version. Then we deploy a new version. Both of them are running simultaneously, but crucially, our traffic is not being exposed to the new version yet. We can run tests on it at this point and make sure everything looks good; when we're ready, we can run those tests using an analysis template in Argo Rollouts. When we're ready, we switch all the traffic over to the new version. And the prod version we had previously is now available as a hot swap if we want.
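For the analysis piece, here's a hedged sketch of an AnalysisTemplate; the metric name, query, and Prometheus address are illustrative, not from the talk. A rollout can reference this from its blue-green `prePromotionAnalysis` or from canary steps:

```yaml
# Illustrative: gate promotion on a Prometheus success-rate query.
apiVersion: argoproj.io/v1alpha1
kind: AnalysisTemplate
metadata:
  name: success-rate
spec:
  metrics:
    - name: success-rate
      interval: 1m
      successCondition: result[0] >= 0.95
      provider:
        prometheus:
          address: http://prometheus.example.svc:9090
          query: |
            sum(rate(http_requests_total{job="frontend",code!~"5.."}[5m]))
            / sum(rate(http_requests_total{job="frontend"}[5m]))
```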
So we can actually switch over the traffic and then switch back if we need to. Or, more realistically, most of the time we'll just end up scaling it down. Okay, that's pretty simple, but many services aren't that simple. Let's look at a simple blue-green release between multiple services. On the left-hand side, you've got your backend and front end. So these guys over here, I can't point that far, over there on that side of the screen: your backend is on version one, your front end is version one. That's what your prod is running. And, oh, I've incorrectly labeled this "a portion of users receive canary" here. No worries, we know we're talking about a blue-green. Now I deploy a new front end that is version 2.0, and it turns out that new front end may actually require a new backend as well. That's very common, right? Many of you know that if you're building a new feature, very often you need to make changes to both a front-end service and a back-end service, and they need to be talking to each other at the same time. And if you were to send traffic from that new front end to the old backend, it may cause an explosion. This would be a problem. And if that happens, it's game over: you decide progressive delivery is too hard, you hang up your hat, you toss everything in the garbage, and you say, you know what, this cloud-native stuff is too hard. I'm gonna go into basket weaving, maybe become a carpenter, live that simpler life. Okay, let's hope that doesn't happen. So let's do something different. How do we fix it? Well, I'm gonna show you three scenarios. The first two are the modern scenarios, and the third is what we would call a legacy application: an application where things aren't versioned between services. Now, you're welcome to follow along with these examples. A colleague of mine, Kostis, many of you know him as codepipes on Twitter.
He's giving another talk at GitOpsCon; if you haven't seen it, definitely go check it out as well. It's very good. He put together these examples for us, so go ahead and go to that GitHub link, or you can scan this QR code to find it. All of the project files are there, so you can replicate this yourself. For our application, we're gonna be using one that has this very helpful front end that displays what version it's talking to. The front end is listed as 1.0, the backend is listed as 1.0. We know exactly what is being exposed to our users, so we can test things that way. Let's explore the modern application first. In this scenario, in step one, we've got our status quo: a version 1.0 front end, a version 1.0 backend, and the users are being exposed to version one across the board. In step two, we deploy version 2.0 of our backend. We run smoke tests and QA against that version, but the users are still only being exposed to the version one front end and backend. In step three, now that we've validated it, we promote that new backend, and the users are now experiencing the version one front end and the version two backend. Then in step four, we deploy a new version of the front end to expose the service, and we can run tests against that. And finally, in step five, we promote that as well, and now everybody's getting version two. So that's deploying the backend first, and in this scenario it's a modern application. What do we mean by modern application? We mean an application that is able to talk to different versions. So typically, you wanna version your API, right? You wanna have your API versioned so a front end can talk to a backend and it's not gonna blow up because it's missing some key. Ideally, if you're following twelve-factor apps or those kinds of practices, that's the situation you're in for a modern application.
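To illustrate what "able to talk to different versions" means in code, here's a hypothetical sketch in Python; the payload shapes and function names are made up for illustration. The point is that the back-end keeps serving the old response shape while adding the new one, so an old front-end keeps working after the back-end is rolled out first:

```python
# Hypothetical: a version-tolerant back-end response builder.
# The v1 shape stays supported alongside v2, so deploying the
# back-end first doesn't break the old front-end.

def build_response(api_version: str, user: dict) -> dict:
    """Return the payload shape each front-end version expects."""
    if api_version == "v1":
        # v1 front-ends expect a single flat "name" field
        return {"name": f"{user['first']} {user['last']}"}
    if api_version == "v2":
        # v2 front-ends expect structured name fields
        return {"first_name": user["first"], "last_name": user["last"]}
    raise ValueError(f"unsupported API version: {api_version}")

user = {"first": "Ada", "last": "Lovelace"}
print(build_response("v1", user))  # old front-end still works
print(build_response("v2", user))  # new front-end gets the new shape
```

A legacy service, by contrast, only ever serves one shape, which is exactly why the orchestrated scenario later in the talk is needed.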
For our second scenario, we're gonna look at a modern application where you do the front end first. This one's almost exactly the same, except in step two, instead of deploying the new backend first, we deploy the new front end first. Then we follow up by deploying our backend second, and finally everybody gets the same version in step five, version two on both sides. Now, this modern approach requires careful architecting and careful planning. But the problem is that there are a lot of legacy scenarios. Like I mentioned, oftentimes features require changing two services at the same time. All of you know that, all of you have seen that, and you may not be able to test them together unless they're both deployed. So our modern approach may not reveal the integration problems that could come up. Oftentimes, for legacy applications, your services don't have versioned APIs. This is very common for people just coming over to Kubernetes, doing a lift-and-shift of what they previously had. And maybe you have integration tests that are missing for some reason. The other key here is that we may need to update configuration, and not just binaries. I didn't mention this, but with Argo Rollouts, we're really looking at the rollout object, right? That object needs to change. So if you just update a config map, it's not going to deploy a new version of your application; you actually need to find a way to do that. These are all scenarios we're gonna solve today using this method. So the scenario we're gonna cover, and we'll do this one as a demo and step through it, is a legacy application where we wanna match versions. This one starts out the same: in step one, all traffic matches. Users are getting version one of the front end and version one of the back end.
In step two, we deploy our new back-end service, version two, and we run tests against it just like we did previously. Except now, in step three, instead of exposing that to users, we're actually gonna orchestrate these rollouts to happen together. To do that, we deploy a new version 2.0 of the front end, which lets us test all of it integrated together. Then in step four, we can actually switch over all the traffic and scale down the previously running versions, and in step five everything's deployed and all the old versions are gone. So this is the one I'm gonna focus on today; I'm gonna show you those techniques. To do this with a legacy application, we need a couple of things right off the bat. The first is a configurable URL that tells the front end where the back end is. Here you can see a simple bit of code on the left, I think it's Python, that grabs a back-end host as an environment variable and tells the application where those service requests are supposed to go. The second piece is that that URL should be set by a config map generator. Now, if you're not familiar with config map generators, this is a feature of Kustomize. I think you could accomplish something like this using Helm, but it's built into Kustomize, so that's what we're gonna focus on today. You can see on the right-hand side, I have a config map generator, and that config map generator is going to dynamically set keys for me. I'm gonna show you what I mean. So the configurable URL is straightforward; we talked about that. Now, config map generators are really interesting. The way these work, and I know this is a little bit small, but hopefully you can see it okay, is that this config map generator is gonna take all my values.
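The configurable-URL piece looks something like this; this is a sketch in the spirit of the snippet shown on the slide, with `BACKEND_HOST` as the environment variable name and illustrative defaults:

```python
# Sketch: the front-end reads the back-end location from an
# environment variable instead of hard-coding it, so a config map
# can repoint it at either the active or the preview service.
import os

BACKEND_HOST = os.environ.get("BACKEND_HOST", "backend-active")

def backend_url(path: str) -> str:
    """Build the URL a service request should go to."""
    return f"http://{BACKEND_HOST}/{path.lstrip('/')}"

print(backend_url("/api/version"))
```

Because the host is injected at deploy time, swapping which service the front-end talks to is purely a configuration change.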
And when I deploy this, you can see that I'm referencing the keys I've set up in my config map generator, and I'm referencing the config map name, my-settings. Now, when I actually render this, when I do a kustomize build (or a kubectl apply -k), it's actually going to generate a unique suffix for that config map, and it's going to set that as the name of the config map on my deployment, or rather, my rollout. This is key, because it's going to allow me to trigger a progressive delivery deployment, either a blue-green or a canary, even if I just update my config map, because it actually causes a new version of the config map to be deployed and associated with my rollout. So this allows me to do progressive delivery for config-only changes. Already, that's a little bonus in this talk, something extra you can do. Okay, now to load those into rollouts, it's gonna be the same thing, and we have a blog post you can follow here that Kostis put together; he did a great job on it. To do this with rollouts is the same thing: the rollout object, like I said, is just like the deployment object with a few extra values. Okay, so let's get into the demo, and I think this is gonna make sense as we go along. First, let's look at our application. You can see that I've got the service running here. This tab right here is showing my live version, the one all my users are seeing, and this is showing a preview service that's available if I need to preview changes. But I wanna orchestrate these together. So like I said, the first step is going to be to update my back-end configuration. I'm gonna step through this in the configuration files, but I'll explain how you'd do it from a GitOps perspective as well. So the first thing I'm gonna do is change my back-end version right here to version 2.0, and then we can actually deploy this.
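The generator-plus-rollout wiring can be sketched like this; the config map, service, and image names are illustrative, not from the talk's repo. After a `kustomize build`, `my-settings` becomes `my-settings-<hash>`, and Kustomize rewrites the reference in the pod template to match, which is what makes a config-only change roll out a new version:

```yaml
# kustomization.yaml (illustrative)
configMapGenerator:
  - name: my-settings
    literals:
      - BACKEND_HOST=backend-active
---
# rollout.yaml, pod template excerpt (illustrative)
containers:
  - name: frontend
    image: example/frontend:2.0
    envFrom:
      - configMapRef:
          name: my-settings   # rewritten to my-settings-<hash> at build time
```

Changing `BACKEND_HOST` generates a config map with a new hash, the rollout's pod template changes, and the controller starts a fresh blue-green or canary.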
So let's apply that. Okay, once we do that, it shows me that pretty much everything is unchanged except for the config map and, sorry, the rollout; that rollout has been updated. So now let's use Argo Rollouts to get not the front end, but the backend. Okay, you can see that I now have this preview service available, right here. And if I look at my currently active version and my preview version, you can see that neither of these has updated, right? Because I actually don't wanna expose this to my users yet. The next step, now that this preview version is available, is to actually deploy version two of my front end. For this one, I'm going to change the back-end host from pointing at active, which is my stable version, to pointing at preview, so it goes to this other service that I've exposed. Let's go ahead and apply that. Okay, now let's look at the rollout for my front end. You can see this has now deployed a new preview version that is available, okay? Now, if I look at my users' perspective, they're still getting 1.0, right? But my preview service is showing 2.0 and 2.0 together. So now these rollouts are happening together, and I'm ready to finally promote them. So let's go ahead and promote; we're gonna do it basically in reverse order. We're gonna promote the front end first, because the front end is pointing at the right version of my application, right? So now I'm gonna do Argo Rollouts promote, and we're gonna promote my front end first. At this point, the users are gonna be getting the new version of my application. So if I go back and look up here, you can see my users are now exposed to it, but the old version of my backend is still running. So I'm gonna go ahead and finish promoting that as well.
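The CLI side of that walkthrough can be sketched roughly like this, assuming the Argo Rollouts kubectl plugin is installed and using illustrative directory and rollout names; these commands need a live cluster, so treat them as a reference rather than a script:

```shell
# Deploy the new back-end (config map change triggers the rollout)
kubectl apply -k ./backend

# Inspect the back-end rollout: preview is up, active is unchanged
kubectl argo rollouts get rollout backend

# Deploy the new front-end, pointed at the back-end's preview service
kubectl apply -k ./frontend
kubectl argo rollouts get rollout frontend

# Promote in reverse order: front-end first, then the back-end
kubectl argo rollouts promote frontend
kubectl argo rollouts promote backend
```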
Okay, and now that that's done, both versions will be the same. And if we look at our rollout, you can see that it's getting ready to tear down and shut down those old versions of my service. Okay, so that's how you do it. That's how you orchestrate two services being deployed together. Essentially, it's all in the config map and configuring which back-end service it points to. It's worth mentioning that in this case there is one final step that would be required: you would typically reset this to point at the stable prod service once you're done, and then just deploy that and promote it. So there's that additional step. So how do I take this and make it work in a more GitOps-friendly fashion? I've just stepped through the technique manually, but we can actually automate this easily. I'm gonna show you an example where we do this using a CI/CD pipeline. Here I'm using Codefresh, and I've got some manual approval steps in here. But essentially, what I'm doing is I check out my repo and I can deploy the new backend, and this would be making a Git commit or doing it manually, but typically you'd make a Git commit here to step through the process of deploying this. So at the moment, within Argo Rollouts, there isn't a single spec standard where you can describe all these steps, throw it at the controller, and let it happen. There are a couple of projects being worked on to make a fully declarative GitOps spec so you don't have to use a pipeline to manipulate your Git repo. Codefresh has one, Codefresh environments, that we just announced at the last KubeCon, and I think there are some other projects trying to do similar things, but that's a new frontier that's still taking shape. So right now, today, for most users, it's gonna be doing something like this with a pipeline. Okay. Now there are some warnings and caveats that I wanna mention as we wrap up.
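The "pipeline manipulates the Git repo" step boils down to rewriting one literal in the kustomization and committing it. Here's a hypothetical sketch of that rewrite; the file contents, key, and service names are illustrative, and a real pipeline would follow this with a git commit and push:

```python
# Hypothetical pipeline helper: flip BACKEND_HOST in a
# configMapGenerator literal so the front-end targets a
# different back-end service.
import re

def point_frontend_at(kustomization: str, service: str) -> str:
    """Rewrite the BACKEND_HOST literal to target the given service."""
    return re.sub(r"(BACKEND_HOST=)\S+", rf"\g<1>{service}", kustomization)

original = """\
configMapGenerator:
  - name: my-settings
    literals:
      - BACKEND_HOST=backend-active
"""

# Point at the preview back-end for the orchestrated test phase,
# then back at the active service once both rollouts are promoted.
print(point_frontend_at(original, "backend-preview"))
```

A pipeline would run this (or an equivalent `sed`/yq step) before each phase, commit the result, and let Argo CD sync it, with manual approval gates in between.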
First of all, there's a caveat when using a config map generator with Argo CD and Argo Rollouts; this is a really great point pointed out by Alex Dunn. When a new config map is generated, the old one gets pruned. So what will happen is, if your rollout fails and the old replica set tries to scale back up, the old config map is no longer available, because it was deleted when we deployed the new one. To avoid this issue, you need to add a PruneLast=true sync-option annotation to your config maps in Argo CD. You can read about that in the blog post Kostis wrote and the discussion that happened there. So that's a little caveat you wanna make sure you handle, a little nuance that's important to have. And with that, we're gonna come to a close. I hope you were able to follow all that. I know it was a ton of content, and walking through it was a little complex, but hopefully you can see how you can actually do progressive delivery across multiple services at once. You don't need to wait to rearchitect your application; you can actually start doing this stuff today. Now, I'm gonna be hanging out in the chat, so feel free to ask questions, and I would love to connect with any of you afterwards. Of course, you can find me on Twitter at @todaywasawesome. And with that, I'm gonna sign off.
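One way to apply that annotation, sketched here with illustrative names, is through Kustomize's generator options, which stamp every generated config map in the kustomization; the annotation key is Argo CD's resource-level sync-options annotation:

```yaml
# Sketch: keep old generated config maps around until last during
# pruning, so a failed rollout's old replica set can still mount them.
generatorOptions:
  annotations:
    argocd.argoproj.io/sync-options: PruneLast=true
configMapGenerator:
  - name: my-settings
    literals:
      - BACKEND_HOST=backend-active
```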