All right, hello everybody, and thanks for coming to this session. I know you had an amazing agenda of possible things to see in this time slot — this conference has so many great sessions — so I appreciate you choosing this one. This talk is about microservices, service mesh technology, and CI/CD, and how to bring them all together.

First off, who am I? My name is Brian Redmond and I work for Microsoft. Yes, my last name really is Redmond — it's been my name since birth; I didn't change it. I've been at Microsoft for about 17 years, so feel free to bring on all the traditional Microsoft jokes after the talk: the Clippy jokes, the blue-screen-of-death jokes. I'm used to them and quite comfortable with them. These days I'm part of the Azure Global Black Belt team, helping customers deploy cloud-native applications on Azure and use the Azure cloud platform. I've done all kinds of things in those 17 years, but I've been working with Azure since its inception and I'm a big fan of the platform. You can find me on Twitter when I'm not at work. I like to run — I do a lot of marathons — so you'll mostly see selfies from various marathon start lines.

To be honest, when I submitted the idea for this talk I did not think it would get accepted. I thought, I'll throw out this idea — I know these various technologies, maybe I can piece them together into something interesting — and it'll never get picked. I also submitted a safer topic, which did not get selected. When the acceptance email came back, much to my surprise and shock, it was a bit of an "oh shit, I actually have to do this" moment. Based on the turnout here, though, I think it is something people want to see. If nothing else, I'm good at writing CFPs and coming up with interesting taglines.

So what am I going to talk about? I think we all know what CI/CD is, so we won't spend time defining it, but we do want to be clear about the difference between CI and CD. For this talk we'll spend more time on continuous deployment and on automating the pieces of the deployment process, and we'll look at different ways to do that. We'll talk about concepts like blue-green deployments and canary testing — if you were in this room for the previous time slot, you heard a lot about that. We'll also talk about pipelines as code: to me it's really important that the pipeline is declared in code so we can treat it with the same rigor as our application source. And of course we'll talk about service mesh. There's a lot of excitement around service mesh, and a lot of different service mesh technologies you can use, particularly with Kubernetes.

When I think of CI/CD, I like to think of it as an assembly line: a set of robots manufacturing something for us — robots before the robot apocalypse, before Skynet, before they actually take over. They automate the work, they do it consistently, and they do it very well.
Our CI/CD pipeline is the same kind of concept: we write the pipeline once, and it repeats the process the same way every time as we feed in new versions of our application. The problem is that a new version can have a defect in it, and what we don't want is for that automation to faithfully deliver bad code into the environment. We do a lot of testing — integration testing, load testing — and we hope our test environment really is like production, but in the end we're sometimes just crossing our fingers. What we really don't want is to learn about bugs and issues from the customer, or even worse, to hear about them on Twitter along with all the ugliness that comes with that. We need better ways to solve that problem.

Canary testing is one technique that helps. It's pretty common; I think most people here either already do it or at least understand it. I take my application, put some kind of proxy in front of it, deploy the new version alongside the existing production version, and route some of the traffic to it. The name comes from the canary in a coal mine: I'm afraid to go into the mine, so I send a poor bird in first and hope it's safe. The canary tells me whether the new version actually works with real production traffic, and if something goes wrong it's easy to switch back — the impact is much smaller.

But what about a microservices application? Now I have lots of services, all talking to each other, and it's not so simple to apply the techniques I'd use at the front end across a whole mesh of services. The problem gets harder: I need a way to do canary testing anywhere in the environment, for any service in my application.

So what's missing? There are a lot of CI/CD implementations out there, but in general we need a few things to make this better. First, advanced routing: something that can route traffic to particular versions of particular services by rule, so we can do canary testing at any layer of the application. Second, observability: I need detailed metrics so I know what's happening. When traffic hits the canary, is it taking longer? Is it worse or better than average? How is the application performing overall, how is that particular API performing, and, broken down by source and destination, what is the actual impact? Third, I may also need chaos testing. We all know that with applications, everything that can go wrong eventually will.
In a testing model, it would be nice to see what happens when things do go wrong — to throw problems at the application and watch how it behaves, and how the end user would actually experience that chaos.

I won't spend much time on dev and test. Certainly we'll have some kind of development environment where we do unit and integration testing: we need to make sure the code works and does what we expect. We may also have a test environment where we deploy that code alongside all the other microservices, and that's a great place to inject chaos. There are plenty of approaches to chaos testing that you'll hear about from other talks at this conference. But that's all outside of production.

In the production model, we may want to take a code update as a pull request and deploy it as a canary build: deploy that individual microservice alongside the production one and modify the routing so some of the traffic reaches the new release. We can then score it. I would argue we could collect enough data to build an index and say that above a certain score we just automatically merge the pull request — automate as much as possible. Below that score, maybe a human workflow looks at the individual statistics and decides whether it's okay, and below some lower threshold we do nothing at all and remove the canary. If it passes those tests, we update production accordingly. Generically, we want some kind of pipeline like that.

To me, this is where Istio comes in. Istio is a service mesh technology — as the project describes it, an open platform to connect, manage, and secure microservices. There are a lot of service mesh technologies out there; I didn't pick this one because I thought it was the best. Frankly, I like to learn about new technology, so I picked it because it was new, interesting, and being well adopted in the Kubernetes community. Istio helps me with things like service discovery and routing. It provides a sidecar model to control where traffic goes, and it brings other useful capabilities like health checking, policy enforcement, and security.

How does it do that? Architecturally — and this picture is right out of the Istio docs — I have service A and service B. Service A's pod has the Envoy proxy as a sidecar, and all traffic from service A to anything outside that pod has to go through the proxy. Route rules, any delays I want to inject, and any telemetry I want to gather are all handled by that sidecar. The sidecar is the Envoy proxy that came out of Lyft and is now one of the CNCF projects. At the control plane layer, Istio components manage how those proxies behave, and in the Kubernetes case we can drive this with custom resources.
We can create a resource called a route rule and tell the sidecars how to behave and what to do. So when I think about the things I wanted earlier — gathering statistics, controlling routing, and so on — Istio is a great model for doing that.

Now, how does that Envoy proxy get into the pod? There are a few ways. I can manually inject it using a command-line tool from the Istio platform, or I can have it injected automatically by namespace. That takes advantage of initializers, which are part of the latest Kubernetes releases, so the sidecar gets injected into the pod automatically and I don't have to change my deployment process to make sure it's there. There are other ways to set this up too, cluster-wide or per namespace, however you want to do it.

So what can I do with Istio, to summarize? I needed advanced routing: I get that with route rules and traffic shaping, and I can apply whatever routing I need to do canary testing at the microservice level. Observability is simple to get: right out of the box, the docs show how easily Envoy emits detailed metrics to something like Prometheus, so I can pull up a dashboard and see what's happening with the traffic — how long the new version is taking, or any other metric around it. And I can do chaos testing: inject a delay, or even a complete HTTP fault, and see how the application behaves. Does it handle it? Do we have the proper retry logic for that scenario? This isn't everything Istio does — you can do mutual authentication in an automated fashion, implement circuit breakers, and plenty more — but these are the capabilities I'm focused on here because they help with this CI/CD process.

What about a CI/CD tool? I would argue you shouldn't need a new CI/CD tool to do this; hopefully the tool you already use can automate the same processes I'm going to show you, and that would certainly be the goal. If your tool isn't sufficient for what you need, it could be a good time to take a look at alternatives, but ideally what I'm describing today can be done with what you're already using.

For my own purposes in building this demo, I chose Brigade. Honestly, when I submitted the talk — or even when I started building my demos — Brigade didn't exist yet. Brigade is an open source project created by the team formerly known as Deis, which is now part of Microsoft, and they released it a couple of months ago at the Open Source Summit in Prague. Brigade is an event-driven scripting engine that runs in Kubernetes, and we'll get a good look at how it works. Essentially, the various steps of my workflow are encapsulated as functions that run inside containers in my Kubernetes cluster, so the jobs that make up my CI/CD process run as containers, taking advantage of the cluster itself.
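To make that concrete, here's a minimal sketch of the kind of script Brigade runs — not the demo pipeline, just its shape. It assumes Brigade's JavaScript API from around that release (the brigadier module with events and Job); the image name and tasks are illustrative.

```javascript
// brigade.js -- a minimal sketch, not the actual MicroSmack pipeline.
// Each Job becomes a container that Brigade schedules as a pod in the cluster.
const { events, Job } = require("brigadier");

// React to a GitHub push webhook forwarded by Brigade's GitHub gateway.
events.on("push", (e, project) => {
  // A workflow step is just a container image plus a list of shell tasks.
  // Brigade makes the cloned repo available to the job at /src.
  const test = new Job("unit-tests", "golang:1.9");
  test.tasks = [
    "cd /src",
    "go test ./..."
  ];

  test.run();
});
```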
So if I have a particular tool, such as Helm, I can just put it in a container and tell Brigade to execute it for me. Jobs can run in parallel, in serial, or some combination of both, and they're triggered by events — various webhooks. Out of the box you get GitHub and Docker registry gateways, but the trigger could be anything you want; you can kick off one of these workflows from anything.

The pipeline itself is described as JavaScript, and it's very simple — I honestly hadn't written JavaScript before, and it was easy to build one of these pipelines. What's really important to me is that this code is stored in the GitHub repo for the application itself, so any change to our automation process is tracked in source control, just like the application code. The other nice thing is how it handles the secrets a pipeline needs — credentials for the container registry, a token for talking to GitHub, and so on. Brigade has the concept of a project: it stores those configuration items as secrets in the cluster and takes care of pulling them out and using them in the pipeline.

So Brigade is well suited to CI/CD pipelines, but it isn't only a CI/CD tool. If you're thinking you could use it for something different, great — I think that would be a fine use of Brigade. It's in its early stages; I want to say we're at something like a 0.4 release, though I could be misremembering. But we're already seeing a lot of interest in what it could be used for, and CI/CD pipelines are an easy place to start.

There's also another open source project from the same team, announced only a couple of hours ago, called Kashti. Kashti is a web dashboard that sits in front of Brigade and shows what's actually happening in the CI/CD pipeline. It's a clean web-based UI that pulls together information that would otherwise be scattered throughout the Kubernetes cluster: a particular pod ran a job for me — where are its logs, what triggered it, how do the pieces fit together? Kashti shows what event triggered a build, how long it ran, and what the individual steps were, in a waterfall-style view. We'll get a good look at it in the demo in a moment, but it gives me a way to see the results of my CI/CD pipeline along with the configuration and all the details around it. Kashti was announced just earlier today; I encourage you to go try both of these out and give the team your feedback.

All right, time for a demo. I don't know exactly where I am on timing, but I think I'm right on schedule. My demo is called MicroSmack. I'm not going to explain the backstory to MicroSmack — catch me afterward and I'll tell you where the name comes from. I like to do live demos, but I'll tell you up front the application itself is unimportant; what matters is the automation happening around it. So I have a really rudimentary web UI — I'm not a UI guy.
The important thing is that you see the front end calling the API, and then we affect that behavior using Istio and drive all of it with Brigade. Before I jump over to the demo, let me walk you through what it will look like.

We have a web UI — a simple Go application — and it makes 25 calls back to APIs running in separate pods. Those APIs return a color plus some build details, and the color drives how the HTML table is painted. We call the API 25 times not because that's efficient (it obviously isn't), but because we want to generate some traffic and see whether a canary deployment is actually getting a share of it — we want to see the table paint differently. By default, all of the traffic hits the main version and we get a blue table, which I'll show you in a moment. Each of those pods has the Envoy sidecar running in it — that's where Istio comes in, controlling how the traffic is shaped in the cluster. We'll also look at Slack: as developers, we'll get the build details in our Slack channel. And we have Brigade, with our code in GitHub.

When we make an update to the code in GitHub, a webhook fires and calls Brigade, Brigade starts the steps of my pipeline, and that spins up pods in the Kubernetes cluster to do the various steps of the workflow: build the Go application itself, build a Docker container and push it to the Azure Container Registry, use Helm to update the deployments for the API as well as the Istio components that control the traffic, and finally update Slack. When we open that PR, the pipeline deploys the PR build alongside the existing release and routes some of the traffic over to it.

Now, you might be saying: 90% of the traffic? That doesn't sound like a canary build — it sounds like the reverse of one. You have to use your imagination a bit; we're up here doing a demo on stage and we want to see some interesting activity. It's like watching an action movie and wondering how Bruce Willis doesn't get shot running through a wall of machine-gun fire. We route 90% just so we can see more activity, but for a real canary it would be 10%, or obviously some small percentage. We'll be able to see what's happening in our Grafana dashboard, which is backed by Prometheus. If we like what we see, we merge the PR, and that kicks off another workflow — slightly different, but it's the same Brigade pipeline responding to a different event — which removes the canary version, routes all the traffic back, and updates the production release with the new one. Again, we'll see everything in the dashboard.

Hopefully that makes sense and sets the stage for the demo I'll show you right now.
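Before we jump in, here's a rough sketch of how that pull-request flow could be expressed in brigade.js. It's a sketch under assumptions, not the demo's actual script: the image names, chart path, value keys (canary.weight and so on), and the Slack notifier are illustrative stand-ins.

```javascript
// brigade.js -- sketch of the pull-request (canary) workflow described above.
const { events, Job, Group } = require("brigadier");

events.on("pull_request", (e, project) => {
  const registry = project.secrets.acrServer;          // hypothetical secret key
  const tag = e.commit ? e.commit.substring(0, 7) : "canary";

  // 1. Build the Go API binary.
  const build = new Job("go-build", "golang:1.9");
  build.tasks = ["cd /src", "go build -o smackapi ./smackapi"];

  // 2. Build the container image and push it to the Azure Container Registry.
  //    (Building images in-cluster needs a privileged/DinD setup; omitted here.)
  const docker = new Job("docker-push", "docker:dind");
  docker.tasks = [
    "cd /src",
    `docker build -t ${registry}/smackapi:${tag} ./smackapi`,
    `docker push ${registry}/smackapi:${tag}`
  ];

  // 3. Use Helm to roll out the canary deployment and rewrite the Istio
  //    route rule: 90/10 here only so the demo shows visible activity.
  const helm = new Job("helm-canary", "lachlanevenson/k8s-helm:v2.7.2");
  helm.tasks = [
    "cd /src",
    `helm upgrade --install smackapi ./charts/smackapi ` +
      `--set canary.tag=${tag} --set canary.weight=90 --set stable.weight=10`
  ];

  // 4. Post the build details to the team Slack channel.
  const slack = new Job("slack-notify", "technosophos/slack-notify:latest");
  slack.env = {
    SLACK_WEBHOOK: project.secrets.slackWebhook,       // hypothetical secret key
    SLACK_MESSAGE: `MicroSmack canary ${tag} deployed`
  };

  // Run the steps one after another; if any job fails, the build fails.
  Group.runEach([build, docker, helm, slack]);
});
```

The key point is that the traffic split is just another parameter to the Helm job, so the same script can dial a canary up or down, or retire it entirely, depending on the event that triggered it.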
So, let's see how it goes. This is the application — you can hold your applause for my UI design skills. The important thing to note is that this table is painted by 25 separate calls to the API on the back end, and the color you see — officially "steel blue" — comes from the API. Each call may have hit any of the pods that are running, but it's not important which one: they're all the same version. We also get some detail back about the build ID and how it was built by our CI/CD system. That's what the home page looks like. In the background I'm curling that page in a loop, so we're generating some traffic.

If we look over in Grafana, we can see a dashboard of the activity. Grafana is fronting Prometheus here, and the data got into Prometheus because Istio's sidecar is reporting it — I'll show you where that sidecar is running. On the top we have the activity for the API and on the bottom the activity for the web application, and the red line shows that essentially all of the traffic is going to the production release. The graph on the left is requests per second and the graph on the right is how long those requests are taking, so it gives us a flavor for what's really happening.

Now, looking at the cluster: first, you'll see some Brigade pods running in the background — a gateway, a controller, and an API — plus the new Kashti web UI. We'll look at how those run in a moment. You can also see the pods in my microsmack namespace; they run in a separate namespace so we can automatically inject the Istio — that is, the Envoy — sidecar. Each of those pods has two containers running in it: one is the application itself, and the other is the Istio sidecar taking care of the routing we need.

From a Brigade standpoint, I mentioned that we have Brigade projects. A project is itself described as a Helm chart: we installed Brigade using Helm, and then we install a Brigade project for each application that we want Brigade to run a pipeline for. A project really ends up being a set of secrets and configuration details. Later we'll take a look at my GitHub repo — that's where the code lives, but the project file with those secrets is very intentionally not in that repo. We can list the Brigade projects using the brig client, and you can see I have a few — a couple of them aren't real, but there's this KubeCon project, which was created from a YAML file and contains my ACR credentials, my credentials for GitHub, and some shared secrets that define how to access everything.

So now let's go make a change. I'll switch to the dev branch and make sure I have the latest, open up VS Code — we'll look at some interesting parts of this application in a moment — and change the color from steel blue to red. I'll get that kicked off, and then we'll look at what's actually happening.
All right, so we're going to open a PR based on that update, and then I'll explain what's actually happening. Because I opened that PR, you can see some activity at the bottom here in the cluster. That GitHub update fired a webhook that called into Brigade, and we have this worker running. You can also see some other pods starting up, prefixed with "job runner" — those are the pods actually running my pipeline, and I'll show you how they're defined. Quickly, if I look at the logs on this Brigade worker, you can see what's happening: it created a Docker pod for me, it's now running that pod, and I could go in and look at the logs of that process as it runs.

Let's go back over to the code and look at what the brigade.js actually contains. You can see I handle several events — there's a push event handler and a pull request handler, and it's the pull request one running right now. I set up some basic config variables; this is where I pull my Azure Container Registry credentials into the code. It's very easy to pull those out of the project object in Brigade, so I don't have to manage them myself or fish them out of some Kubernetes secret. Then I set up my Brigade jobs: I define these four containers by creating Brigade jobs, and I'm using JavaScript functions to configure how those jobs act. That way I can reuse them — the pull request and the merge do some of the same things — and I can pass them parameters. Then I create a pipeline group and call runEach, which ensures the jobs run in serial, one after the other; if anything goes wrong, the whole run fails and is considered a failed build.

At this point it should be done with that pull request — nothing is happening over in Brigade anymore — so let's go see what the application looks like now that it's red. You may notice it took a few seconds to load when I refreshed. Notice that we're now seeing the PR build, and roughly 90% of the traffic going to it; if I keep refreshing, I should still see some of the traffic going to the other version. So we've deployed this canary build — but it doesn't seem to be working very well. If we go over to Grafana, we can see what's happening. In the API chart at the top left, we can see the traffic shaping occurred — again, we used Helm to change those Istio rules — and some of the traffic is now hitting the new release, which is the yellow line on the graph. But the performance has gone to hell: really slow, a massive increase in response time. So maybe red wasn't the right color choice and we need to make a change. We might have an automated process for this decision; we're doing it by hand in demo fashion so we can see everything, but either way, we know we have a problem here. So let's go back over to the code, to our API handler.
Now we're going to change to my favorite HTML color. A guy on our team who was also at Deis, Matt Butcher, told me about a real HTML color called "chucknorris" — it genuinely is a color value that HTML will accept. So I figure if we're having a problem, Chuck Norris can save the day, right? We'll update the PR to Chuck Norris and see if that does any better. Over in the cluster, Brigade kicks off again: the workers have started, they're going to update the PR build, and we'll see if it does any better.

In the meantime, while that runs, let me show you what Kashti looks like. At the beginning you saw everything we were doing with containers and all the detail at the command line, and that would be a hard way to troubleshoot builds and see what's actually happening. That's where Kashti comes in. Kashti is the web UI that shows all of my Brigade projects — you saw I had three projects earlier, and that corresponds exactly to what I see here — and you can see the KubeCon one is the only one with activity in it. You can see the various events that have occurred, each with its own build ID, how long they ran, and when they ran. You can even see a fun bug: the one currently running apparently started 2,017 years ago. The cause is the way different languages handle date zero, if you want the background on that. This was just released today, so it's fun to see the bug in action.

If we want to see a build that happened earlier, there's the build history: you can see when it started, when it finished — in this case it passed — that it came from a pull request, and the commit ID. For individual results, it's easy to see the various steps; these are the same jobs defined in the brigade.js we looked at earlier. For the Helm job runner, for example, the actual logs are all right there on the screen, so it's very easy to see what's happening in that Brigade process on the back end.

Back on the main screen, the Chuck Norris pull request is done — that's the one that finished a few seconds ago — so we can jump back over and see how it went. When I refresh, it certainly loads faster, so it's more than likely working correctly, but we're still in this canary state, if you will. Over on the dashboard, we can see traffic still going to two different releases of the API, but performance is back to normal. So Chuck Norris did, in this case, save the day — a much better scenario for our application. This is the point where we should go ahead and merge the pull request, so we will, with the message "Chuck Norris saved the day." If we jump back over, we see activity in Brigade again — but this time it's a slightly different event driving the pipeline: the push event.
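As a rough sketch, that push handler might look something like the following — the branch check, payload handling, image names, and value keys are illustrative assumptions, not the exact code from the demo repo:

```javascript
// brigade.js (continued) -- sketch of the push/merge workflow.
const { events, Job, Group } = require("brigadier");

events.on("push", (e, project) => {
  // Only promote when the commit lands on master; pushes to other branches
  // (like the dev branch earlier) run the handler but do nothing.
  const payload = JSON.parse(e.payload);               // raw GitHub webhook body
  if (payload.ref !== "refs/heads/master") {
    console.log(`ignoring push to ${payload.ref}`);
    return;
  }

  // Send 100% of the traffic to the promoted release and retire the canary
  // by rewriting the same Istio route rule through Helm. (The real workflow
  // also rebuilds the image and updates the production deployment, reusing
  // the same job setup as the pull-request handler; only the promotion step
  // is sketched here.)
  const helm = new Job("helm-promote", "lachlanevenson/k8s-helm:v2.7.2");
  helm.tasks = [
    "cd /src",
    "helm upgrade --install smackapi ./charts/smackapi " +
      "--set stable.weight=100 --set canary.weight=0"
  ];

  const slack = new Job("slack-notify", "technosophos/slack-notify:latest");
  slack.env = {
    SLACK_WEBHOOK: project.secrets.slackWebhook,
    SLACK_MESSAGE: "Merged to master and promoted to production"
  };

  Group.runEach([helm, slack]);
});
```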
We configure the details pretty much the same as before, but each of the Brigade jobs is configured slightly differently because we're merging back to master. If you're paying attention at home, we're now routing 100% of the traffic to prod and 0% to the staging/dev slot that held the canary build. The pipeline looks very similar to the last one, except we only execute the work if we're on the master branch. In fact, if you were watching really closely, this handler also fired earlier when I pushed to the dev branch — it ran, it just didn't do anything.

The other thing I want you to see is the Istio side. If we look at the Istio components running in our cluster, you can see everything in the Istio namespace: the Istio control plane, the mixer, the pilot. There's also the initializer — that's the piece I added that knows how to automatically inject the sidecar, and I configured it to inject only into the microsmack namespace. So all of these Brigade pods that are starting up aren't using Istio; they don't need it for the work they do here. We also have the Grafana and Prometheus components running in this namespace — not required, but certainly useful in this case.

You should also see the route rule. Let me just make sure I get the route rule name right. I have a route rule that's been driving this traffic, and we've been modifying it using Helm. Again, this is a standard Kubernetes object — a custom resource that's been added to my cluster — and you can see the merge has already finished: the canary deployment has been removed, the build has been merged into master, and 100% of the traffic is now hitting the updated release. So odds are, when we go back and check the application, we'll no longer see the PR build; we'll see the master build. And on the Grafana dashboard, at the top left, the API traffic has all shifted back to the primary master release. We did all of that at the microservice level.

So from a demo standpoint, that's what I wanted you to see. We only had one API in this case, but you could expand this to many different APIs and control them in the same fashion. We take advantage of Istio to do all the route rules and traffic shaping, Brigade helps us run the CI/CD process, and Kashti gives us a clean UI to see what actually happened and go back through the history of those results.

Before I wrap up: just before I came up here, I published a blog post so you can go try this out yourself — and perhaps expand on it, do something more with it, maybe build a better UI or front end for the application. It's on my blog on Medium, and I'll probably tweet about it later. I did post it live, and I haven't run through it enough times to be sure it works exactly as written from the instructions.
I know I can get it to work, but did I write the right instructions for you to get it to work? Please try it out and let me know what you think, and like I said, I'll say more about it on Twitter later.

With that, I have time for questions — I believe I have about one minute. If you have a question, please go to the mic. He's up there fast.

Audience question: Where are the hooks in your Brigade pipeline for running system tests against that deployment, versus just a canary?

Yes — if I wanted to run a particular test against it, I could run that as another job in my pipeline. I have complete control over defining those jobs, so that might be a great place to do it. I mentioned earlier that I'd probably want something like a score: I might want a job that gathers, say, the last hour of data from Prometheus, builds an index from it, and makes decisions based on that. I can put all of that in the brigade.js and define it there, and since I control what those pods are, I can build my own Docker container to do whatever I need in that pipeline.

Other questions? All right, I'll take more questions later. I appreciate your attention and thanks for checking this out. Thanks a lot.