Thank you for being here. The end of the day, you made it. We're just going to be here for the next hour... don't worry, it's only going to take five minutes of your time. Trust me. I'm going to talk to you about how we are improving our release reliability with Argo Rollouts at Adobe.

I work on Adobe Experience Manager Cloud Service. It's a content management system that you probably don't know about, but it's used by a lot of companies, more than 1,400. It's an existing Java application that has been around for several years, using a lot of open source, and a very interesting point is that customers can write their own code and we run it for them in the cloud, on Kubernetes.

Some statistics. We run on Azure. We have more than 35 clusters, probably closer to 40 by now, across multiple regions, because people want to run this close to their customers, to the people accessing it. Customers can have multiple environments, and they can create new ones whenever they want; we translate these into Kubernetes namespaces. I like to call these environments micro-monoliths: the way we scale is by giving customers their own instances, their own deployments, and that way we scale up. It's not that we have one service with thousands of pods; we have thousands of services with a few pods each. We have more than 17,000 environments. This translates into over 100,000 Deployment objects across all our clusters and more than 6,000 namespaces.

We were already doing progressive rollouts at the environment level, but we were not doing them at the deployment level. So the challenge we're trying to solve with Argo Rollouts is how to avoid issues in production when we deploy Adobe code, or a customer deploys their own code, across 17,000 unique services. The previous approach was full end-to-end testing, which is very expensive, does not cover all the cases, and does not scale very well.
And when things fail, it requires a lot of analysis: is it a problem in our release, is it a problem in the customer's code, or is it just something temporary that is happening? It's very time-consuming, it can delay releases, and it can impact 100% of the customer traffic if something goes wrong.

So the solution, spoiler, is Argo Rollouts. We are working on canary deployments with automatic rollback. This is based on real-world traffic, that's the advantage, and real-world error metrics, using metrics that we already have in Prometheus.

What are the advantages of this? Automatic rollback when we see high error rates. We get non-blocking rollouts across environments, and we can investigate asynchronously if a customer is broken: their environment is automatically rolled back, and the release can continue across the other customers if only a few of them are affected, because the blast radius is very small and only a percentage of the traffic is affected. In summary, this means more frequent releases for us that are validated with real traffic, which gives us more velocity, and that is always a good thing.

Some things that are not so great about Argo Rollouts. The migration from Deployments to Rollouts requires orchestration to avoid downtime, even when you are using workloadRef, because after Argo Rollouts scales up your Rollout, you still have to scale down your Deployment. This is a problem, as you can imagine, when you have thousands of services.

And there are some things you have to work on. You have to have really good metrics: make sure those metrics cover both the canary and stable labels on the pods, so that you can differentiate between the two. And what happens when you have environments with very low traffic? When you don't have any traffic, it's really hard to tell whether something is breaking because of the rollout you just made or because of some other issue.
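To make this concrete, here is a minimal sketch of what such a canary Rollout with Prometheus-based automatic rollback can look like. All the names here (`my-service`, the metric name, the Prometheus address) are hypothetical and not from the talk; the fields themselves (`workloadRef`, `canaryMetadata`/`stableMetadata`, `AnalysisTemplate` with a Prometheus provider) are standard Argo Rollouts API:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Rollout
metadata:
  name: my-service            # hypothetical service name
spec:
  replicas: 3
  workloadRef:                # reuse the pod template of the existing Deployment
    apiVersion: apps/v1
    kind: Deployment
    name: my-service
  strategy:
    canary:
      canaryMetadata:         # distinct labels so metrics can tell the
        labels:               # canary apart from the stable pods
          role: canary
      stableMetadata:
        labels:
          role: stable
      steps:
      - setWeight: 10         # send 10% of traffic to the canary
      - analysis:
          templates:
          - templateName: error-rate
      - setWeight: 50
      - pause: {duration: 5m}
---
apiVersion: argoproj.io/v1alpha1
kind: AnalysisTemplate
metadata:
  name: error-rate
spec:
  metrics:
  - name: error-rate
    interval: 1m
    failureLimit: 1           # abort and roll back on the first failed measurement
    successCondition: result[0] < 0.05
    provider:
      prometheus:
        address: http://prometheus.monitoring.svc:9090   # hypothetical address
        query: |
          sum(rate(http_requests_total{service="my-service",role="canary",code=~"5.."}[5m]))
          /
          sum(rate(http_requests_total{service="my-service",role="canary"}[5m]))
```

Note that `workloadRef` only borrows the Deployment's pod template; as mentioned above, scaling the old Deployment down once the Rollout takes over is something you have to orchestrate yourself (newer Argo Rollouts versions add a `scaleDown` option under `workloadRef` to help with this).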
And the percentages can be misleading: if you have 100% errors out of only 10 requests, maybe it's not important. Another big issue is that adopting Rollouts requires changing the runbooks, the tooling, and the training to use Rollout objects instead of Deployments, so it takes some time to get people there.

So to sum up: progressive delivery is a great idea, and Argo Rollouts is a great implementation, which is what we chose to do this. You just need to be aware of a few things that you need to iron out and be prepared for. So thank you, and enjoy the rest of the conference.

Guys, we decided with Dan that we will not make you go to the other room to listen to how we say goodbye. So basically, thank you for the whole day. It's been a pleasure. It's been very intense, so get some rest. If you want to talk Argo tomorrow, there will be an Argo kiosk down there at KubeCon, so make sure you check that out. Go to all the vendors, grab some swag. And also maybe of interest: the next ArgoCon will be in Mountain View, California, probably in October, so start preparing those CFPs. And thank you. Give yourselves a big applause, because you've made it through the day.