So my name is Greg Lyons. I am a software engineer at Box, where I work on our internal platform for deploying services with Kubernetes. And today I'm going to talk about our continuous delivery model on that platform. So for those of you who aren't familiar with Box, we are a cloud content management platform, which means that we make it easy for companies and organizations to store all of their content securely in the cloud, access it from anywhere, and share and collaborate so that our customers can work together and do their work as effectively as possible. To take a quick look at what Kubernetes looks like at Box: we're currently running four clusters, three of which are in production. Those clusters are running on bare metal in our own data centers, provisioned with Puppet, but we're in the process of rolling out AWS and Azure production clusters as well. To give a sense of some of the tooling that we're using to support our clusters, we're using Tigera's Project Calico for networking, we're using SmartStack by Airbnb for service discovery, and we're using HashiCorp's Vault for secrets management. At this point, we have about 80 services running in production, which really represents a pretty significant chunk of the core mission-critical code that powers Box. So at this point, we are pretty heavily reliant on Kubernetes and the services running on it to power our product for the over 80,000 organizations around the world that depend on us. Given that context, when we need to make changes to our services running in Kubernetes, we want to do so in a way that's safe. We're very much in the business of not breaking things for our customers, and I imagine many of you are in that same business as well. So that's what I'm here to talk about today: the problem of how we can make changes to our services running on Kubernetes in a way that's safe and easy.
So when I say changes to a service running on Kubernetes, I could be talking about a bunch of different things. I might be talking about a change to source code: if you have an app written in, say, Go or Java, you're adding a new feature. This might be a change to some sort of third-party package that you're pulling in; you might upgrade the version of some RPM that you're pulling down from the internet. This might be some config parameter, like an environment variable or a command-line flag. And I'm going to stop there, but really, the list could keep going on; it's not even close to an exhaustive list. The point is there are many different types of changes we might need to make to our service at any point. And we really do care about all of them, because the reality is that any of these changes, if not rolled out properly, could cause a lot of damage. Historically at Box, we've had some poor change control processes within certain systems that have led to some really serious problems. Someone makes a change that they think is pretty innocuous, and it ends up either not doing exactly what they intended it to do, or affecting a lot more of the system than they expected at the time. Suddenly, the whole site is down. That may sound familiar to some of you. You're not alone, but there is hope, and there are some things we can do better to avoid that. So with the services that we're running now on Kubernetes, we're really trying to do things the right way. And when I talk about the right way, what do I mean? There are a couple of key principles I want to highlight first that are guiding our whole change control process. The first one is that we want our change rollouts to be incremental. That means we roll out our changes a little bit at a time, so that if something is going to break, we only break it for a small subset of our system instead of crippling our whole system at once. So what does that look like for us in practice?
Well, as most of you are probably familiar with, we have a production environment that handles our live traffic. And before we take our changes all the way to that production environment, we're going to roll them out through some intermediate environments. So we're first going to roll out to a dev environment, then a staging environment, and then the production environment. But that's not even the whole story. Within each of those environments, we also roll out incrementally. We do a canary deployment, rolling out to a small subset of the instances in that environment, and if things look good, then we'll do our main deployment that rolls out to the rest. So this whole incremental change control process really mitigates a lot of risk for us and reduces the chance that a bad change is going to make it all the way out to our whole deployment in production. The next principle I want to talk about is automation. That whole incremental workflow sounds great for risk mitigation, but it sounds pretty terrible if you're going to have to do it all yourself. No one wants to sit there and click a button to deploy to dev canary, babysit it, make sure everything is okay, click a button to deploy to the main deployment in dev, everything looks good, click a button to deploy to staging, click a button to deploy to prod. You'd have to sit there and babysit your whole rollout process; that's probably your whole afternoon, maybe your whole day. We want to be software engineers, not monkeys pushing buttons. So we want as much of that to be automated as possible so that we can focus on real, more important work. What that looks like for us is that between the start of our rollout process and the finish, we want to have as much of the middle automated as possible. We're probably going to need some sort of manual intervention at the beginning, but we really don't want to have to sit and watch that whole process.
We want as much of it to take care of itself as possible. Another benefit that automation gets us is that by minimizing the interface for our engineers to manually interact with the system, we're minimizing the risk of one of our engineers accidentally making a mistake. The fact is, there are some things that computers are better than us at, and one of those is simple repeated processes. You are much more likely to accidentally forget a step in the rollout process than a computer is. The last principle I want to highlight is that we want our change rollout process to be declarative. For those of you who aren't familiar with declarative versus imperative models: in a declarative model, you make a change to a system by specifying a desired end state of the system, whereas in an imperative model, you make a change by specifying a set of steps that you expect to achieve that desired state. And one of the benefits we get from a declarative model is that when we write out our desired system configuration, we can actually write that out as code and check it into version control. That means we can have a versioned history of the state of our system at any point and the configuration we wrote to get to that state. So what that looks like for us in Kubernetes: if we're rolling out our app to Kubernetes, we're gonna need some Kubernetes API objects to make that work. We might need a Deployment to manage pods, we might need a Service object to load balance across those pods, and we may need a Namespace to run our app in. We want all of those Kubernetes API objects to be defined declaratively. You've probably all seen the JSON and YAML files that describe your app's config. What we actually do is write all of those objects in one file that we call our app manifest, and we want this app manifest to be the source of truth for what our app looks like on the cluster at any given time.
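To make that concrete, here's a minimal sketch of what an app manifest bundling those three objects could look like. This is illustrative only; the names, image, and ports are made up, and Box's real manifests are generated from templates as described later:

```json
{
  "apiVersion": "v1",
  "kind": "List",
  "items": [
    {
      "apiVersion": "v1",
      "kind": "Namespace",
      "metadata": { "name": "storage-service" }
    },
    {
      "apiVersion": "apps/v1",
      "kind": "Deployment",
      "metadata": { "name": "storage-service", "namespace": "storage-service" },
      "spec": {
        "replicas": 3,
        "selector": { "matchLabels": { "app": "storage-service" } },
        "template": {
          "metadata": { "labels": { "app": "storage-service" } },
          "spec": {
            "containers": [
              {
                "name": "storage-service",
                "image": "registry.example.com/storage-service:v1",
                "ports": [ { "containerPort": 8080 } ]
              }
            ]
          }
        }
      }
    },
    {
      "apiVersion": "v1",
      "kind": "Service",
      "metadata": { "name": "storage-service", "namespace": "storage-service" },
      "spec": {
        "selector": { "app": "storage-service" },
        "ports": [ { "port": 80, "targetPort": 8080 } ]
      }
    }
  ]
}
```

Keeping all three objects in one file is what lets the file act as the single source of truth for the app.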
We're gonna check that app manifest into version control so that, again, we have a versioned state of our cluster at any given time. So in order to make things a little bit more concrete, I'd like to introduce Kevin. Kevin is a Box engineer. This is a real picture of him; he's actually here at KubeCon, and he looks a little different now than he does in this picture. Kevin is a Box engineer on our storage team. In that role, Kevin has written an app called storage-service that basically serves to interface with our content layer. Kevin deploys that app on our Kubernetes clusters, and in this example, Kevin's gonna wanna roll out a change all the way to production. To get a quick look at what our whole change life cycle looks like, I'm gonna show this big diagram that's gonna be a little confusing at first, so I'm gonna kind of breeze through it, then go through each of the different components in depth, and we'll revisit it later. So if this whole first part doesn't make sense, hopefully it'll make sense by the end. As you can see, we have a bunch of different clusters that we're running: a dev cluster, a staging cluster, and two production clusters, east and west. This isn't exactly what our setup looks like, but it serves the purpose. As we said, we want there to be one minimal manual entry point for Kevin to roll out his changes, and that entry point is gonna be his app repo. This is where his source code lives, where his Dockerfile lives, and some other config as well. We really want that to be the point where Kevin makes his change and the rest of the rollout process starts. So when Kevin makes a commit to that app repo and pushes it, it's gonna kick off a Git hook that will trigger the start of a Jenkins pipeline. For those of you who aren't familiar with Jenkins, it's an open source automation tool that makes it really easy to do things like building, testing, and deploying.
Jenkins is really gonna be the driver of the whole rest of our automated rollout process; we're gonna kind of hand over the reins at this point. As we said, we want our changes to be declarative, which means that if we wanna affect storage-service as it's running on our Kubernetes clusters, we're gonna need to modify the actual manifests that describe what that app looks like. In our system, we don't actually have our Jenkins pipeline modify those app manifests directly. We have a layer of indirection here: a templating system. And the way our templating system works is that our Jenkins pipeline is going to inject certain parameters into our templates, and that's gonna generate what the actual final manifests look like. That'll make more sense in a sec when we look at it more in depth, but suffice to say for now that our Jenkins pipeline modifies some parameters in our templating system. Our templates are written in Jsonnet, which is a JSON templating language we find super useful, and we then run the Jsonnet command-line tool to regenerate our actual app manifests. So great, we've made our change all the way to our final declarative spec; we say this is what we want our app to look like on the cluster. The last gap that we have to bridge is: how do we actually reflect those changes in the cluster? We've actually built a service to handle this for us. We call it kube-applier, and we've open sourced it as well. I'll talk a little more about how kube-applier works later, but for now, just know that kube-applier serves the purpose of keeping that app manifest repo in sync with the actual state on the clusters. So let's look at the app repo a little more in depth as the first component. We have storage-service as the root name, and we have our source code. Kevin's gonna have a Dockerfile as well, which describes how to package his app into a container image. And we have a Jenkinsfile.
So a Jenkinsfile is actually a really cool way to specify the steps that make up your Jenkins pipeline, and you can check it in alongside the rest of your code. As we said, there are a lot of different types of changes that Kevin cares about: not just source code changes, not just changes to his Dockerfile. He may wanna change some of his Kubernetes configuration as well; maybe he's running different numbers of replicas in different environments. And as we said, we want this app repo to be sort of a single entry point for that, but our manifests live elsewhere. So how can Kevin make those manifest changes? Well, we've started rolling out a set of parameters that Kevin's able to modify from within his actual app repo. What that looks like is they're gonna be keyed on the environment and the track, either main or canary. So in this example, a little truncated version of the file, you can see that within the dev environment and the canary deployment, we may only be running one replica that only needs four CPUs, but in one of our main production deployments, we may be running 20 replicas that require more CPUs. We want Kevin, again, to be able to make those changes to his app within the app repo and not really have to worry about the rest of the automated system. So as I said, when Kevin makes a change to this app (let's say in this example he's making a source code change), he gets it reviewed by his team, makes a commit, and pushes it to his master branch. That's gonna kick off our Jenkins pipeline. The first part of our Jenkins pipeline is a bunch of steps that I've bundled under pre-Kubernetes. This is the less interesting stuff for this talk, but I'll go through it quickly. We're gonna run some tests and analysis for things like code quality and code coverage, run some unit tests to make sure it does what we expect it to do, and we have some security checks as well. We're gonna build Kevin a new Docker image, since he's made a change to his source code.
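As a rough illustration, that truncated parameters file might look something like this. The field names, the second environment, and all numbers other than the dev-canary and prod-main ones are hypothetical, not Box's actual schema:

```jsonnet
// Hypothetical shape of the per-environment, per-track parameters
// that live in Kevin's app repo.
{
  dev: {
    canary: { replicas: 1, cpus: 4 },
    main: { replicas: 3, cpus: 4 },
  },
  production_east: {
    canary: { replicas: 2, cpus: 8 },
    main: { replicas: 20, cpus: 8 },
  },
}
```

Keying on environment and track this way is what lets the same app repo drive very different footprints in dev versus production.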
We're gonna need to roll out a new actual image of his app; we use Docker. Then we're gonna push that image up to Artifactory, which we use as our container registry, but we really could use anything here. And then, as I said, we're gonna start rolling out to our environments; we now need to start pushing this image change out to our actual clusters. I said we're gonna roll out first to a dev environment, and within that dev environment, we're gonna do a canary deployment first. We're gonna have some automated testing and monitoring of that canary, and if things look good, we're then gonna roll out to the rest of our instances on dev, do some testing and monitoring, and make sure everything looks good. If everything's okay in dev, Jenkins is then gonna start rolling out to staging, and our staging rollout process is gonna be pretty much the same as in our dev environment: we're gonna deploy to canary before we deploy to main. Finally, if everything looks good in staging, we're ready to roll out to production. But some of you astute audience members may have noticed that the title of this talk is continuous delivery, not continuous deployment. Different people have different definitions, but for us, in a continuous deployment system, all changes are automatically rolled all the way out to production, whereas in a continuous delivery system, which is what we're using, all changes are candidates to be rolled out to production but aren't necessarily rolled out; they require some sort of manual approval. Continuous deployment requires you to have a guarantee with your service owners that their code is ready to be rolled all the way out to production at any point. We don't currently have that guarantee with our service owners, though we'd like to get there. But for now, we require some manual approval to roll out to production.
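A hedged sketch of what that staged pipeline might look like as a declarative Jenkinsfile. The stage layout mirrors the rollout just described, but `deployToEnv` is a hypothetical stand-in for a shared pipeline-library step, not Box's actual API:

```groovy
// Illustrative stage layout only, not Box's real Jenkinsfile.
// `deployToEnv` stands in for a hypothetical shared-library step.
pipeline {
    agent any
    stages {
        stage('Build and test') { steps { sh 'make test && make image-push' } }
        stage('Dev canary')     { steps { deployToEnv('dev', 'canary') } }
        stage('Dev main')       { steps { deployToEnv('dev', 'main') } }
        stage('Staging canary') { steps { deployToEnv('staging', 'canary') } }
        stage('Staging main')   { steps { deployToEnv('staging', 'main') } }
        // Continuous delivery, not deployment: a human gates production.
        stage('Prod approval')  { steps { input message: 'Roll out to production?' } }
        stage('Prod canary')    { steps { deployToEnv('production', 'canary') } }
        stage('Prod main')      { steps { deployToEnv('production', 'main') } }
    }
}
```

Jenkins's built-in `input` step is one way to implement the manual gate; in Box's case the approval surfaces to the engineer as a Slack notification.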
Right now, the main way we have that implemented for our engineers is that they'll get a Slack notification saying their deployment is ready to roll out to production. They can just click approve and then go back to whatever they were doing, and Jenkins is going to handle rolling out to prod. And again, our prod rollout process looks very similar to dev and staging. So this whole process that makes up the Jenkins pipeline, as we said, is defined in a Jenkinsfile that lives in Kevin's app repo. A lot of the pipelines for our different apps tend to look very similar, because we want them all to follow this incremental process. So in order to avoid having everyone duplicate the same exact Jenkinsfile, we've actually written a Jenkins pipeline library internally that basically allows people to reuse common steps, like deploying to a certain environment, as building blocks, so they can get up and running pretty quickly. Now, I very much glossed over the actual deployment mechanism. Thus far, I've pretty much just said, okay, we deploy to canary and we deploy to main, so now we're going to look at what that deployment mechanism actually looks like. As I hinted at, it has something to do with templates and manifests, so I'm going to try to make that a little more clear. If we work backwards: we have Kevin's app deployed in a bunch of different environments (dev, staging, production), and there's going to be some configuration that differs across those environments. For example, as I showed earlier, we might have different replica counts in our different environments. That means we're going to need a different app manifest for each of those environments: a dev one, a staging one, a prod one. These are all written in JSON, but a lot of that config is going to be shared and duplicated, and we don't want to have Kevin write that same config every time.
So instead, what he's going to do is write a template that's able to centralize a lot of that shared config, and we're able to inject parameters into that template. As you saw earlier, I had that set of parameters, like the replicas and CPUs. The parameters here are going to be a superset of the ones that exist in Kevin's own repo; these are going to be pretty much everything that differs across environments, because there may be some things that he can't control from his repo that we want Jenkins to be able to modify, as I'll explain in a sec when I get back to Jenkins. But for now: we write our app template, we inject some parameters, and that generates the actual manifests. We also have a lot of config that's shared not only across environments within an app, but across our different apps. Many of our apps run SmartStack as a sidecar for service discovery, and that sidecar config looks very similar for a lot of our apps. So we've actually moved that out into a shared library, so our templates are able to reference those shared libraries and our service engineers don't have to write that same code every time. That whole template system kind of makes up the left half of this setup. What our Jenkins pipeline is gonna do, say for example we're rolling out a new image tag to the dev environment on the canary track: the Jenkins pipeline is going to write the new image tag into our parameters for that environment, and then Jenkins is going to run our Jsonnet command, taking the template and the parameters as input, and regenerate all of our actual manifest files. So now that the injected parameter has been changed, our dev app.json manifest will have the new image tag. To make all this work, we actually put all of this under one big repo.
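As a minimal sketch of that injection step, assuming parameters are passed as a top-level argument (Box's real templates and parameter plumbing are more involved than this):

```jsonnet
// app.jsonnet -- hypothetical template; the varying values all come
// from the injected params object.
function(params) {
  apiVersion: 'apps/v1',
  kind: 'Deployment',
  metadata: { name: 'storage-service' },
  spec: {
    replicas: params.replicas,
    selector: { matchLabels: { app: 'storage-service' } },
    template: {
      metadata: { labels: { app: 'storage-service' } },
      spec: {
        containers: [{
          name: 'storage-service',
          image: 'registry.example.com/storage-service:' + params.imageTag,
          resources: { requests: { cpu: params.cpus } },
        }],
      },
    },
  },
}
```

A pipeline could then regenerate one environment's manifest with something like `jsonnet --tla-code params='{replicas: 1, cpus: 4, imageTag: "abc123"}' app.jsonnet > release/dev/app.json` (the paths and values here are illustrative).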
We have one big repo called deployment config that's divided into three main subsections. We have an apps section, which is where our templates live, pretty much the left side of this. Under the storage-service section, we have our app.jsonnet, which is our template, and then we have the parameters that get injected. We have a library section that has things like the shared sidecar code; as you can see, we have a smartstack.libsonnet template. The .libsonnet extension is just a convention within Jsonnet for shared library code that's only imported; it doesn't really make much of a difference. And then finally, we have our release subdirectory, and this release subdirectory is what has all the actual app manifests that are generated. We don't wanna modify those manually; we just want them to be regenerated based on whatever is specified in the templates and the parameters injected into those templates. So this release directory really serves as the whole state of our system, and we want it to reflect what's on the cluster at any given time. One thing we get from having all of these in one repo is that when we make a change, we can see both the change that was made to the parameters on the templating side and the change it actually resulted in to the cluster state. If you look at the bottom, at the files that were modified here (I hope people can kind of see that), we've modified the parameters file under the storage-service section of apps, so we've modified a parameter that gets injected, and that's also resulted in our app manifest being modified under release. So our commit history in this repo is gonna have the changes that were actually written by Jenkins and how they actually affected the cluster. And we also have some links to the source code change and the Jenkins build associated with this change.
So now that we've modified our manifests, again, we want those to reflect the actual state on the cluster. How does that work? Well, as I said, we've written a service called kube-applier to help us do that. kube-applier runs on our cluster just like any other service. As you can see here, we have an API server, we have kube-applier, we have our storage-service, and I've also thrown in a couple of other example services that we run, upload-proxy and download-proxy. Each of these services we wanna have defined by a declarative app manifest that lives under deployment config. And the way kube-applier works is that it's going to continuously poll that repo for changes. If kube-applier sees that there has been a change to one of the files, for example storage-service now has a new image tag, kube-applier is going to run a kubectl apply command against the API server. For those of you who aren't familiar with kubectl apply, it's basically the single declarative operation that says, here's what I want my objects in this file to look like, and the API server will handle making it a reality. So if we've updated our image tag, the API server will handle rolling out the actual new version of storage-service with the new image tag. I mentioned that kube-applier is continuously polling for changes, but we also have it running on a loop, so that even if nothing changes and there have been no commits to deployment config, kube-applier does a full run through all of the app manifests under the repo every X minutes; we do something like 10 or 20 depending on the environment. Essentially, what that gives us is a guarantee that whatever's actually in the repo is the actual state of the cluster at any given time. So if someone goes in and, for example, manually modifies one of their deployments in the cluster, they might use the kubectl scale command to change the number of replicas to be different from what's defined in their app manifest.
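kube-applier itself is a Go service, but the full-run idea can be sketched in a few lines of Python. This is a simplified, hypothetical illustration of the behavior just described, not the project's actual code; the key property it relies on is that `kubectl apply` is idempotent, so applying every manifest on every run is safe and also reverts manual drift:

```python
import subprocess
from pathlib import Path

def full_run(release_dir, apply_cmd=("kubectl", "apply", "-f")):
    """Apply every manifest under this cluster's release directory,
    whether or not anything changed, and return the files applied."""
    applied = []
    for manifest in sorted(Path(release_dir).glob("*.json")):
        # kubectl apply is declarative and idempotent: unchanged
        # manifests are a no-op, drifted objects get reconciled.
        subprocess.run([*apply_cmd, str(manifest)], check=True)
        applied.append(manifest.name)
    return applied
```

In the real service this would run both after each detected Git change and on a timer (every 10 or 20 minutes, per the talk), which is what turns the repo into a guarantee rather than a suggestion.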
We always want the app manifest to be the source of truth. So kube-applier, even though no changes have been made, will periodically run through that whole directory, overwrite the manual change that was made, and re-synchronize the state of the system with the repo. Again, this gives us a better guarantee that what's in the repo is a real version history of the state of our system. kube-applier is also an open source project; it's available on GitHub, and I'll have some links at the end to that if other people might find it useful as well. It really allows us to abstract away this process: we get to use the repo as our real interface for interacting with the cluster and not have to worry about manually running those kubectl apply commands ourselves. Some of you (I'll get to questions at the end, and I'm happy to talk more after) might be more familiar with push-style models, where when you make a change, your CI just runs the kubectl apply command itself. As I said, one of the things we like about our model is that it gives us a stronger guarantee that whatever's actually in the repo is whatever's actually in the cluster at a given time, because we're continuously looping through and ensuring it's synchronized even if no changes have been made. So to zoom out a little bit: each of our clusters is gonna be running its own instance of kube-applier, and each of those instances is gonna be looking at a specific subdirectory within our release directory that has, again, all of the app manifests for that specific environment.
So one of the really cool things this gets us is that if, for example, we have an outage in one of our whole clusters, say one of our production clusters goes down, we have a versioned history of the entire state of the system at any point. So we can easily spin up another cluster to replace it, run a new instance of kube-applier, point it at the same directory we were using before, and spin up all of our apps exactly as they existed before. When you don't have a centralized store of your state, say each app only rolls out with its own pipeline and applies its own manifests, and your cluster goes down, it can be hard to make sure you kick off all of the pipelines to bring up the instances exactly as they were. So again, this gives us a pretty good disaster recovery model for that sort of thing. So to zoom back out to what our change life cycle looks like, hopefully it'll make a little more sense now. We have our app repo, where we wanna make our changes. It's gonna kick off a Jenkins pipeline that handles rolling out to our different environments. We roll out to those different environments by modifying parameters injected into our templates, then using our templates to regenerate the actual app manifests that describe what we want our cluster to look like, and then kube-applier is going to handle making those app manifests a reality in the cluster. So as a challenge to this model, I wanna introduce Joe. Joe is also here at KubeCon. Joe is a Box engineer on our service discovery team. As I mentioned, a lot of our apps run SmartStack as a sidecar, and Joe actually deploys that sidecar shared library. Suppose Joe has to make a change to all of our apps running in production; say there's a vulnerability in some version of HAProxy or something, and he needs to roll that out within our SmartStack shared library everywhere at once.
So if each of our apps has its own pipeline that it uses to deploy to these different environments, how can Joe make sure he can roll out a sidecar change in the shared library to all of those apps at once? In our current system, as I said, we have everything centralized under that deployment config repo, so we still do have an entry point for Joe: he can make his change in that centralized shared library that everyone pulls from, and then Joe can have his own pipeline that regenerates everyone else's JSON files, regenerating their app manifests to pick up his change, and he can roll out all their apps for them. We don't really love that system, because it kind of takes the control away from the service owners themselves. If they're expecting that all changes to their service are rolled out through their pipeline, suddenly we're deploying their service without their knowledge, and they might have no record of it; the last time their service was actually deployed is now different from whatever they have in their pipeline history. What we'd really like is for our app owners' own pipelines to carry the change: when we need to make a change, we kick off all of their pipelines and roll the change out through each of them. That's gonna require us to be doing continuous deployment. We want Joe to make a change to that shared library and then have a big red button where he can kick off everyone's pipelines all at once so that they pick up the change and have their own history of that deployment in their own pipeline. So that's one reason we'd like to get to continuous deployment: so that we can roll out changes to all the apps all the way to production at once. So to sum up some of the benefits we've seen from this change control process: we have a greatly reduced risk of production outages.
Because we're rolling out things incrementally, there's a much lower risk that something bad is gonna make its way all the way out to our main production deployment and cause some real problems. Because we have a versioned history of the state of our system at any point, we have a really good audit trail for when we need to debug things or roll back, or when someone makes a change that ends up affecting the whole system; we can roll back our entire state to whatever it was before then. We've increased our engineering productivity, because our engineers can focus on more important problems than just sitting and babysitting their deployments all day waiting for them to roll out. If we do require manual intervention, we surface it in an easy way where you can quickly get back to doing whatever you were doing. Lastly, our engineers have greater confidence in making changes. Changes can be scary, but if we give engineers a process that makes them feel confident in deploying their code, they can feel innovative and feel like they're able to make changes and build some cool things. So that's all I've got. I've got a couple of links here if you wanna read more of the blog posts we've written on this topic. And we're also gonna be working on some more content, blog posts and an open source repo as well, to make a more practical example of what this looks like outside of Box. With that, I'm happy to open it up to any questions people have. Yep. So the question was about services with state running in Kubernetes. We haven't come across something like that; we aren't running our databases in Kubernetes yet. Hopefully we will. Got it, so you're saying a change to your schema somewhere else ends up affecting your Kubernetes config, and you've got to coordinate both at the same time. Yep. I don't have a great example for that specific case, but there are some similar cases that we have tried to solve.
So for example, say you're changing an environment variable that's used by your service in your source code. You're gonna need to make that change both in your source code, and you're also gonna need to change the environment variable that's passed in through the Kubernetes config. That's one of the reasons we're trying to bring those parameters into the app repo: so that we can actually do an atomic commit that changes both the source code and the environment variable that's passed in. As far as a change to a database schema that we don't really have any insight into from our Kubernetes config, that one's pretty tough. I don't think we have a great answer for that in our current framework, but I'm happy to talk more about it after. You had your hand up earlier, so. So if someone makes a manual change to one of our services running in our cluster: we do have some access control so that we can limit what people can change, but if someone does make a change to something they do have access to, we do have some auditing in place, as far as logs of the kubectl commands that are being run in the cluster. Again, for the most part, we'd like things to be rolled out through our declarative process, but we do have some auditing in place. You can't catch everything, you can't stop everything; the best you can do sometimes is have a record of what was done. For the SmartStack sidecar specifically, we do use config maps; I was saving that for the end. Yep, we do have config maps, and we do have secrets. Secrets are kind of a tough one to roll out with this process, in that you don't really want to specify secrets declaratively in a file exactly as they're expected to look. We were doing some work on an encrypted-secrets project that we kind of lost track of, but hopefully we can get back on board with it and maybe open source it. Essentially, we want you to be able to declaratively specify your secrets in a way that's safe.
For now, our secrets are largely configured manually, but we do have config maps as well that we're using. Does that answer your question? So SmartStack we use as our service discovery mechanism, for which services need to talk to which other services. We use some config maps to pass in the information SmartStack needs, but as I said, while there's some code that's shared, each app's SmartStack config differs somewhat. Our SmartStack shared library isn't a drop-in function that does everything for you; you're going to need to pass in some information about which other services you need to talk to. It just abstracts away some of the shared code. I don't know if that answers your question, but I'm happy to talk more afterwards. Yep. So the question was, can Kevin or Joe modify their Jenkinsfile so that they can deploy directly to production? We do have some restrictions within our Jenkins infrastructure on what you can do, and we don't want you to have a Jenkinsfile that can just arbitrarily run whatever steps it wants. That's one of the reasons we really focus on having that pipeline library that gives people building blocks they can use. I don't know if you're talking about a malicious case or an accidental case, but again, we want to limit the accidental cases where people are doing something they shouldn't be doing. We want to make it easy for them to just follow the process we've laid out. As far as what the alternative model you're thinking of would look like, I don't really have a great answer; I don't think we've leaned strongly one way or the other. I can try to find out more if you're curious. Yeah, sorry.
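The "building blocks" idea behind that pipeline library can be sketched as follows. This is not Box's actual Jenkins library (which would be Groovy); it's a hypothetical Python model of the design choice: pipelines compose vetted steps by name, so a service's pipeline file can't run arbitrary commands or skip straight to production.

```python
# Hypothetical registry of vetted pipeline steps. Each step receives a
# shared context dict and records what it did.
ALLOWED_STEPS = {
    "build_image": lambda ctx: ctx["log"].append("built image"),
    "deploy_canary": lambda ctx: ctx["log"].append("deployed canary"),
    "deploy_prod": lambda ctx: ctx["log"].append("deployed prod"),
}


def run_pipeline(step_names):
    """Run the named steps in order; reject anything not in the library."""
    ctx = {"log": []}
    for name in step_names:
        if name not in ALLOWED_STEPS:
            raise ValueError(f"unknown pipeline step: {name}")
        ALLOWED_STEPS[name](ctx)
    return ctx["log"]
```

The enforcement point is the registry: an "arbitrary shell command" step simply isn't in it, so accidental (and some malicious) bypasses fail loudly rather than silently deploying.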
So the question there was, why do we have our multiple environments specified as distinct subdirectories within a file system, versus a branching model where we have a dev branch and a staging branch. I don't have a great answer for that; I was not part of that historical choice. Yep. So our access model, I did very much gloss over. We implemented it a while back, when there wasn't a lot of tooling available; we've been running Kubernetes for about three years now, since around 0.11. So some of these choices are historical, based on the way Kubernetes was in its early states, and we haven't totally caught up in some ways. Our access model right now is that we deploy each of our apps to its own namespace, where we're able to control which groups have access to which namespaces. As far as compliance, we try to match the groups that have access to certain namespaces with an LDAP system that we have, so that we have a production auditing system for who has access to what. And in order to make changes to access control, we need to bounce the API server, so that's something that requires the attention of our admins; our admins are aware of every permission change that flows through. Yep. So the question was, are we seeing the need to move our app manifests into the service repos? This has been an ongoing battle for us. We do like the idea of having our service owners manage their configuration alongside the rest of their code. It does get a little dicey for some of the shared libraries and the things that we want to be able to generate from a centralized place. We want our SmartStack developers to have some kind of centralized entry point that everyone pulls from.
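That namespace-per-app access model boils down to a simple check. The sketch below is hypothetical (the naming convention and ACL shape are illustrative, not Box's implementation): each app maps to its own namespace, and deploy rights are granted by intersecting the user's LDAP groups with the groups allowed in that namespace.

```python
def namespace_for(app):
    """Hypothetical convention: one namespace per app."""
    return f"app-{app}"


def can_deploy(user_groups, app, acl):
    """acl maps a namespace to the set of LDAP groups allowed to
    change it; deployment is allowed if the user is in any of them."""
    return bool(set(user_groups) & acl.get(namespace_for(app), set()))
```

Keeping the ACL keyed by namespace, rather than by individual resources, is what makes the audit question ("who has access to what?") answerable with a single lookup.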
So that's why we've started to pull some of those parameters out into the app repo without moving the whole canonical source-of-truth manifest. Another thing we get from having all of those app manifests in one place is a single source of truth for what the cluster looks like. If those are dispersed through all our different app repos across our system, we don't really have a clear picture of what each app is running at any point. So it's possible we might be able to move the whole app manifest out to someone's repo; we'd still probably keep a copy of everything somewhere centralized and have that be the canonical source of truth. It's an ongoing debate for us, and a pretty interesting one that I didn't get to talk too much about here. But yeah, again, we want to give our service owners control over those config changes, but for now we also like having that config all in one centralized place. Yep. The question was how we address potential differences between the state the kube-applier sees and other components that can change properties of deployments, like a horizontal pod autoscaler. Yeah, so autoscaling is a good one that we haven't had to deal with too much yet. Again, for the most part we want things to be declarative when we can, and we haven't really solved the autoscaling problem; that's going to be a tough one. For the most part we want to avoid people making manual changes when they can, but there are some things we're going to have to solve, and there may even be some deficiencies in the Kubernetes API when it comes to supporting both this declarative workflow and changes from things like autoscaling. So we're going to keep following the community efforts on that sort of thing and hopefully come up with a really good solution when we come across that problem.
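The centralized source-of-truth model is essentially a loop that walks one manifest repo and applies everything in it, in the style of the kube-applier. Here's a minimal dry-run sketch under an assumed layout of one subdirectory per namespace; the function only plans the `kubectl apply` invocations rather than executing them.

```python
from pathlib import Path


def plan_applies(manifest_root):
    """Walk a centralized manifest repo (assumed layout: one
    subdirectory per namespace, each holding *.yaml manifests) and
    return the kubectl commands a kube-applier-style loop would run,
    as argv lists suitable for subprocess.run. Nothing is executed."""
    cmds = []
    for path in sorted(Path(manifest_root).rglob("*.yaml")):
        namespace = path.parent.name
        cmds.append(["kubectl", "apply", "-n", namespace, "-f", str(path)])
    return cmds
```

Because the whole cluster state is derived from one directory tree, answering "what is app X running right now?" is a file read, and rolling the cluster back is a git revert followed by the next apply pass.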
Sorry, I wish I had time for more questions, but I'll be right outside if anyone wants to talk more. Thanks again, everyone, for coming. I hope you learned something, and have a great rest of the conference.