I'm Greg, this is Wes, we're from Google. We're going to talk about container patching today. I actually really like the gum wall, but I wanted a Seattle-themed talk, so that's why there's some gum wall in this. It is gross, though.

If we think about container patching in a very simplistic view, you're usually trying to hit some sort of target, so I put FedRAMP targets on here. You may not be subject to those. You might have PCI, no regulation at all, or just some vague notion of how quickly you want to patch things. But the way compliance thinks about patching deadlines is: a scanner detects a thing, you go patch it, and production needs to be done within whatever that time window is.

So let's start with a little trip down empathy lane. Imagine you're the security person at an organization who's in charge of patching these production clusters, or maybe more likely you're the developer who's been told you're now the security person who's patching these clusters. Every two weeks you open up your dashboard and look at what's in there, and you've already lost two weeks on your timer, because two weeks of vulnerabilities have been accumulating while you weren't looking.

There are a couple of things in there. There's a web front-end container that's missing a couple of critical patches, so you go, hey, web team, can you please patch this container for me? And of course they ignore you for a week, and you're like, hey, what's up, web team? I'd like to get this patched so my dashboard is green, please. And they come back and say, oh, actually, this isn't really our code, this looks like it might be the Django base container. So you're like, okay, cool, I know who maintains Django in our org, I'll go talk to them. Hey, Django container team, it looks like you're missing these critical patches, can you patch the container please? And they go away and look at it, and they're like, oh, but this is Perl, we don't even use Perl. Do we really need to patch this thing? And you're like, yeah, look, I just want my dashboard to be green. So yes, but even better, just get rid of it so we never have to have this conversation again. By this point you're out of your SLO, so things are not happy. Eventually they come back and say, well, we couldn't figure out how to remove it, so we patched it; I guess we'll have this conversation again.

So now that's patched, and you go back to the web team: hey, web team, please rebuild with this new Django container. And they're like, okay, yeah, we did it. Good. But then you look at the dashboard and it's still the old version. You go back to the web team, and they're like, oh yeah, we forgot to update the Kubernetes manifest. We patched the Docker container, but we didn't update the manifest that points at the container; now it's really done. And you go look at it, and no, it's still not done. And also now there are more high vulnerabilities — we had these criticals we were trying to patch, but more vulnerabilities keep accumulating. And they're like, oh yeah, it had to soak in QA, so we committed the manifest change to the QA branch but didn't put it in the prod branch. And so now it's done. And you go look at it, and okay, it's fixed.
And then a little voice in the back of your head says, well, I think we have other things that run Django — but that's not your problem, so you just move on with your day.

So why is this gross? There's a human at every step: a human has to notice a thing, a human has to go tell another human to do a thing. We don't know which layer we're patching. We don't have any inventory of containers. We're patching code that we're not using. And vulnerabilities are accumulating faster than we can pay them down. The whole thing is slow, incomplete, and it's not going to scale. Who thinks the majority of the industry is doing better than this today? No takers, no takers. All right, that's what I expected. The next slide says: Slim.AI ran a survey, and 88% of their respondents said it's challenging to ensure containerized applications are free from vulnerabilities. And it might be because of stories like the one I just told.

So we're going to talk a little about what we did for GKE containers. We'll talk about the enforcement points we use, other options you could use, and these four things — prevent, detect, fix, monitor — which is our way of thinking about this problem. In terms of what you can take away from this talk, we're not really talking about vendor containers or managed service provider containers: for GKE itself, we're just going to patch our containers for you, you don't need to worry about that. Take this as a case study for where you own containers, or you own Kubernetes manifests you need to update to point at rebuilt containers. That's where this applies.

What do we know? We've been patching a few thousand containers across GKE and Anthos and some other products. But we have a reasonable number of advantages — our environment helps quite a bit. We mostly use Go. We use a small number of container repositories in a small number of registries. We have mandatory base images, and we have a fair bit of control over the release process. Not everyone is in this situation, so we'll try to give examples of what we did, but also other ways to solve the same problems if you don't have these things.

Here's a really high-level view of a container release pipeline. Containers flow through the top, through repos, and end up on a running cluster; Kubernetes manifests and other YAML bits flow through the bottom; they both converge on a cluster, and it all feeds an inventory system. If you're running one of the many scanners out there in the showcase today, you'll have some runtime detection, and that will probably give you some inventory as well. So if you install a thing on a cluster, you've at least got some runtime visibility of what's going on and some inventory, and that's a good start. But to really get a handle on this process, you need to do a bit more. The further left you go, the cheaper it is to catch things early. So there's a whole bunch of other points here where you can do some prevention work, and that's what we're going to talk about next. The problem here is just way too many containers and way too many dependencies — and if you have some target you're trying to hit, it's really hard without reducing the volume somehow.
The standard strategy you've probably heard from multiple people over the years is to do things like this: standardize on your base containers; make those containers as small as possible — the less code you have, the less patching you do; and get rid of unused code wherever you can. If you can separate build time and runtime and keep those separate, that can help a lot. There are really two approaches to getting small containers. Either start small — start on scratch or Distroless or one of these other very minimal images — or install everything you want and then slim it down using something like SlimToolkit that gets rid of the things you're not actively using. The really hard part is just doing this everywhere, consistently, especially if you have a very diverse environment.

What we did was basically standardize on Distroless. As I said, we're mostly Go, so we have a distroless static container that's just enough to run a Go binary — no shell, no package manager, a really minimal little container — and we standardized on that for pretty much everything, with a few exceptions. Another thing we did, which I think is just good practice if you can do it, is to get everything into a single registry, or a small number of repositories in the registry. That helps because you know everything is in one place, and it also has a good availability property: if you're running in GKE, your Google Container Registry or Artifact Registry is a lot closer network-wise than something you have to reach out over the internet for. So there are availability advantages to putting all your containers close to where your production cluster is in terms of network hops.

How did we do that? We wrote some pre-submit checks. Whenever a developer turns up with some YAML, we run a pre-submit check that looks at what image it's pointing at. We check that it's a GCR or Artifact Registry image, and we also check that it was built on this distroless base — you can actually introspect the container and pull out the file that tells you what OS it is, and it will tell you that it's distroless. There are other ways to do this, though; you don't have to do it that way. You could do it at build time by looking at the Dockerfile. You could add checks between building the container and pushing it into the repository. You could do the Kubernetes piece at the packaging or deployment layer. But probably one of the most common ways to do this is with admission control.

Here we have an example of doing those two things I mentioned: making sure all containers come from a particular registry, and that they were built off a particular base image. For the base image, we can use cosign to make a cryptographic attestation that says, hey, this container is a distroless container. That goes into your repo, and then you have an admission controller that asks: do the containers coming in have this distroless attestation? If so, I'll let them run; if not, I won't. And a similar thing using Gatekeeper: you can write a Gatekeeper policy that says all the images in manifests coming into this cluster have to come from GCR.
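Below is a minimal sketch, in Go, of the kind of pre-submit registry check described above: it walks a Kubernetes manifest and rejects any image that isn't hosted under an allowed registry prefix. The allowed prefixes, the single-document YAML handling, and the file layout are illustrative assumptions, not the actual check from the talk.

```go
// presubmit_registry_check.go — sketch of a pre-submit check that walks a
// Kubernetes manifest and rejects images not hosted in an allowed registry.
package main

import (
	"fmt"
	"os"
	"strings"

	"gopkg.in/yaml.v3"
)

// Assumption: example registry prefixes, not the talk's actual allow-list.
var allowedPrefixes = []string{"gcr.io/", "us-docker.pkg.dev/"}

// collectImages walks a parsed YAML node and gathers the value of every
// "image" key it finds, at any nesting depth.
func collectImages(n *yaml.Node, out *[]string) {
	switch n.Kind {
	case yaml.DocumentNode, yaml.SequenceNode:
		for _, c := range n.Content {
			collectImages(c, out)
		}
	case yaml.MappingNode:
		for i := 0; i+1 < len(n.Content); i += 2 {
			k, v := n.Content[i], n.Content[i+1]
			if k.Value == "image" && v.Kind == yaml.ScalarNode {
				*out = append(*out, v.Value)
			}
			collectImages(v, out)
		}
	}
}

func allowed(image string) bool {
	for _, p := range allowedPrefixes {
		if strings.HasPrefix(image, p) {
			return true
		}
	}
	return false
}

func main() {
	data, err := os.ReadFile(os.Args[1])
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(2)
	}
	var root yaml.Node
	if err := yaml.Unmarshal(data, &root); err != nil {
		fmt.Fprintln(os.Stderr, "parse error:", err)
		os.Exit(2)
	}
	var images []string
	collectImages(&root, &images)
	ok := true
	for _, img := range images {
		if !allowed(img) {
			fmt.Printf("DENY: %s is not in an allowed registry\n", img)
			ok = false
		}
	}
	if !ok {
		os.Exit(1) // non-zero exit fails the pre-submit
	}
	fmt.Println("all images come from allowed registries")
}
```

Wired into a pre-submit hook, a non-zero exit blocks the YAML change before it ever reaches a cluster; the same walk could be extended to also verify the base-image attestation.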
So I'm just going to demo that quickly. We've got those two admission controllers running, pointed at this enforcing namespace, and we're going to try to run a container that's not a GCR container. This is busybox getting pulled from Docker Hub, so it should get denied by the Gatekeeper admission controller. And you can see here, the Gatekeeper validating admission controller has a policy called repo-is-GCR, and it didn't let it in, because index.docker.io is not one of the GCR repos we specified. Okay, so we'll do the same thing again, and this time the image is in GCR. It's the same busybox container, but now it's in GCR, so it passes Gatekeeper — but it still fails the Sigstore check. We're running the Sigstore policy controller here, and it's looking for a distroless attestation; because this container doesn't have that attestation, it doesn't get in. So one more time: this time we run a container that is distroless and is attested as distroless using cosign, and that one will go through.

Okay, just quickly recapping. It's really good if you can look at what enforcement points you have — we like the pre-submit enforcement point, but you might have others. If you can standardize on base containers, it helps a lot. And there are good security and availability reasons to get down to a smaller number of registries for inventory. If you don't do any of these things — and they're all optional — then you just have to be really good at the next things we're going to talk about. So I'll hand over to Wes to talk about detection.

Cool, thank you, Greg. All right, let's get into it. The next part of our strategy is detection. Your vulnerability management program is only as good as your detection methods: if you're not detecting vulnerabilities, it's unlikely you're going to patch them. So what are the problems with detection? First: which containers do we need to scan? You might have a lot of containers, but not all of them are used — some are in dev or test, or just old and outdated. Next: which scanner should you use? This decision might already have been made by your organization — you might already have a scanner — but you should be aware of the differences between scanners. They have different feature sets and different coverage, mostly owing to different vulnerability sources and techniques, and they may handle duplicates or filtering differently. Just be aware of these things. And last, you need to know which layer has the vulnerability: is it actually in this container image, or is it in some base image that I need to go get fixed?

Our solution to the "which containers" problem is two-fold. On the bottom here, any time a developer checks in a new image, we have a pre-submit check, similar to the pre-submit check we mentioned earlier, that scans it and ensures it's fully patched. When a developer is making a change is the easiest time to actually patch, and this makes sure anything that is staged to go to production will be free of vulnerabilities. Second, we have this universe of all our container images, and by scanning through source we can determine which of those are used, or will be used, in production — and those are scanned continuously.
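Here is a minimal sketch of that second half — deriving the "what will actually run in production" scan list from the manifests checked into source, so continuous scanning covers only those images. The repo layout and the simple line-based `image:` matching are assumptions for illustration, not the talk's actual tooling.

```go
// scan_inventory.go — sketch of building a deduplicated scan list from the
// image references found in a repo of Kubernetes manifests.
package main

import (
	"bufio"
	"fmt"
	"io/fs"
	"os"
	"path/filepath"
	"regexp"
	"sort"
	"strings"
)

// Matches lines like `image: gcr.io/foo/bar:tag` or `- image: "..."`.
var imageLine = regexp.MustCompile(`^\s*(?:-\s+)?image:\s*["']?([^"'\s]+)`)

func main() {
	root := "." // assumption: run from the manifest repo root
	seen := map[string]bool{}
	err := filepath.WalkDir(root, func(path string, d fs.DirEntry, err error) error {
		if err != nil {
			return err
		}
		if d.IsDir() || !strings.HasSuffix(path, ".yaml") {
			return nil
		}
		f, err := os.Open(path)
		if err != nil {
			return err
		}
		defer f.Close()
		sc := bufio.NewScanner(f)
		for sc.Scan() {
			if m := imageLine.FindStringSubmatch(sc.Text()); m != nil {
				seen[m[1]] = true
			}
		}
		return sc.Err()
	})
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	var images []string
	for img := range seen {
		images = append(images, img)
	}
	sort.Strings(images)
	for _, img := range images {
		fmt.Println(img) // feed this list to whatever scanner(s) you run
	}
}
```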
As an alternative: you have all of your images in some registry, but you probably don't want to scan all of them. So what a lot of folks do is run something on their hosts — a DaemonSet, an agent of some sort, maybe a scanning tool — that collects your production inventory and scans those container images. Like we said, that's a good start; the disadvantage is that you're only catching vulnerabilities that have already made it to production and are already running.

Next is the question of which scanner to use. A lot of container image scanners today use the National Vulnerability Database (NVD) feed as well as a bunch of OS vendor feeds. That gives you the OS system packages, but there are some new developments. First, a lot of scanners are beginning to support scanning the actual application code — your Go or your Python within the container image. Next, we've heard a lot about SBOMs the last couple of days, and as they become more ubiquitous, scanners are beginning to consume them, and then optionally use VEX documents to help augment that. And I mentioned NVD and the OS vendor feeds, but there are a number of other sources vulnerabilities come from: open source databases, GitHub advisories, those kinds of things.

Some more features: some scanners have this idea of base image detection — either from metadata in the image, or by being pointed at the Dockerfile, they can say which image this one is based on. That may be useful to you; it's something to explore depending on your environment. Next, and I think a big one, is reachability analysis. Some tools, usually pointed at source code, try to determine which methods you're actually using, so rather than returning a huge list of CVEs they return only the ones in code you're using. You can also use some information in the binary to do that, and we'll get into that a little later. And there are additional scan types outside of vulnerability scanning — things like CIS benchmarks and so on.

So we did a bit of an experiment. We built a Go binary with a vulnerable module — I've listed it up here — at a vulnerable version. It's built with an old Go toolchain, 1.18, and it's on an old base image from 2021, so it had a bunch of vulnerabilities. We scanned that with two commercial scanners and two open source scanners, and these are the results, both fixed and unfixed. So then the question becomes: which one is correct? There's not really a right or wrong answer. Almost all of them detected the OS package vulnerabilities; some detected Go module vulnerabilities; some did Go toolchain vulnerabilities. But even within those results they differ on what they choose to hide and what they choose to surface. The key point here is that we want to increase coverage — increase the number of things we scan — but decrease false positives, and there's a tension between the two.

So when it comes to the question of which scanner, our answer is: more than one. If you're publishing images for others to consume, you have to assume they're going to be scanned by every scanner in existence — and we've found that to be true. So we run multiple scanners and compare results, and that gives us a couple of advantages. First, it shows us if there are any false positives, coverage gaps, or new features coming online.
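A minimal sketch of that multi-scanner comparison, assuming each scanner's report has already been reduced to a plain text file of CVE IDs, one per line — real reports are JSON and every scanner's schema differs:

```go
// compare_scanners.go — sketch of diffing two scanners' findings for the
// same image to surface coverage gaps and possible false positives.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func readIDs(path string) (map[string]bool, error) {
	f, err := os.Open(path)
	if err != nil {
		return nil, err
	}
	defer f.Close()
	ids := map[string]bool{}
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		if id := strings.TrimSpace(sc.Text()); id != "" {
			ids[id] = true
		}
	}
	return ids, sc.Err()
}

func main() {
	a, err := readIDs(os.Args[1])
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	b, err := readIDs(os.Args[2])
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// Findings unique to one scanner are either a coverage gap in the other
	// tool or a false positive in this one — either way, worth triaging.
	for id := range a {
		if !b[id] {
			fmt.Println("only in scanner A:", id)
		}
	}
	for id := range b {
		if !a[id] {
			fmt.Println("only in scanner B:", id)
		}
	}
}
```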
And second, it gives us an idea of what our customers actually see. If they come to us with a scan result from some random scanner, we've had some advance notice and we know how to interpret those results. We take all these results, compare them, and that feeds back into our own internal scanner and hopefully improves things over time.

The next big problem in detection is noise, and we've broadly broken this up into two buckets: things under the control of the user, and things under the control of the scanner. In the user-controlled bucket: often we're scanning code that isn't even used — like the Perl vulnerability example at the start — and that's just something we should rip out. Also, a lot of the time CVEs are published with no fix available; depending on your environment and your threat model, maybe you don't care about those, since there's nothing actionable yet. In the second bucket we've got a bunch of different classes of CVEs: a lot of these are things that probably won't be patched or have no fix, sometimes there are discrepancies or arguments about what the score should actually be, and sometimes they just aren't applicable to your environment at all. These are areas where we think the scanners can improve.

Next, the application layer, and Go in particular. We use primarily Go throughout GKE, and as I mentioned, a lot of scanners are beginning to turn up these module or toolchain vulnerabilities. A problem we often see is that they return all of the vulnerabilities in a module, or all of the vulnerabilities in that minor version of the Go toolchain. So there's a tool released by the Go security team, govulncheck — a fairly new, experimental tool and library — that tries to help with this problem. There's a link here, and we'll go into a demo of that.

All right, so here I've got a small, sort of terrible Go program with one single module imported, and you can see I'm calling this NewServerConn method in particular. I go ahead and build that, and I've packaged it up, pushed it to Artifact Registry, and run Container Analysis on it to see what results that scanner gives us. Here it is: you can see there are a whole bunch of Go results, and this is great — this is what we want scanners to start doing. But how many of these are actually reachable? That's the question. So we run govulncheck on that same binary. What it does is look at the symbol table in the binary to determine which methods are used, and we've gone from that big list down to just two findings — and you'll notice it even calls out that NewServerConn method, the one that I called. So this is pretty cool, and it's something you could incorporate in your CI/CD pipeline; maybe you could leverage the library, and hopefully this is the kind of thing scanners will start to incorporate.

So to tie it back: when it comes to detection, look into the new features in your scanner and be aware of new advances in coverage generally. When there's a lot of noise, look to your scanner vendor for help, either in filtering or by raising pull requests and bug reports to them. And where it's appropriate to your threat model, use rules to silence or ignore the things that aren't relevant — things you have no intention of fixing.
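Here's a small program in the spirit of that demo, and how you'd point govulncheck at it. The module and method used here (golang.org/x/crypto/ssh and NewServerConn) are assumptions for illustration — the slide's actual module isn't visible in the transcript.

```go
// reachable.go — a tiny program that imports one module and calls a single
// method from it, so a call-graph-aware tool like govulncheck can report
// which advisories are actually reachable rather than everything filed
// against the module or toolchain.
package main

import (
	"log"
	"net"

	"golang.org/x/crypto/ssh"
)

func main() {
	ln, err := net.Listen("tcp", "127.0.0.1:2222")
	if err != nil {
		log.Fatal(err)
	}
	defer ln.Close()

	conn, err := ln.Accept()
	if err != nil {
		log.Fatal(err)
	}
	// The call that makes ssh.NewServerConn reachable from main. With no host
	// key configured this returns an error at runtime, which is fine for the
	// purpose of a reachability demo.
	_, _, _, err = ssh.NewServerConn(conn, &ssh.ServerConfig{NoClientAuth: true})
	log.Println("handshake result:", err)
}
```

Running `govulncheck ./...` over this source reports only the advisories whose vulnerable functions are reachable from main; as the talk notes, the tool can also work from a built binary's symbol table.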
The next part of our strategy is fixing, or remediation — and this is really the meat of it. Once you've discovered a vulnerability and you know you're vulnerable, how do you go about fixing it and deploying that fix to your production fleet? What's the problem? Really, it's just a complex process. This is an example flow of what it might look like: a scanner detects a vulnerability in some image, then you have to determine whether it's actually in this image or in some parent, some base image. Then you have to find the owner — hopefully you know the owner — and create a bug or a PR for them, and once that's all complete, you've got to do this process again for anything that depends on that image.

For our first solution, originally we tried to determine the parent-child relationships between images and their base images, but more recently we've taken a different tack. As we mentioned earlier, we have a finite, limited set of base images — represented here on the left — and when they're built, they get new tags; the oldest tag is on the left and the newest is on the right. What we do is have automation that scans the latest of these continuously. Say we find a vulnerability in the base image in the middle here: we automatically build a new one, create a new tag, and scan the latest again, and we just do this forever. That ensures that when you need to patch your container, you always have a patched base image available — there's no question of "do I need to go find the base image, patch the base image, and do this whole dance."

Next is the question of ownership. Again, we've got a pre-submit: whenever a developer introduces a new image, this check looks at whether there's an owner defined for the image. That ensures there's never any question of who owns this thing — who should be on the hook for doing the actual patching.

So taking that earlier example and pulling out those two pieces, this might be the simplified process: your scanner detects a CVE in an image; we already know the base image is patched, so if there's a newer one available we just update to that; we know we have an owner to send the bug to, so we do that; and then we wait for a build, and so on until we're done. Something else I'll note is that we aggregate bugs at what we think is the appropriate level of abstraction. Say you find 15 CVEs in your kube-proxy image: it doesn't make sense to file 15 bugs to various teams, or 15 times the number of deployments — hundreds of bugs. We've found that the more bugs you have, the less likely they are to get fixed — or rather, the more likely they are to get ignored. So we create one bug for the image; once that image is patched, that fixes all of them, and that's another way to simplify things.

So to tie it all back: if it's useful to you, track the container parent-child relationships so you know which base image relates to which image. As a complement or an alternative, just automate patching your base images consistently so you know you always have one. It's very useful to have a comprehensive inventory and to track ownership so you know who owns each image. And where possible, use the existing systems you have to track these things — your issue tracker, or whatever you use.
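A minimal sketch of that "one bug per image" aggregation, assuming a simple finding record and a stubbed-out bug filer; the owners map stands in for the checked-in ownership file mentioned later in the Q&A.

```go
// aggregate_bugs.go — sketch of grouping scanner findings by image and
// routing one bug per image to that image's owner, instead of filing one
// bug per CVE. The Finding struct, owners map, and fileBug stub are
// illustrative assumptions.
package main

import "fmt"

// Finding is one scanner result: a CVE found in a specific image.
type Finding struct {
	Image    string
	CVE      string
	Severity string
}

// owners would be loaded from the checked-in ownership file.
var owners = map[string]string{
	"gcr.io/example/kube-proxy": "networking-team", // assumption: example data
}

// fileBug stands in for whatever issue tracker you actually use.
func fileBug(owner, image string, cves []string) {
	fmt.Printf("bug -> %s: patch %s (%d CVEs: %v)\n", owner, image, len(cves), cves)
}

func main() {
	findings := []Finding{ // assumption: example scanner output
		{"gcr.io/example/kube-proxy", "CVE-2023-0001", "HIGH"},
		{"gcr.io/example/kube-proxy", "CVE-2023-0002", "CRITICAL"},
		{"gcr.io/example/kube-proxy", "CVE-2023-0003", "MEDIUM"},
	}

	// Group findings by image so each image gets exactly one bug.
	byImage := map[string][]string{}
	for _, f := range findings {
		byImage[f.Image] = append(byImage[f.Image], f.CVE)
	}
	for image, cves := range byImage {
		owner, ok := owners[image]
		if !ok {
			owner = "unowned-images-triage" // the ownership pre-submit should prevent this
		}
		fileBug(owner, image, cves)
	}
}
```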
The last part of our strategy is monitoring, or visibility, and this cross-cuts all the others — it's integral to the rest and complementary to each of them. As we know, what gets measured gets managed. The problems here: first, there's the question of the holistic view of the fleet — given a CVE, is it patched? And further, has that patch been deployed? Next is the composition question: we know we have a CVE, but which containers actually have that CVE, and where is that container used, and so on up the stack? Next, inevitably bugs or PRs won't get merged or fixed — so what do we do about that? Who is watching, how do we measure those things, and how do we escalate? You need those processes in place. And last, we want to measure all of this: we want to know how we're doing against our timelines, and whether there are bottlenecks or places we could make things better.

A simplified view of our solution is something like this — it may spur some ideas. Our scanner detects a CVE. If it's a new finding, we create a bug, like I described before. If it's an existing bug, we check it against our stated SLO, and if it's nearing the SLO end date we'll say, hey, 50% of your SLO is gone, you really need to take a look at this. And once it gets past the SLO, we have escalation processes. A couple of other things I'll mention that aren't really depicted here: we only scan the images that are relevant — the ones we know are used in production — and when we create bugs, the bug priority is based on the severity, to drive prioritization. Then we've got processes to add comments and nag, and processes for dashboarding and email reporting and that kind of thing. Another optimization might be: if the detection is no longer present, you could have automation close the bug.

The next problem is composition. In GKE, we built a system backed by Spanner that has these relations: we have a GKE version; we know it has one or more applications; those have one or more containers, which have packages with CVEs in them. So when your security engineer comes along and says, hey, are we affected by this CVE?, we can say: yes we are — it's in this package, which is in this container, which is in this application, in this GKE version. We're able to track that with this system.

Next is the all-up visibility. We have a master dashboard that shows the entire status of our fleet, so we can see how we're doing. But then there are some other metrics we track that are super helpful. On the top left — this is all example data — we track images on a number of axes: by the number of CVEs, by team, that sort of thing, so we know which teams are getting the most and which images have the most CVEs. In this example, fake images 101 and 103 have a lot of CVEs; that's somewhere you might look into the prevention methods we talked about at the beginning — moving something to distroless, or slimming it down. On the top right — and I think this is often overlooked in a lot of scanners — it's important to track over time. We want to know how we're doing right now, but we also want to know how we're trending, and this gives us an idea of that trend.
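A minimal sketch of that SLO-checking loop: each open bug gets a deadline based on finding severity, past 50% of the window we nag, and past the deadline we escalate. The severity-to-deadline mapping and the Bug struct are assumed examples, not the actual FedRAMP or internal SLO values from the talk.

```go
// slo_watch.go — sketch of checking open vulnerability bugs against a
// severity-based remediation SLO and deciding whether to nag or escalate.
package main

import (
	"fmt"
	"time"
)

// sloWindow maps severity to how long you have to remediate. Assumed values.
var sloWindow = map[string]time.Duration{
	"CRITICAL": 30 * 24 * time.Hour,
	"HIGH":     90 * 24 * time.Hour,
	"MEDIUM":   180 * 24 * time.Hour,
}

type Bug struct {
	ID       string
	Severity string
	Detected time.Time
}

func check(b Bug, now time.Time) string {
	window := sloWindow[b.Severity]
	elapsed := now.Sub(b.Detected)
	switch {
	case elapsed > window:
		return "ESCALATE: past SLO, page the owning team"
	case elapsed > window/2:
		return "NAG: more than 50% of the SLO window is gone"
	default:
		return "OK: within SLO"
	}
}

func main() {
	now := time.Now()
	bugs := []Bug{ // assumption: example data pulled from the issue tracker
		{"b/1", "CRITICAL", now.Add(-20 * 24 * time.Hour)},
		{"b/2", "HIGH", now.Add(-100 * 24 * time.Hour)},
		{"b/3", "MEDIUM", now.Add(-10 * 24 * time.Hour)},
	}
	for _, b := range bugs {
		fmt.Printf("%s (%s): %s\n", b.ID, b.Severity, check(b, now))
	}
}
```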
And then it also tells us whether we're meeting our goals. We have these stated SLOs — are we meeting them? And you can find trends in the data; in this example, maybe our low-severity vulnerabilities aren't being prioritized as well as they could be. At the bottom here is an example of how we tracked the life cycle of a single vulnerability: from the time it's detected to the time it's rolled out everywhere, we try to say, as granularly as possible, what actions took place and how much time was spent between them. Those gaps might be good candidates for further automation, or for improving QA or release processes.

Some alternatives here. For inventory, if you're on a cloud provider, they might provide systems for that, and a lot of scanners provide it too. For the composition piece, Lyft has a really good article about how they used the open source Cartography graph database to do something similar to what we did. SBOMs and GUAC are probably the future here, as those are still developing. Another complementary thing you can do is ignore all of that and just patch everything: you find a vulnerability, you update. As far as SLOs go, lean on the systems you already use — your bug management software — and if possible, try to track the commits and rollouts through your system. So tying this all back: it's very important to track your SLOs over time, not just the current status. Tracking the parts of your release process can help you identify bottlenecks and see where to prioritize investment. And where possible, use your existing systems to do the escalation, dashboarding, and monitoring piece.

All right, and with that, I'll give it back to Greg. Cool, all right. So just a reminder of what we talked about. We talked about standardizing on registries, getting those minimal containers, and getting as far left as possible — I know everyone loves to say shift left, and loves hearing it, but we found that's just the cheapest way you can do it: keep those vulnerabilities out of production. Scanners get you inventory and visibility. That record of ownership of containers is really critical, so you know who to nag to patch them. If you can do automatic patching, it's definitely better — you'll probably want bugs and ticketing systems anyway to help you build dashboards, but if you can auto-patch and send PRs instead of just sending bugs, that's definitely better. And ticketing systems for escalation. Really, if you can do rather than tell, that will make a big difference. Everything we talked about here, we've got links for in the slides, and the slides are up on the website. The demo code is in a GitHub repository, and there are actually a few other things we didn't demo that are in that repository. Happy to take any questions — we've got a few minutes.

Audience question: once you've updated in your CI pipeline, before you re-upload to the registry, do you re-scan — in case you pulled in dependency libraries where, say, a maintainer got compromised, something like that? Yeah — so the question is: scanning in the repo is nice, but isn't it a good idea to re-scan in case you brought in a dependency that had a vulnerability in it, or some other malicious change? Yeah, definitely. So we have multiple points where we're scanning.
I don't think there's really a wrong place to do it, but the earlier you do it, the less cost there is on the developer to fix it. If they've already built their release, already gone through their QA, and they're about to put it on a cluster and you say "wait," that's the most annoying place to do it. But yeah, visibility at multiple places along that pipeline is useful.

Yeah — the question is where the ownership information lives. Right now, effectively, we don't have a huge number of images, so it's just a file that's checked in, and there's a pre-submit that looks at your manifest; if your image is not in that file with an owner, we won't let you put the manifest in. In the future this could maybe be something fancier, but it's definitely doing the job at the moment.

Do we have the OpenSSL CLI in our distroless images? I don't know, I'd have to go look — I don't know if you know. I don't think so. We have a number of distroless variants, and some of them certainly don't; they have only, you know, /tmp and some /etc files, and we have different variants for different use cases — some may include glibc. To my knowledge, none of them have OpenSSL, but I know that for the open source distroless images provided by Google, some of those do have the OpenSSL CLI. Yeah, I think we're actually familiar with this example. So the comment was that even if you don't have a shell in your container, there are some sneaky ways to get something that's kind of close to a shell if you have things like OpenSSL. There's a whole website — I forget the name — dedicated to this premise of "I have a small number of tools that aren't shells, how can I turn them into shells?", and there's a huge and fairly surprising list of things you can do that with. But I think not having the shell is still making the attacker's job harder, and it's making my job as a defender easier, because I have fewer things to patch — so it's a win-win anyway, even if there's still a way in.

Yeah, so what do we do about dependencies? For us in particular — the example was, there's a critical somewhere deep in the chain, and the fix for that critical is sort of hidden down in that dependency chain — I think it comes down to what your scanner can tell you. What we've tried to do in terms of container layers is have the least number of layers, but even inside those containers — and now we're talking about things more complicated than just a Go binary — you might have a full programming language environment, like a PyPI requirements.txt and other stuff. Scanners are starting to introspect that stuff now and give you results, so they are getting smarter in that regard. So I think it mostly comes down to how good your scanner is. And no, we're not going to do product recommendations on scanners — there are a lot of great scanners out there, and a lot of them are here today.

Yeah — is there ever a time when you just need to go fast and you don't have time to wait on the full pipeline? Yeah, for sure. What we're mostly talking about here is routine patching, that kind of stuff.
But there are definitely emergency situations where all the stops are pulled out and you go as fast as you can. Google has these release processes where the best practice is rolling out over a week, rolling out zone by zone, with a lot of availability safety built in, so that if there's something wrong in that release, we affect a small number of people to start with, we hopefully notice, and we can roll it back quickly. When it's a really urgent fix, you're on a different playing field: the security dial gets turned up and the availability dial gets turned down, and how far you turn each of those dials depends on how bad it is. If it's really bad, you go really fast and you hope you don't break stuff.

Maybe I'll add a little there. If you're familiar with the idea of SSVC — stakeholder-specific vulnerability categorization — we think of it kind of like that internally: some things require an emergency patch, some require an expedited rollout, some can just roll out as usual, and we map findings to those outcomes. What we're talking about here is the happy path, and we try to optimize that happy path. We've found it also indirectly helps when we have a break-glass situation, because those tools and processes are already in place — it's just a matter of making the deployment take one day instead of seven, that sort of thing.

Interesting — so the question is how often automated patching breaks tests, or breaks the product. For us, we're mostly Go and we're on these really tiny containers, so we don't have a whole ton of dependencies; we're mostly talking about Go dependencies breaking us. There have been a couple of cases where a Go minor version changed some things, so we're pretty careful about how we handle Go minor version changes. There are also implications for that inside Kubernetes: recently Jordan's been doing a ton of work to move Kubernetes onto modern versions of Go, and to do that within a Kubernetes minor release. That's a bit more risky than what we were doing before, which was only moving up a Go version when Kubernetes moved up a minor version as well — but that doesn't really keep pace with the patches that we need. In our experience, I can't think of an automated patching incident where we had an outage or anything. Yeah, the majority are, like Greg said, these Go patches. Typically when we update a container, it's just a matter of moving to a new base image that has slightly newer system packages, maybe running an apt update, et cetera, and those are pretty well vetted by the vendor. The only cases I can remember are when we moved an image an entire OS minor version and there were differences in a specific package — and those were caught fairly early. I can remember two instances in my time at Google, so fairly infrequent, I think. We're over time anyway. All right, thank you everybody. Thanks, folks. Thank you.