So Ben, what do you think of Thanos? So Thanos is super cool. It has a bunch of really interesting features, and the way it integrates with Prometheus 2.0 is just about perfect, because it's really simple to integrate. Basically, you drop this extra sidecar binary next to your Prometheus server, and it creates a mesh cluster that gives you a single query interface to query all of your production Prometheus servers, which is just way, way easier. This is kind of part of the original idea of Prometheus: we wanted to make it really easy to get into, but there are complex problems for bigger organizations that can't be solved with a single Prometheus server. So as GitLab has grown, we went from having one big Prometheus server to now having one just for the Rails app, and then one for each of the runner clusters; each runner cluster has its own Prometheus server. And then we're like, well, the database stuff is getting too big, so let's move the database metrics to their own Prometheus server. And now there's one in the GCE install, and that's separate from the one in the Azure install. So we want to tie them all together. Really, all we need to do is install Thanos, and then having to figure out which server to pick is no longer a problem, because you just query Thanos, and Thanos routes the query to the right Prometheus server. And as we've been growing, we keep having to increase the size of the disk storage on each of the Prometheus servers, because a one-terabyte disk was too small, then a two-terabyte disk was too small, and now a three-terabyte disk is barely OK. But why can't we just store all that data somewhere else? Why does it have to be on the Prometheus server? Well, the Thanos sidecar can take the older data, the blocks that are months old, and ship them into bucket storage, like S3 or Google Cloud Storage or whatever you happen to have.
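The setup being described can be sketched roughly like this. This is a hedged sketch, not GitLab's actual deployment: the hostnames, port, and paths are invented, and since Thanos was pre-release at the time of this conversation, the exact flags may differ from whatever version you compile.

```shell
# Run the Thanos sidecar next to an existing Prometheus 2.0 server.
# It reads the TSDB blocks from the same data directory Prometheus
# writes to, and can optionally ship old blocks to object storage.
thanos sidecar \
  --tsdb.path=/var/lib/prometheus \
  --prometheus.url=http://localhost:9090 \
  --objstore.config-file=bucket.yml

# Run a single Thanos query node that fans out to every sidecar, so
# all of the Prometheus servers look like one server behind one
# query interface.
thanos query \
  --store=rails-prometheus.internal:10901 \
  --store=runners-prometheus.internal:10901 \
  --store=db-prometheus.internal:10901
```

With that in place, dashboards point at the query node instead of picking an individual Prometheus server.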
So it's really cool. It can use the offsets, so you don't have to go back and re-upload everything. Yeah, and it still uses each of the individual Prometheus servers to send all the alerts. So even if Thanos is broken or down, because it's clustered, and clustered software always gets complicated quickly, the individual Prometheus servers are still able to send all their alerts. They don't actually depend on the Thanos server; that's only for the dashboards, and analytics, and things. So the important component is still running and operating, and the extra component is extra. It's a really good design, and it kind of mirrors the idea that Prometheus should keep working when everything else is on fire. The easiest thing for someone to do would be to say, take all this local data and build some massive central thing, and it's going to be HA and everything. That's the common pitfall. This seems like it can deal with all kinds of things falling over. Yeah, and the original long-term storage for Prometheus is called Cortex, and it's based on having a Bigtable or a Cassandra cluster behind it. It's much more complicated to run, and I wouldn't say it's more fragile, but it's more work. And it requires a much more persistent connection to the individual Prometheus servers, because it uses the remote write pattern. But the Thanos sidecar takes the time series blocks and basically rsyncs them up to your storage, so it doesn't need to be online all the time. You can actually have your Thanos go down and then come back up, and not lose anything. Yeah, you just look at which blocks were most recently written, and you upload those. Yep, but it's really new, so we're still getting used to it.
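The contrast being drawn here can be made concrete. This is a hedged sketch with invented bucket and URL names: the Thanos sidecar only needs a small object storage config file and uploads finished TSDB blocks after the fact, while Cortex hooks into Prometheus's `remote_write` and therefore needs a persistent connection home.

```shell
# Thanos approach: a minimal object storage config handed to the
# sidecar via --objstore.config-file. The sidecar uploads completed
# TSDB blocks; if it goes down for a while, it resumes from the
# latest blocks that were written, so nothing is lost.
cat > bucket.yml <<'EOF'
type: GCS
config:
  bucket: example-prometheus-blocks
EOF

# Cortex approach: Prometheus itself streams every sample out over
# remote_write as it is scraped, which is why it needs a much more
# persistent connection to each individual Prometheus server.
cat >> prometheus.yml <<'EOF'
remote_write:
  - url: https://cortex.example.com/api/prom/push
EOF
```

The block-shipping model is what lets alerting stay entirely local: Prometheus never waits on the sidecar or the bucket.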
And there haven't actually been any official releases of it yet, so it's just compile from master and away you go. It's still a little experimental; the people working on the Thanos software are still working on it. And I think this is the one for an internal organization's use. Cortex is actually still really awesome if you're trying to create a hosted Prometheus service. Right now, Thanos requires bidirectional communication with each of the Prometheus servers, which doesn't work so great if you have a very widely distributed setup, a bunch of Prometheus servers that are out behind firewalls and things, where you need to ship that data home to a central server. Cortex is still really good for that. So if you're an organization with Prometheus servers installed at random customer locations, Cortex might still be good for that. And if, say, GitLab wanted to have hosted Prometheus, we would probably want to run a Cortex cluster as call-home telemetry for gitlab.com installations, because that's something we've talked about previously. So there's still a use case for Cortex. And I think the Cortex code is a little more designed to do customer-to-customer separation. Yeah, it's more multi-tenant; there's no multi-tenancy in Thanos. Yeah. I think we're just going to go with, first, getting the GitLab cloud native Helm charts out, and Prometheus is a part of that, and then start doing this on top. And I think the multi-tenancy solution will have to wait a while. Yeah. But if you're gitlab.com and you want all of your Prometheus servers to look like one big Prometheus server, then Thanos is the way to go. They're different use cases. Well, thanks for the context, man. Yeah. This was Sid, I'm the CEO of GitLab. And Ben, Ben, you wanna? Yeah, I'm Ben.
I'm the Prometheus, actually not Prometheus, but the monitoring team development lead. Awesome. Thanks, man. Yep. Have a great day. Bye. Thank you.