Is this working, maybe? Okay, welcome to this month's Prometheus team functional update. I'm Ben, the team lead. Joshua Lambert is our product manager, and we have Paolo, Kevin, and Julius as engineers. Paolo is now working full-time on Prometheus, so we're very excited to have him doing much more of our backend engineering.

So let's get on to 9.2. We have some cool features coming in 9.2. We're going to be displaying performance impact on the merge request — sort of, maybe. We're still very blocked on the merge request widget redesign work, and we've actually been blocked on it since 9.1, so we're really hoping that work can finally be merged and we can get on with implementing our features. We also have some cool deploy history on the performance charts: we'll be pulling data from the continuous integration environments and displaying deploys on the performance charts.

We're also going to be adding a few new things. We have an experimental Ruby Unicorn exporter that will let us integrate Ruby metrics from the GitLab server components directly into Prometheus. This eliminates the need for a lot of the third-party exporters and other hacks that you would normally need to monitor a Ruby app. This has been a great improvement over the default Prometheus Ruby library, and once we get it stabilized, we hope to upstream our work and make it part of the official Prometheus Ruby library.

We've also updated some of the components along with the normal upstream release cycles. Prometheus 1.6.1 includes better memory usage, and we've updated gitlab-monitor and kept up to date with all the other exporters, picking up minor bug fixes and small features.
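To give a sense of what direct instrumentation means here, below is a minimal sketch in plain Ruby of the Prometheus text exposition format that an exporter serves on its `/metrics` endpoint. This is my own illustration, not GitLab's actual exporter code; the metric names, label values, and numbers are invented, and real code would use the prometheus-client gem and read live Unicorn worker stats.

```ruby
# Render hypothetical Unicorn worker metrics in the Prometheus text
# exposition format: HELP and TYPE comment lines, then one sample line
# per labeled time series. All names and values here are made up.
def render_metrics(metrics)
  metrics.map do |name, (help, type, samples)|
    lines = ["# HELP #{name} #{help}", "# TYPE #{name} #{type}"]
    samples.each do |labels, value|
      label_str = labels.map { |k, v| %(#{k}="#{v}") }.join(",")
      lines << (label_str.empty? ? "#{name} #{value}" : "#{name}{#{label_str}} #{value}")
    end
    lines.join("\n")
  end.join("\n") + "\n"
end

metrics = {
  "unicorn_workers" => ["Number of Unicorn workers.", "gauge",
                        [[{}, 4]]],
  "unicorn_requests_total" => ["Requests served, by worker.", "counter",
                               [[{ "worker" => "0" }, 1042],
                                [{ "worker" => "1" }, 998]]],
}

puts render_metrics(metrics)
```

A Prometheus server scraping this output over HTTP would ingest each line as a sample, which is why an in-process exporter removes the need for separate third-party bridge processes.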
We've also made some improvements to the configuration for users running things like external database configurations, and fixed related bugs that showed up when using Prometheus in different ways.

In 9.3, we have a whole bunch of new stuff we want to get in. We want to honor well-known metrics for common services, like HTTP requests, and support a much broader range of metrics instead of just the default CPU and memory metrics. We want to improve the Ruby Unicorn library, get it more integrated into the app, and get some really good metrics out of the main Rails code base. We're actually looking for other backend groups to help instrument their pieces of the code so that we can get better metrics out of the Prometheus Ruby library.

Then we have some stretch goals. We want to try auto-deploying Prometheus on Kubernetes so that we can have single-click integration with Kubernetes clusters in the idea-to-production style workflow. We also want to improve the network connectivity to the various components, so that we don't have to lock the metrics endpoints away and can give access to more users who may be worried about their security implications. Here's an example of the common metrics, kind of a mockup, and then we'll have the install button on the integration page.

We've also done some more work on Prometheus in production. Not a ton here, but we now have Prometheus server meta-monitoring: there are three Prometheus servers in the production gitlab.com environment, and we've improved the configuration so that they all cross-monitor each other. That way we know if the Prometheus servers themselves are having problems. We've also done more improvements to alerts and have been trying to reduce some of the noise for the production team.
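For the cross-monitoring setup, the idea is simply that each of the three Prometheus servers scrapes the others (and itself). A hypothetical `prometheus.yml` fragment for one server might look like the following — the hostnames are invented for illustration, not the actual production configuration:

```yaml
# Sketch of meta-monitoring for prometheus-01: scrape all three servers
# so an outage of any one of them is visible to the other two.
scrape_configs:
  - job_name: prometheus
    static_configs:
      - targets:
          - prometheus-01.example.com:9090
          - prometheus-02.example.com:9090
          - prometheus-03.example.com:9090
```

With this in place on every server, an alert on the `up` metric for the `prometheus` job fires from the surviving servers whenever one of them goes down.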
And that's it. Thanks very much. Let's go to questions. Quiet — no questions?

"Would the Kubernetes auto-deploy be useful when GitLab is not installed in Kubernetes?" I guess I don't understand the question, but I think the answer is yes. The answer is yes. Yes — oh, there's a tail attacking me.

"My question was: our idea-to-production demo already deploys Prometheus to Kubernetes. So I was wondering if this auto-deploy feature is solely useful in the case where GitLab is not deployed in that manner, but the applications you're deploying with GitLab are?" Yes, that will be part of it. Plus, what we're actually doing: there's already a Prometheus server deployed in Kubernetes to monitor GitLab, and this will deploy a Prometheus server into your application's Kubernetes environment. So there will be a separate Prometheus server just for monitoring that specific app. This is part of the design goal of Prometheus: making it so easy to deploy that every app can have its own Prometheus server, instead of the typical old-school ops infrastructure where one giant monitoring server rules them all. With Prometheus, you don't need to do that.

As for how much memory that takes — not really much. The minimum footprint for a Prometheus server is about 100 megabytes, so it's not terribly much, and it scales with the size of your application: the bigger your application, the more memory your Prometheus server is going to need. Part of the reason we want to do this is that many smaller Prometheus servers scale better, each with its own application. Also, if you decide to shut down an application, it's easy to turn off that application's Prometheus server, and it doesn't affect the load on any other Prometheus server. So it's kind of an isolation thing. Yeah, and Drew, just to respond to your question: you're exactly right.
So it's meant to give you single-click integration when GitLab itself is not running in Kubernetes. For example, on gitlab.com, if you're pushing to Kubernetes, you can have a single button to enable Prometheus, and we'll also set up secure network connectivity to and from Prometheus without you having to do anything. It really couldn't be any easier. Kim, does that answer your question about the memory usage? Cool, yes.

"But as a follow-up: is the graph a reflection of how the MR has changed memory usage, or just what the memory usage is when the MR is deployed?" It's supposed to show you a before-and-after picture of the memory usage: before you deploy a merge request and after you deploy it. Oh yes, and as Joshua said in the chat, we added tracking of the number of projects that have Prometheus enabled; that was added in 9.1.

I don't know about review apps — Joshua, do you know about that? Yeah, so review applications will work. Essentially, it takes the 30 minutes before and the 30 minutes after a deploy. So if you have, for example, a commit going into — or rather, a merge request going into, say, a feature branch that gets approved, that will trigger a new deploy, and you'll then get the 30 minutes before and 30 minutes after the deploy happened, if that helps to clarify.

Okay, any other questions, or anything I missed? A usage chart for .com, I guess? Oh yeah, that's a good question. Actually, I don't think we have the Prometheus stuff for gitlab.com, because we don't have any way to talk to the various Prometheus servers yet. I don't think that's actually enabled on .com. Oh, sort of — okay. Maybe, Kim, you can take that question offline. All right, cool. Any other questions, going once, going twice? See everybody in the team meeting.
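To make the before/after comparison concrete, here's a small sketch of how the two 30-minute query windows could be derived from a deploy timestamp. This is my own illustration of the idea described above, not GitLab's implementation; the function name and the example timestamp are invented.

```ruby
require 'time'

# 30-minute window on each side of the deploy, in seconds.
WINDOW = 30 * 60

# Given a deploy time, return the [start, end] ranges for the
# 30 minutes before and the 30 minutes after the deploy. These
# ranges would then be used to query Prometheus for the metric
# values shown on either side of the deploy marker.
def deploy_windows(deployed_at)
  {
    before: [deployed_at - WINDOW, deployed_at],
    after:  [deployed_at, deployed_at + WINDOW],
  }
end

deploy  = Time.parse("2017-05-22 14:00:00 UTC")
windows = deploy_windows(deploy)
puts "before: #{windows[:before].join(' .. ')}"
puts "after:  #{windows[:after].join(' .. ')}"
```

The same logic applies whether the deploy came from a merge into a feature branch (a review app) or into production: each new deploy event gets its own pair of windows.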