All right, it's the top of the hour. Welcome to the monitoring functional update. My name's Ben Kochi. I'm the team lead, and we have several other engineers on the team. We're hiring: we're looking for more backend engineers for the monitoring team. There's exciting stuff we want to build, but we need more people to do it.

So what are we doing? We've been shipping metrics via Prometheus, and shipping Prometheus as a feature for GitLab users. Soon we're going to add distributed tracing and distributed logging support, so that when your app is running in production, you'll get the full range of instrumentation you need as an application developer using GitLab as your development platform. Tracing gives you really interesting things: you can follow the workflow of your code from application request to application request and see the individual data flow for each request inside GitLab. Logging gives you something similar, but with your application logs viewed inside GitLab.

The Prometheus Ruby client is part of our metrics feature. We've been working on building a much better Ruby Prometheus client library, and this is what we use inside GitLab itself to instrument our own code. It took quite a while to debug and build, but we've been running it on gitlab.com in production for about a month now, and we'll be releasing it as generally available in 10.7.

We released a bunch of great stuff in 10.6. We wanted users to be able to generate their own Prometheus queries and have those visible within GitLab. We also shipped cluster monitoring: if you're running a Kubernetes cluster connected to GitLab, you'll be able to see some basic metrics about the health of your Kubernetes cluster and diagnose its utilization. In 10.7, we've got some cool stuff coming.
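To make the client-library work above concrete, here is a minimal sketch of what a Prometheus client does in-process: it keeps named counters keyed by label sets and renders them in the Prometheus text exposition format for scraping. This is an illustrative toy, not GitLab's actual library.

```ruby
# A tiny illustration of a Prometheus-style counter. Class and metric
# names here are made up for the example, not GitLab's implementation.
class TinyCounter
  attr_reader :name, :help

  def initialize(name, help)
    @name = name
    @help = help
    @values = Hash.new(0.0) # one running total per label set
  end

  # Increment the counter for a given label set.
  def increment(labels = {}, by = 1)
    @values[labels] += by
  end

  # Render in the Prometheus text exposition format.
  def to_text
    lines = ["# HELP #{@name} #{@help}", "# TYPE #{@name} counter"]
    @values.each do |labels, value|
      label_str = labels.map { |k, v| "#{k}=\"#{v}\"" }.join(',')
      lines << "#{@name}{#{label_str}} #{value}"
    end
    lines.join("\n")
  end
end

requests = TinyCounter.new(:http_requests_total, 'Total HTTP requests.')
requests.increment({ method: 'get', code: '200' })
requests.increment({ method: 'get', code: '200' })
requests.increment({ method: 'post', code: '500' })
puts requests.to_text
```

A real client library adds metric types like gauges and histograms, thread/process safety (the hard part GitLab had to solve for its multi-process Rails workers), and an HTTP endpoint for Prometheus to scrape, but the core bookkeeping is this simple.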
The idea is that we want users to be able to act on their metrics. This is about taking the metrics you're getting into Prometheus and expressing them as SLOs or SLAs, depending on the style of monitoring you want to do, and saying things like: I have a certain error threshold I want to maintain, whether that's a ratio, say no more than 1% or 0.1% errors, or a static rate, say no more than 10 or 150 errors per second. If that threshold is exceeded, we'll send an alert to an operator, so they can go look at their code and their production environment to find out why they're generating errors. This is the next step: not just observing what your code is doing in production, but automatically reacting to what's going on in your production environment.

Other good things we're working on: we're going to be instrumenting GitLab Shell. So if you're running your GitLab instance and your users are seeing errors on the SSH interface, you should be able to correlate that with metrics, and eventually you'll be able to alert on whether GitLab Shell is functioning properly. If you're using git push or git pull via SSH, you'll be able to see metrics for that.

We're also going to start prototyping distributed tracing. We've spent some time doing discovery on how we're actually going to deal with it, and our current prototype is going to use Jaeger, which is part of the same Cloud Native Computing Foundation that Kubernetes and Prometheus are part of.

On Prometheus in production on gitlab.com: we've been running with 2.0 as the default, and it's been working quite well. We've been following along with the development of Prometheus 2.0 and are very happy with it. It reduced the load on our production Prometheus servers by a significant amount.
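The error-ratio threshold described earlier is the kind of thing Prometheus expresses as an alerting rule. A hedged sketch, assuming a conventional `http_requests_total` counter with a `status` label (the metric and label names are illustrative, not GitLab's generated rules):

```yaml
groups:
  - name: example-slo
    rules:
      # Fire when more than 1% of requests over the last 5 minutes
      # returned a 5xx status, sustained for 10 minutes.
      - alert: HighErrorRate
        expr: |
          sum(rate(http_requests_total{status=~"5.."}[5m]))
            / sum(rate(http_requests_total[5m])) > 0.01
        for: 10m
        labels:
          severity: critical
        annotations:
          summary: "Error rate above 1% for 10 minutes"
```

A static-rate threshold would drop the division and compare the numerator directly, e.g. `sum(rate(http_requests_total{status=~"5.."}[5m])) > 10` for "no more than 10 errors per second".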
We've also been able to expand: we've added a second set of Prometheus servers to handle just the application metrics. Part of the great thing with Prometheus is that it lets you scale by separating different aspects of your application onto their own servers, while still letting you see all of your metrics globally in Grafana and alert globally through Alertmanager.

We've also started improving the response to errors on gitlab.com by codifying our company SLAs into alerts. We're currently working through all of the different things that are important to gitlab.com users and building better alerts, so that we can be alerted more quickly when the performance of gitlab.com is not as good as it should be.

And that's all I have for presentation points. Let's go to questions.
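Splitting workloads across Prometheus servers, as described above, is mostly a matter of giving each server its own scrape configuration. A sketch of what the application-only server's `prometheus.yml` might contain (job names and targets are illustrative, not gitlab.com's actual configuration):

```yaml
# prometheus.yml for the server that scrapes only application metrics.
# The infrastructure server would carry node/exporter jobs instead.
global:
  scrape_interval: 15s

scrape_configs:
  - job_name: gitlab-rails
    metrics_path: /-/metrics   # GitLab's Rails metrics endpoint
    static_configs:
      - targets: ['web-01:8080', 'web-02:8080']
```

Grafana then queries both servers as separate data sources, and each server forwards its alerts to a shared Alertmanager, which is what keeps the global view and global alerting intact.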