Welcome to the March 22nd Prometheus functional update. I'm Ben, the Prometheus lead, and we also have Joshua Lambert, our product manager on the team, and Kevin Lyda and Julius Volz, our engineers on the team.

So, we had a great 9.0. We delivered the first iteration of Prometheus in 9.0: we included Prometheus on by default, along with a bunch of exporters so we can get data out of various components, and we integrated some basic functionality to let users see Prometheus metrics for apps deployed to Kubernetes. That was totally great, and we got a lot done: simple project integration so that you can select a Prometheus server, and some basic metrics. Nothing super fancy, but it's a really good functional start to integrating Prometheus in 9.0.

So, what's coming in 9.1? We're going to improve the amount of data we give back to the user by putting graphs on merge requests, so that you can see a before-and-after picture of a merge request. Additionally, in the code itself, we've been working on a Unicorn improvement to the Prometheus Ruby client library, and we're hoping to get that integrated so that we can get metrics directly out of a GitLab install without having to push metrics through InfluxDB or other metric systems. We're also going to improve the user experience of the metrics pages. Another nice thing coming in 9.1 is a better onboarding experience: when users haven't configured Prometheus, they'll be directed to enable it in a nice interactive way instead of having to make a lot of manual configuration changes. And here's what we have for the metrics pages we're going to be improving. Longer term, we're going to do more auto-deployment of Prometheus and better integration with the deploy boards.

We've also been spending some time working on Prometheus in GitLab.com production.
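To illustrate the direct in-process metrics idea mentioned above, here is a minimal toy sketch of how a Prometheus-style counter and text exposition work. This is an illustration only, not the actual prometheus-client gem API or the GitLab implementation; all class and metric names here are made up for the example.

```ruby
# Toy sketch of Prometheus-style in-process metrics (illustration only;
# the real work described above targets the prometheus-client Ruby gem).
# A counter keeps a monotonically increasing value per label set, and a
# /metrics endpoint renders the registry in the Prometheus text format.

class Counter
  attr_reader :name, :help

  def initialize(name, help)
    @name = name
    @help = help
    @values = Hash.new(0)  # label set => current count
  end

  def increment(labels = {}, by = 1)
    @values[labels] += by
  end

  # Render this metric in the Prometheus text exposition format.
  def render
    lines = ["# HELP #{@name} #{@help}", "# TYPE #{@name} counter"]
    @values.each do |labels, value|
      label_str = labels.map { |k, v| %(#{k}="#{v}") }.join(",")
      lines << "#{@name}{#{label_str}} #{value}"
    end
    lines.join("\n")
  end
end

# Simulate a web worker counting HTTP requests.
requests = Counter.new(:http_requests_total, "Total HTTP requests.")
requests.increment(method: "get", code: "200")
requests.increment(method: "get", code: "200")
requests.increment(method: "post", code: "201")

puts requests.render
```

Running this prints the familiar scrape-endpoint shape: a `# HELP` line, a `# TYPE` line, and one sample line per label combination, e.g. `http_requests_total{method="get",code="200"} 2`. The point of doing this in-process, rather than pushing to InfluxDB, is that the Prometheus server can simply pull this text from each Unicorn worker.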
Also GitHost. We've made a number of improvements to the dashboards, made improvements to the alerts, and upgraded the Prometheus servers in production, but we still have a few performance things we want to fix in the Prometheus server itself. Although I think we just found one of the problems there this afternoon. We also want to improve instrumentation across the board and really get good metrics out of GitLab.com. And that's pretty much it. This was a very simple, quick functional update. Let's go to questions. Anything in the chat? Where's the chat window?

Hey Ben, thanks for the update, and I'm so excited about all the progress. In the blog post for 9.0 we talk about environment monitoring, but we don't talk about application monitoring. Is that because we already had that but now it's enabled by default, or why is that?

No. One of the things we're going to be adding, I believe, is the ability to add custom queries to the interface that users get. They'll be given environment monitoring by default for CPU and memory and things like that, but we're going to add a custom query interface so that users can get application monitoring.

Okay, that's pretty cool. Should we add that to the blog post, or is that in there lower down? That's a good question; I don't have the answer to that.

Yeah, so Sid, this is Josh. We can definitely add it. The reason we titled it environment monitoring was just that right now it monitors your CI/CD environments. We thought that was a relatively apt title, but we can change it to application monitoring if that's clearer, for sure. Yeah, let's maybe take this offline. I'll give you a call after this call.

Any other questions? Somebody asked about CE versus EE; we're mostly focusing on CE stuff right now. And Jim says... okay, hold on. Jim, do you want to talk about this a little more? I'm not sure what the question is there.
No, no, no question. I just wasn't sure what Nealsoft was, if anybody was talking to them. So I'll let you know what they say. They signed up for an account yesterday. Okay.

Why are the graphs on the MR, given that this might be a lot of data? I guess I'm not sure what the issue is there. The graphs are, I believe, a separate tab on the merge request, so they're not in the way as far as I know.

Mike, you want to know more about Kubernetes? I don't know, you just mentioned it pretty quickly. I barely heard what you said about it, so I was just wondering. Basically, in order to get per-project metrics, it's quite complicated to integrate with arbitrary deployments that emit Prometheus metrics data. So we focused on integrating with the Kubernetes-style deployment from CI: projects go into a merge request, go into CI, and are deployed on Kubernetes. Kubernetes provides really good per-project metrics, so it's really easy to craft the query we need to send to Prometheus to get the data about a specific project from Kubernetes. That's why we're focusing on Kubernetes deployments right now. Awesome, thanks.

Any other questions, going once, going twice? This is John May, I have one quick question. Sure. With the application or environment monitoring, can you do that on pre-production staging systems? We work with a lot of large companies, and what they often do is create all their builds and everything, but they deploy it all to a staging point before they decide to do a bigger release. So it's something I could see them wanting, to be able to see the impact before they go to production. That is totally possible. I believe we've already integrated an environment tag in the user interface, so that if they deploy to Kubernetes, we already extract an environment tag from the metrics.
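As an illustration of the per-project queries described above, the labels that Kubernetes attaches to container metrics make them easy to scope to one project and environment. These are hypothetical examples; the metric names come from cAdvisor, but the namespace value and the exact queries GitLab generates are assumptions for illustration:

```promql
# Average CPU usage across a project's containers, assuming the CI deploy
# puts each project/environment in its own Kubernetes namespace.
avg(rate(container_cpu_usage_seconds_total{namespace="my-project-staging"}[2m]))

# Memory usage for the same deployment.
avg(container_memory_usage_bytes{namespace="my-project-staging"})
```

With a random hand-rolled deployment there is no such predictable label scheme, which is why targeting the Kubernetes-from-CI path first makes query construction tractable.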
What if they have internal systems? I used to work with Macy's, and they had 1,600 servers already in place for testing. They may want to monitor all those 1,600; is that possible? Prometheus can totally do that, but it's not really our target to provide that, because they're running their own custom deployment. Eventually, one of the options, if they're running their own Prometheus setup, will be to select which Prometheus to use; you can already select which Prometheus server you want to query from. So if they want to integrate their Prometheus in production and staging and wherever else they run Prometheus, that's already possible to integrate. I wouldn't say it's great yet, but it's totally possible.

Sure. So I brought it up to Joe a little bit and put an issue up there as a feature request. That's a huge problem for large companies: we have these teams and they want to test, but they don't even know what servers are available, when, and whether somebody's already on that system. So being able to self-select those things would be a huge boon to them. When I was working at a different company, they were paying a lot of money to have that capability available to them.

Yeah, that sounds like they need Kubernetes. Yeah, it sounds like review apps is the solution there, right? Well, they have a lot of legacy apps they want to manage. If I've got 1,600 servers, I have different teams wanting to test at different times in that environment, and so they're always changing those environments, but those are hard servers or VMs, right? Yeah. They're just spinning things up and running them on VMs that they have, but they have a lot of them, and they never know who's in what system at what time. Right. It's a big problem for them. Yeah, this is totally unrelated to GitLab deployment, but I have several recommendations for things that they can do.
But of course, that all requires possibly changing and adjusting their infrastructure. There are a couple of different projects. One of these is called Metal as a Service, MAAS, by Canonical, the Ubuntu team. It's basically an API service that turns your bare metal into a cloud provider. And then of course I highly recommend Kubernetes, because then they can take the servers they already have, deploy Kubernetes onto them, and get all the dynamic things. Yes, maas.io is super great. Or actually, I don't know, I've not tried it. I saw a nice demo by the Canonical people and it looked pretty good. I've used similar systems; Tumblr released one in open source called Collins. But that's a lot of plumbing for a developer team to add, depending on how much control of the data center they have.

Yeah, so John, what they have right now is more changes than staging environments, basically. That's why they continually play what I think is called musical chairs, where people have to move around. What they really need for that is Kubernetes. Well, what they're doing is literally keeping it on a spreadsheet, or sitting down and having two-hour meetings with teams every two weeks to figure out who needs what and when, so they can make sure people don't bump into each other when someone's already testing some kind of big change. Yeah, I know all about that kind of thing.

Yeah, and the thing is, you can do two things. You can play the musical chairs better, but in the end you're still playing musical chairs and everyone's distracted. The other thing is just to get more chairs. But what they don't want to do is buy more servers, because they're expensive. So what you do is reuse the chairs: you have one chair, and then three people can sit on it.
And that's what Kubernetes and review apps in GitLab allow: you use one server and have three people sit on it, as long as no one is sitting on the chair at the same time. The problem right now is that every chair has a name on it. Now it will have three names, and there could be three people sitting. That's what containers allow. I'm stretching this metaphor too far. The idea is that every change gets its own dedicated app, and they should leverage Kubernetes and GitLab review apps for that. Okay, because there are companies paying hundreds of thousands of dollars for this capability. Yep.

Yeah, and I think the question is, should we build a feature for legacy applications that might be cost-prohibitive to move to the Kubernetes model, or just wait until they've moved far enough along to embrace these things? And this is why, for that, I would recommend something like maas.io, or...

And I can probably take this offline with you, but I think my goal in this was that if we can make that process easier, and have these teams drive whatever they need and pick it directly from GitLab, that would be a really efficient system for them, because then we'd already be able to see that they have to go to this system and that system, because we already know it's there and available and nobody else is scheduled into it. Sort of pick whatever is available, move into it, and go on. But that's still the model of making the musical chairs more efficient, and the GitLab direction is just adding more chairs, or making sure that people can share the chairs, so everyone can have a seat. And they don't have to re-architect their app; almost every app can be put in a container and deployed. It's a bit ugly, but it works. Yep. Yeah, with containers you can build a container that is one file, or a thousand files, or an entire operating system. So it seems daunting at first, but it's much better once you have that model. Okay, great.
There are some options there that I may want to explore with you. Yeah, I would take it up with Josh. Josh, are you okay with that? Yep, of course. Wonderful, thank you guys. Thanks, yep.

All right, anything else? All right, thanks everybody. Thanks. Thanks.