So, welcome to the monitoring function update. I'm Ben Kochie, I'm the team lead, and we'll get straight on to the presentation. The monitoring team is focused on the various metrics projects and features we have at GitLab. We spent most of the last year developing Prometheus features, and coming soon in 2018 we'll be working on new tracing features based on OpenTracing, as well as logging features. So basically, you'll be able to take applications in a production environment and, with GitLab, monitor them with Prometheus, collect traces using OpenTracing, and collect and analyze your logs.

Recently, we've been working on the Prometheus Ruby client. We deployed it to production, and it worked really well, except that the data we were getting back was a little too big. So we spent some time over the last month cleaning up what we had, so we could more easily deploy it in production without overloading our Prometheus server. We ended up with about a million and a half different metrics coming out of the Rails application, and that was just a little too much to handle, given that the Prometheus server was already handling another million and a half metrics from other systems. We also had a small corruption issue, which is now fixed and will be available in 10.5.

One of the nice things about Prometheus is that it allows you to vertically isolate services. Basically, we're going to be creating a separate Prometheus server just to collect the Ruby metrics from the main GitLab app. This way, we can keep expanding the amount of metrics we get from different systems without overloading any individual Prometheus server. We'll be setting that up soon.

A little background on the API: we've made some improvements to the API that developers can use to instrument things.
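To make the vertical isolation idea concrete: it amounts to pointing a dedicated Prometheus server at only the Rails metrics endpoints. A minimal sketch of such a scrape configuration might look like this (the target hostnames and port are hypothetical; `/-/metrics` is GitLab's metrics path):

```yaml
# prometheus.yml for a dedicated server that scrapes only the
# Ruby/Rails metrics, keeping them off the main Prometheus server.
global:
  scrape_interval: 15s

scrape_configs:
  - job_name: gitlab-rails
    metrics_path: /-/metrics        # GitLab's built-in metrics endpoint
    static_configs:
      - targets:                    # hypothetical Rails hosts
          - web-01.example.com:8080
          - web-02.example.com:8080
```

Because each Prometheus server only scrapes its own slice of targets, the roughly 1.5 million Rails series stay isolated from the 1.5 million series the existing server already handles.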
Basically, GitLab administrators will now be able to use a feature flag to separate out metrics. This way, if you're only interested in a few things, we'll have those on by default, and then there will be more advanced debugging metrics available if you want them.

Coming up in 10.5, we're going to have a basically automatic Prometheus deployment for jobs in CI and Kubernetes. This is going to be really great because it will reduce the barrier to entry for custom monitoring for projects that are developed using GitLab.

Also coming in 10.6, we're going to be adding custom and business metrics. If you have a custom metric defined in your application and the deployed Prometheus is scraping it, you'll be able to create your own queries, and those will show up in your merge request views. So if you have a specific feature that you want to make sure is working correctly before and after deployment, you'll be able to see that instead of just the usual generic metrics like CPU and memory: for example, shopping carts and other business metrics.

Prometheus 2.0 is now in production. It's been working quite well, except for when we overloaded it. And that's all I had. Let's go to questions.

Yes, the team lead is the band lead. I'm singing all about metrics and monitoring.

Kim, I'm not sure I understand the question. Do you mean you'll be able to... Yeah, for the other projects on GitLab.com. So yes, if you have a project on GitLab.com and you've configured a Kubernetes cluster, you'll be able to deploy Prometheus to that Kubernetes cluster and remotely access it. So that'll just work. Or at least it should; it is intended to work.

FOSDEM, actually, yeah. So one of the things that I did as part of the Prometheus team is at FOSDEM, which is a large open source conference, we use Prometheus to monitor the conference. Let me see if I can pull up a real quick set of...
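As an aside, to make the custom and business metrics idea from earlier concrete: a merge request query would just be ordinary PromQL over a counter your application exports. A sketch, using a hypothetical app-defined counter for the shopping cart example:

```promql
# Checkouts per second over the last five minutes, summed across all
# application instances (shopping_cart_checkouts_total is a
# hypothetical counter the application would define and export).
sum(rate(shopping_cart_checkouts_total[5m]))
```

Comparing this value before and after a deployment shows whether the checkout feature is still behaving, rather than inferring it from CPU and memory alone.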
I took some screenshots of that for demonstrating to the university that hosts FOSDEM, and I have those somewhere. Let me see if I can get those out. Quickly. Why is the image viewer so slow? There we go. All right, let's screen share that.

So basically, at FOSDEM, we bring a Prometheus server on site, and we monitor various things: mostly the video streaming at the conference, and a little bit of the network. You can see from the graph that we were closely monitoring the bandwidth available to the public streaming interface. For people who are aware of FOSDEM as a large open source conference, there are 25 simultaneous video streams of all the different rooms at the conference. On the public front end, we have a lot of different streaming platforms, and we stream from the university to some rented physical machines at Scaleway. We're doing about 300 to 400 megabits of bandwidth for video streaming during the day, so it's quite a popular set of streams.

We can also see that we monitor the review state. As the videos are recorded, they're automatically uploaded into a transcoding pipeline that allows the person who gave the talk to actually do the cropping and editing of their video with a pretty nice little web interface. Then they can say, okay, my video is ready to go, and they can self-publish the videos. They get automatically cropped, transcoded, cleaned up, and then shipped to YouTube, so people can see all the talks at FOSDEM. It's a really nice automated way to get these talks out quickly. Already, I think two-thirds of the talks from FOSDEM are up on the web. For a conference of this complexity, it used to take months, and now it's all nice and automated.

We also have some nice stats from the Wi-Fi.
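A bandwidth graph like the streaming one above is typically driven by a query over the node exporter's network interface counters. A sketch, assuming current node_exporter metric names and a hypothetical `job` label for the streaming front ends:

```promql
# Outbound streaming bandwidth in megabits per second, summed across
# the front-end machines (the "streaming-frontend" job label is
# hypothetical; node_network_transmit_bytes_total comes from
# node_exporter).
sum(rate(node_network_transmit_bytes_total{job="streaming-frontend"}[5m])) * 8 / 1e6
```

The `rate()` turns the ever-increasing byte counters into bytes per second, and multiplying by 8 converts to bits, which is how the 300 to 400 megabit figure would be read off the graph.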
So you can see in the lower left and the lower right, at a normal time during the day at the university, there are only about 1,500 people on the Wi-Fi. Then FOSDEM comes around and we just completely blow away all the Wi-Fi stats, with many thousands of people on the Wi-Fi. You can see that we also blow away all the bandwidth stats compared to normal. Then I also have a nice little graph here which shows the top used Wi-Fi APs. You can see that during different times of the day people are going to different talks, and you can kind of tell which talks are most popular based on how many people are in different parts of the Wi-Fi. Hopefully next year, I'm working to try and clean up some of this data so that we can break it down by building, and actually see the number of people moving around between different buildings.

Tune, I hope that satisfies your curiosity about FOSDEM monitoring. Any other questions? Going once, going twice? All right, thank you very much. See you in the team call.