Okay, hi everyone, and welcome to today's infrastructure group update. My name is Andrew. Cool. First of all, a big welcome to Andreas, who joined on Monday as a senior database engineer.

Now let's move on to the overarching goal of the infrastructure group. What we expect is that when we deliver the GCP migration project, it will be a major milestone in making GitLab.com ready for mission-critical tasks.

So let's discuss the GCP migration in a little more detail. The plan had previously been to migrate directly from Omnibus running in Azure to Kubernetes in the Google Cloud Platform. However, due to some factors external to the project, we found that we needed to complete the move to GCP earlier than we had planned. This meant there were changes we needed to make in GitLab to get it running in Kubernetes that could not be done within this timeframe, and so the solution was to break the project down into two stages.

In the first stage of the project, we'll basically keep the same technology stack as we have now. The production team has set up a new environment in GCP, much like the existing one we have in Azure, and we are working to get the Geo transfer started so we can shift everything across.

In the second stage of the project, we'll stand up a Kubernetes cluster alongside the Omnibus version of GitLab.com that we'll already have running in GCP. We'll then use the cloud native Helm charts that the build team are currently working on to provision GitLab.com workers in Kubernetes. These workers will share the same Postgres and Redis instances as the Omnibus version of GitLab.com. Then we'll gradually shift traffic from the old cluster to the new one. This isn't going to happen overnight: we'll increase the volume, watch for errors, and then increase it some more. And if we experience any critical issues, we can always send the traffic back to the Omnibus instance, which will be running alongside it (there's a rough sketch of this traffic-splitting idea below). The amount of time this takes will be dictated by the number of errors we find, how quickly we can fix those errors, and, just as importantly, how quickly we can release those fixes back into the cluster.

What I'd like to emphasize is this: while we've rearranged some of the deliverables of the project, the goals of the project remain exactly the same as before. The first goal is to make GitLab.com suitable for mission-critical workloads, which we've discussed. The second is to migrate GitLab.com from Azure to GCP. And the third goal is to provision GitLab.com on Kubernetes using the Helm charts that the build team are working on.

Probably the most important part of the first stage of the GCP migration is the replication, and John Jarvis and the production team are working to get this started as soon as possible. Right now, we're experiencing some problems with replication on Postgres, and I've linked to two issues over there; anyone who can help out would be greatly appreciated. We hope to have the Postgres replication started in the next few days.

This is just an overview Gantt chart of the high-level project plan. I want to stress very much at this point that the dates on the second stage are still very much tentative. The reason I've included them is to illustrate the dependencies between all the projects involved in this GCP migration, not the timeframes. What you can see here is how everything fits together and how the dependencies work, and those are unlikely to change even if the timeframes do. But this is the gradual shift we'll make towards everything running in Kubernetes.
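As a rough illustration of the traffic-shifting approach described above, here is a minimal sketch in Go of a weighted split between the existing Omnibus deployment and a new Kubernetes-backed one. The backend addresses, the `canaryPercent` knob, and the use of a plain HTTP reverse proxy are illustrative assumptions only; the real GitLab.com setup uses dedicated load balancers, not this code.

```go
// Minimal sketch of a weighted traffic split between the existing Omnibus
// deployment and a new Kubernetes-backed deployment. Illustrative only.
package main

import (
	"log"
	"math/rand"
	"net/http"
	"net/http/httputil"
	"net/url"
)

func main() {
	// Hypothetical backend addresses; not real GitLab.com endpoints.
	omnibusURL, _ := url.Parse("http://omnibus.internal:8080")
	k8sURL, _ := url.Parse("http://kubernetes.internal:8080")

	omnibus := httputil.NewSingleHostReverseProxy(omnibusURL)
	kubernetes := httputil.NewSingleHostReverseProxy(k8sURL)

	// Start small, watch error rates, then raise this knob step by step.
	canaryPercent := 5

	handler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		if rand.Intn(100) < canaryPercent {
			kubernetes.ServeHTTP(w, r) // new cluster
			return
		}
		omnibus.ServeHTTP(w, r) // existing deployment, always available as a fallback
	})

	log.Fatal(http.ListenAndServe(":8000", handler))
}
```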
Cool. Moving on to some production updates. As some of you may know, over the New Year period we had some unexpected downtime resulting from the Meltdown and Spectre security flaws that were announced around that time. This is just a short timeline of what happened. On the 28th of December, Azure notified us that they would be performing forced maintenance, basically rebooting our servers, and that they would be doing that around mid-January. The same day, the production team put a plan in place and had a workaround ready to deal with the problem with minimal customer impact. Unfortunately, a few days later the security flaw was leaked to the press, and this led to Azure accelerating their plans by a week. They gave us notice on the 4th of January that they would be performing forced maintenance immediately, and soon after this they rebooted our production Postgres instance, which led to 27 minutes of downtime. I just want to shout out the production team, because they worked really hard to deal with this unplanned event and put in a lot of effort. In particular, Daniela and Jason put in a huge amount of effort to mitigate this, so an extra big thank you to them.

Some database updates. Yesterday, Yorick lowered the statement timeout on Postgres. Previously the maximum statement time was about 30 seconds; now it's down to 15 seconds. There was a lot of concern about whether this was going to be a disruptive change, and it turns out it wasn't very disruptive at all. The thing I want to point out here is that if you're writing code and you're worried about hitting 15-second timeouts in Postgres, you should be thinking a lot harder about how to optimize that code. Keep in mind that we're trying to get 99.9% of all user requests to return within one second, so 15 seconds is still way over that. (There's a small example of how this timeout behaves below.)

Another database update: this is something that Greg Stark is working on. When it's finished, it'll be super useful, particularly for backend developers, to help them get an idea of how the SQL they're writing is performing in production. It's a dashboard that lists all the major SQL queries going into Postgres, and it gives you call rates, it gives you times, it gives you all sorts of really useful information. So if you work with the database, I recommend keeping an eye on that project Greg is working on. (A sketch of the kind of data behind such a dashboard follows below as well.)
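For reference, the timeout discussed above is the standard Postgres `statement_timeout` setting. The snippet below is a minimal sketch, in Go, of how a 15-second limit behaves from an application's point of view: a statement that runs longer than the limit is cancelled by the server. The connection string and the deliberately slow `pg_sleep` query are assumptions for illustration only; this is not GitLab's configuration code.

```go
// Minimal sketch of Postgres statement_timeout behaviour (illustrative only).
package main

import (
	"context"
	"database/sql"
	"log"

	_ "github.com/lib/pq" // Postgres driver
)

func main() {
	ctx := context.Background()

	// Hypothetical connection string; adjust for your environment.
	db, err := sql.Open("postgres", "postgres://gitlab@localhost/gitlabhq?sslmode=disable")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// Pin a single connection so the session setting below sticks to it.
	conn, err := db.Conn(ctx)
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	// Cap every statement on this session at 15 seconds.
	if _, err := conn.ExecContext(ctx, "SET statement_timeout = '15s'"); err != nil {
		log.Fatal(err)
	}

	// Anything slower than the cap is cancelled by the server with
	// "canceling statement due to statement timeout".
	if _, err := conn.ExecContext(ctx, "SELECT pg_sleep(20)"); err != nil {
		log.Printf("query cancelled as expected: %v", err)
	}
}
```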
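The dashboard itself is Greg's project, so the sketch below is not its implementation; it is only a minimal example of the kind of per-query data such a dashboard can draw on, assuming the `pg_stat_statements` extension is enabled. The column names here match Postgres 9.x, and the connection string is a placeholder.

```go
// Minimal sketch: list the slowest normalised queries from pg_stat_statements.
// Illustrative only; this is not the dashboard Greg is building.
package main

import (
	"database/sql"
	"fmt"
	"log"

	_ "github.com/lib/pq" // Postgres driver
)

func main() {
	// Hypothetical connection string; adjust for your environment.
	db, err := sql.Open("postgres", "postgres://gitlab@localhost/gitlabhq?sslmode=disable")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// Call counts and timings per normalised query, worst mean time first.
	rows, err := db.Query(`
		SELECT calls, mean_time, total_time, query
		FROM pg_stat_statements
		ORDER BY mean_time DESC
		LIMIT 10`)
	if err != nil {
		log.Fatal(err)
	}
	defer rows.Close()

	for rows.Next() {
		var calls int64
		var meanMs, totalMs float64
		var query string
		if err := rows.Scan(&calls, &meanMs, &totalMs, &query); err != nil {
			log.Fatal(err)
		}
		fmt.Printf("%8d calls  %8.1f ms avg  %10.1f ms total  %.60s\n",
			calls, meanMs, totalMs, query)
	}
	if err := rows.Err(); err != nil {
		log.Fatal(err)
	}
}
```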
On to Gitaly. We had some major problems with Gitaly at the end of last year; we had some serious scaling problems, and the production team had resorted to continually restarting the service. Thanks to some really good investigative work by the Gitaly team, particularly Jacob and Ahmad, we managed to isolate the cause, and we found it had been addressed by a recent fix in the Go programming language. So we recompiled Gitaly with Go 1.9 and saw some dramatic speedups. This is really good, because it means that Gitaly probably has the headroom to scale vertically to the point at which all Git operations on GitLab.com will run through it easily.

A little bit more on Gitaly: we're well on track for Gitaly 1.0 to be delivered in late February or early March. Gitaly 1.0 means that all the Gitaly endpoints will be enabled on GitLab.com. Customers will also be able to opt into all the endpoints, but they won't be on by default. At this point, we'll be able to run Git, web, API, and Sidekiq workers on GitLab.com without any NFS mounts underneath them; basically, all Git traffic will go through Gitaly (there's a small sketch of what that looks like at the end of this update). What we'll probably do as well is not decommission the NFS infrastructure on Azure. We will do it in GCP, but there's no point in spending extra time doing it in both.

Also, just one more thing on Gitaly: Gitaly is an upstream dependency of the cloud native project, so in order for that project to be successful, Gitaly needs to be delivered. To that end, the Gitaly team have been working really closely with DJ and the cloud native build team, making sure that all the blockers they have are being resolved. The Gitaly team have done a really good job of solving those; I think they've done about 12 this month so far, which is fantastic. I've got some links in there if anyone's interested.

And finally, we are hiring. So if you know any good people who could fit into any of these roles, please refer them to the job openings. And that is the status of the functional group update. Does anyone have any questions? Let me just go to my chat.

I don't have any questions, but I want to say that I really love the presentation and the format. It's really engaging. And I love all the work you've done, Andrew. Thanks a lot. This looks great.

Thanks, Sid. Where is my chat? Cool. I don't think there are any questions then. So if that's it, you can get 20 minutes back before the team call. Thanks, everyone.
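A footnote on the Gitaly point above: Gitaly is a gRPC service, so application nodes talk to it over the network instead of reading repositories from a shared NFS mount. The sketch below simply dials a Gitaly-style endpoint and runs the standard gRPC health check; the address is a placeholder and the availability of the standard health service is an assumption, so treat this as an illustration rather than a description of how GitLab itself connects.

```go
// Minimal sketch: talk to a Gitaly-style gRPC endpoint over the network
// rather than touching an NFS mount. Address and health-service availability
// are illustrative assumptions.
package main

import (
	"context"
	"log"
	"time"

	"google.golang.org/grpc"
	healthpb "google.golang.org/grpc/health/grpc_health_v1"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// Hypothetical Gitaly address; in production this would come from config.
	conn, err := grpc.DialContext(ctx, "gitaly.internal:9999",
		grpc.WithInsecure(), grpc.WithBlock())
	if err != nil {
		log.Fatalf("dial: %v", err)
	}
	defer conn.Close()

	// Standard gRPC health check: is the server ready to serve Git RPCs?
	resp, err := healthpb.NewHealthClient(conn).Check(ctx, &healthpb.HealthCheckRequest{})
	if err != nil {
		log.Fatalf("health check: %v", err)
	}
	log.Printf("gitaly health: %s", resp.GetStatus())
}
```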