Let's get going, then. This is the functional group update for the infrastructure team. I can't see the chat window, by the way, so I'll have to get to the questions later on. The topics I want to cover today are the overarching goals for the infrastructure team, accomplishments, things we're working on, and then concerns. I've combined the concerns with the "need help" items.

First, the overarching goal for the infrastructure team is to make GitLab.com ready for mission-critical tasks. Users and customers on GitLab.com should come to trust GitLab.com to the extent that they can use it for mission-critical work and always rely on it. That won't happen overnight, and for the next couple of quarters we'll be focusing on the three goals here. They're in the OKRs for the team at large, and other groups can also contribute to them, but infrastructure will certainly be working on these goals. One is to raise the availability of GitLab.com to 99.9% and keep it there; we've been hovering around 99.7, 99.76, and it becomes exponentially more difficult as you add decimal places. Another is that 99% of user web requests should be served in less than one second; currently we're hovering around 2 to 2.5 seconds. And the third is that the top 10 actions from the risk assessment, which is linked there, should be completed.

All right, so what have we done since the last functional group update, about six weeks ago? First of all, Ilya started as a production engineer, and we have two additional senior production engineers joining very soon, on June 15 and August 1. Names to be revealed in the next functional group update.

On the security team, speaking on behalf of Brian here: I've linked to the meta issue where I keep track of the current top 10 security actions, and 4.5 out of 10 have been completed. A couple of them are going to take a long time because they involve large features being developed; others have real potential for quick wins. So I encourage people to take a look at that meta issue and contribute if you can. In the last six weeks, Brian tested Teleport for access management and decided against using it; the issue is linked there. So we're now going the way of setting up a VPN instead.

On the database team, Yorick has been very busy. PgBouncer was shipped in 9.2, and nested groups were reworked. Those were Yorick's words, but Stan added a footnote: it's more than just a rework. He rewrote how we calculate project authorizations and made that calculation fast and consistent. So that was great work by Yorick. Support for nested groups in MySQL was dropped. I should probably change the wording of that slide before we send it out to the rest of the world. I've already changed it, Ernst. OK, thank you, Sid. And just FYI, storing serialized data in the database is no longer allowed.

That's just one of three slides of accomplishments in production; there are a lot of different things, and I've linked to each of them. The one I'm personally most excited about is that we now have complete Terraforming of the front-end fleet and staging, which enabled the team to stand up canary.gitlab.com very quickly, within a few hours, actually. That's a small step toward the larger meta-goal of having multiple canary deployments running through Kubernetes. And there's important work going on, of course, on backups and resiliency: disaster recovery for the package server, and snapshotting on the file system.
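To put the 99.9% availability goal from the top of this update in concrete terms, here's a quick back-of-the-envelope downtime budget. The arithmetic is mine, not a number from the slides; a 30-day month has $30 \times 24 \times 60 = 43{,}200$ minutes:

$$(1 - 0.999) \times 43{,}200 \approx 43 \text{ min/month}, \qquad (1 - 0.997) \times 43{,}200 \approx 130 \text{ min/month}$$

So moving from roughly 99.7% to 99.9% means shrinking the acceptable monthly downtime from a bit over two hours to about 43 minutes.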
We've introduced a new workflow within the production engineering team to help prioritize and coordinate work, using issue boards and milestones but adapting them to a somewhat Kanban style. We've also introduced a change management process for making changes in production. The idea behind this set of guidelines, and the checklist that goes with it, is to make it really low-friction, really easy, to make a change in production. The Gitaly team used it successfully to introduce changes with review by production engineers but no intervention by production engineers. Of course, it helps that the Gitaly team has production expertise on the team, but still, I think this is an awesome move in the right direction.

And the final slide is Gitaly. Oh, what happened to the plot? I see, it was moved to the next slide. So Andrew, I'd actually like to invite you to speak to the test you did with Gitaly running on the NFS servers, if you're here.

Yeah, I'm here. So basically, we were quite interested to see what would happen if we cut NFS out of the equation. What we did was install Gitaly on our NFS servers, and then we reconfigured one of the Git workers to go directly to Gitaly on the NFS servers to fetch its data. We ran that test for about half an hour and compared the times on that Git worker versus the rest of the fleet to see what the differences were. On this first slide, you can see that the median values are pretty close: 0.12 seconds versus 0.14 seconds for the median request, network Gitaly versus NFS. Where it becomes more interesting is the 99th-percentile value. The red bar, which is the network Gitaly worker, shows that 99% of all its requests came in within 0.7 seconds; the black bar, the NFS Gitaly workers, sits at 1.9 seconds. So our P99 values dropped drastically. I've got one more graph, Ernst, if you could click forward. This one shows latency over time during the test. It's a bit small, but the blue line is the average time a Gitaly call took; the left-hand side is our test host, and the right-hand side is the control host, which is the rest of the fleet. The green line is the 99th-percentile graph. Those are to scale, and you can see the left-hand side is obviously vastly better than the right-hand side in terms of speed and performance. So those tests were quite a success. There's more work to be done with CPU-bound Git endpoints, but we'll get there; that'll be the next set of tests we do. And that's it, really, Ernst.

Thank you, Andrew. Cool. All right, so what are we working on? A whole host of things. One that's top of mind for me specifically is performance in general: making sure we have mapped out exactly what happens when you make a web request, what we're measuring, and whether we're working on the right things. There have been a lot of efforts in this direction already, so this is retreading some of that ground and fleshing it out in more detail. A couple of things are highlighted here for the individual teams. Production is, of course, working on preventing Redis outages, after having applied a band-aid, and continuing the work toward those canary deployments with Kubernetes that I referred to earlier.
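As a footnote to Andrew's median/P99 comparison: here's a minimal sketch of how you'd pull those two summary numbers out of raw request timings. The timing lists are made-up stand-ins (we only have the slide summaries: 0.12s vs. 0.14s at the median, 0.7s vs. 1.9s at P99), and the function name is mine.

```python
from statistics import median, quantiles

def latency_summary(samples):
    """Return (median, p99) for a list of request durations in seconds."""
    # quantiles(n=100) yields the 99 cut points between 100 equal groups;
    # index 98 is the 99th percentile. 'inclusive' treats the data as the
    # whole population, so the result never extrapolates past min/max.
    return median(samples), quantiles(samples, n=100, method="inclusive")[98]

# Stand-in timings: one worker talking to Gitaly over the network,
# versus the NFS-backed rest of the fleet.
test_host = [0.10, 0.11, 0.12, 0.12, 0.13, 0.65]
control   = [0.12, 0.13, 0.14, 0.14, 0.15, 1.85]

for name, samples in [("network gitaly", test_host), ("nfs", control)]:
    p50, p99 = latency_summary(samples)
    print(f"{name}: p50={p50:.2f}s  p99={p99:.2f}s")
```

The medians come out close while the P99s diverge, which is exactly the shape of the result Andrew described.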
There's also a lot of work on the database side. There are a lot of slow queries, and there's an overview issue for those. As you're probably aware, we're working with an outside consultancy to help us get better performance out of our database, and a lot of tasks have flowed from that. Yorick also wanted to make sure everybody knows that polymorphic associations will soon no longer be allowed. On the security side of things, as I mentioned earlier, we're creating a VPN. If you're interested in additional topics, please visit the meta issue and go from there, or visit the risk assessment sheet itself. And the Gitaly team will continue working on migrations, more metrics, and more logging; that's going to be an ongoing effort for quite a while.

So what are the things we're concerned about or need help with? We need more people. There's a lot of work to be done on the database side, and we're hiring database specialists and security specialists. We also have positions open for a director of infrastructure, to replace me, and a director of security. So please send along your candidates, great people with great work records, to join our team.

The focus on performance is a concern to me in the sense that it's going to lead to a number of follow-on issues and ideas for things to fix that go far beyond the infrastructure team. There's no specific ask there yet, but it's a concern. I do have a specific concern right now about adding more NFS servers increasing the odds of an outage. We just added four Git NFS servers. Each one of them has a specific SLA, and an outage of any one of them causes an outage of all of GitLab.com. So that's a concern, and I've created an issue there for graceful degradation of the service; there's a quick illustration of the availability math at the end of this update.

Gitaly is concerned about migrating calls that are in the process of being deprecated. This happened very recently, and it essentially nullified a lot of work by the Gitaly team. So Andrew and team are thinking about better ways of finding out which calls are at risk of being deprecated, in order not to spend work on those. And then there are also problems with gRPC that continue to affect the team.

So with that, I'll escape from presentation mode and see if there are questions in the chat.
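Coming back to the NFS concern: when GitLab.com is only up if every NFS server is up, the individual availabilities multiply, so each server you add lowers the ceiling for the whole site. As an illustration only, since the update doesn't state the actual per-server SLA, assume each server independently delivers 99.9%:

$$A_{\text{site}} \le \prod_{i=1}^{n} A_i, \qquad 0.999^{4} \approx 0.996$$

So the four new Git NFS servers alone would cap the site at roughly 99.6%, well below the 99.9% goal, which is why the graceful-degradation issue matters.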