Hi, everyone. Welcome to this new Edge team functional update. Let's get started. I'm Rémy, and I'm the Edge team lead. Today we'll go through some achievements, then the Q3 OKRs, and questions.

First achievement: Jen-Shin joins the team. Jen-Shin was in the CI/CD team; he has joined the Edge team, and he will help us crush our Q3 OKRs. So welcome, Jen-Shin. He has already contributed a few awesome things to the Edge team, so I'm really excited about that.

Second achievement: we made the pipelines, and specifically the feature tests, faster by basically bypassing the sign-in procedure. To explain briefly: almost every feature test starts with signing in a user and then performing some actions on the website. And the thing is, going to the sign-in page, filling in the form, and submitting it takes a few seconds in each feature test. By bypassing this step, which is already tested separately, we save a few seconds for each test, and hopefully it reduces the duration of the pipelines. As you can see in the graph, it's not obvious, because our runners are affected by noisy neighbors, so we don't see a big drop. But as you can see for yourself, the pipelines now run more on an average of 50 minutes, where they were previously running in more like 75 minutes. So thanks, Robert, for finding this and for changing the implementation. On the same subject, we also merged 27 test-related merge requests in the 9.4 milestone.

The next achievement is kind of related: we are replacing Spinach with RSpec. Spinach was a behavior-driven development testing library, and we just replaced it with RSpec. It allows us to reduce the parallelization of the Spinach jobs on CI and to increase the RSpec ones, so I think that also helps a bit to have faster pipelines. And yeah, thanks blackst0ne and Alexander Randa from the community for the migration from Spinach to RSpec.
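To give an idea of what the sign-in bypass can look like, here is a hedged sketch of a spec helper using Warden's test mode, as Devise-based Rails apps can do. The file path, helper name, and module are assumptions for illustration, not necessarily GitLab's actual implementation:

```ruby
# spec/support/sign_in_helpers.rb -- illustrative sketch only, assuming the
# app authenticates through Warden; the helper name `bypass_sign_in` and
# this file layout are hypothetical.
require 'warden'

Warden.test_mode!

module SignInHelpers
  # Sets the signed-in user directly in the session, skipping the
  # visit-page / fill-form / submit round trip that the form-based
  # sign-in flow (which is tested separately) would take.
  def bypass_sign_in(user)
    login_as(user, scope: :user)
  end
end

RSpec.configure do |config|
  config.include Warden::Test::Helpers
  config.include SignInHelpers, type: :feature
  config.after { Warden.test_reset! }
end
```

Saving a few seconds per feature test this way adds up quickly across a suite with hundreds of feature tests.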
We also enabled a few more RuboCop cops, to have more static analysis checks.

The next achievement is about community merge requests: we merged 64 community merge requests in 9.4. As you can see, we did better, but we also did worse, so that's still pretty good, I think. Our community is very active; for example, for the internationalization of GitLab, we currently have a lot of merge requests, and that's awesome.

The next achievement is about GitLab QA. If you don't know how GitLab QA works, I encourage you to check it out with the first link. It's a document I wrote because I was having a hard time understanding how it works, so feel free to read it and improve it. The actual achievement is that you can now run QA against any commit, so against any CE or EE merge request, basically. This was a team effort, so thanks especially to Balasankar and Grzegorz for this. The next slide shows you how to do that. Maybe you've already seen the new manual build-package job in the pipelines for a few weeks now. If you play it, it will trigger a pipeline in Omnibus; what this pipeline does is build a new package, then build a Docker image from it, and then actually run the QA scenarios against it. It currently takes some time to run, but the build team wants to speed up these specific triggered jobs, so I'm looking forward to that.

Next, the Q3 OKRs. For Q3, we'll basically focus on three areas. The first one is the productivity of GitLab developers in general, so both from the team and from the community. The first key result will be to have the pipelines running in 30 minutes; they are currently running in 50 minutes, as I said before. We'll need to see if we can find some quick wins, like the bypassing of the sign-in, or if it will come down to just throwing more hardware at it. We'll see.
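As a rough illustration of the "play a manual job, trigger a pipeline in Omnibus" flow described above, a job like this could be defined in `.gitlab-ci.yml` using GitLab's pipeline-trigger API. The job name, token variable, and project ID placeholder are assumptions, not GitLab's actual configuration:

```yaml
# .gitlab-ci.yml -- illustrative sketch; names are hypothetical
package-and-qa:
  stage: test
  when: manual                # appears as a "play" button in the pipeline
  script:
    # Trigger a pipeline in the Omnibus project for this commit. That
    # pipeline builds a package, builds a Docker image from it, and then
    # runs the QA scenarios against the image.
    - curl --request POST
        --form "token=$OMNIBUS_TRIGGER_TOKEN"
        --form "ref=master"
        --form "variables[GITLAB_VERSION]=$CI_COMMIT_SHA"
        "https://gitlab.com/api/v4/projects/<omnibus-project-id>/trigger/pipeline"
```

The `when: manual` keyword is what makes the job wait for someone to press play instead of running automatically.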
The next one is to detect and retry flaky tests in a smart way, so that we don't have a broken master. Today we often have red master commits. We plan to detect the flaky tests and kind of ignore them or retry them in a smart way, and ultimately fix them, of course.

The next one will be to try to merge CE into EE daily. This is already done, but it's done manually today. The plan is to do it automatically using scheduled pipelines, and also to refactor the code base in a way that avoids conflicts beforehand; otherwise the automatic merges will still require a lot of human labor, and that would be pointless.

The last point in the productivity area will be to be able to run the GDK on Kubernetes, using Minikube, to abstract away the GDK installation and setup.

The next area will be about quality. It will be about GitLab QA: basically testing a few critical components like backup/restore, the LDAP integration, the container registry, and Mattermost, because from time to time we have some bad regressions in these features. So it's good to test them at a high level and ensure that they still work as intended. The next point is about triage policies: we'll try to enforce them automatically and to keep our issue tracker sane and under control, because it's growing fast. We'll see what we can do, but we'll use a scheduled pipeline and so forth.

The last area of focus will be about performance. The first point is to ship a seeder script to seed a lot of records in the database, so that developers are able to experience performance problems locally. It will not be a replica of production, of course, but it will already surface the problems that can happen with a lot of records. And the last point is enabling Bullet by default on CI. Bullet is a tool to detect and avoid N+1 query problems, and the plan is to enable it on CI to detect existing issues and to avoid new ones.
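The "retry flaky tests in a smart way" key result above could look roughly like the sketch below: a bounded retry that also records which tests only passed on a retry, so they can be reported and ultimately fixed rather than silently ignored. This is a minimal plain-Ruby illustration of the idea, not the actual implementation; the class and method names are made up.

```ruby
# Hedged sketch: retry a failing test a bounded number of times, and keep a
# log of every test that needed retries (i.e. is flaky) so it can be
# reported and ultimately fixed.
class FlakyRetrier
  attr_reader :flaky_log

  def initialize(max_attempts: 3)
    @max_attempts = max_attempts
    @flaky_log = []
  end

  # Runs the block; on failure, retries up to max_attempts total runs.
  # A test that fails and then passes is flaky: it is logged instead of
  # breaking master. A test that never passes re-raises its error.
  def run(name)
    attempts = 0
    begin
      attempts += 1
      result = yield
      flaky_log << { name: name, attempts: attempts } if attempts > 1
      result
    rescue StandardError
      retry if attempts < @max_attempts
      raise # genuinely broken, not flaky
    end
  end
end
```

A real implementation would hook into the test framework and persist the flaky log somewhere visible, so the "ultimately fix them" part of the plan has data to work from.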
And that's it for the Edge team Q3 OKRs. If you have questions like "What's the Edge team?" or "Where can I find statistics about our test suite?", and if you have more questions, I will take a look at the chat.

"What's the average number of community merge requests?" I'm not sure I get it. The average count of merged merge requests, maybe? Yeah, the average number of community merge requests that we get per version? So if you take a look at the graph again, I will share it again. On average, I would say it's around 60. You can see that for 9.0 we had more than 130, probably, but most of the time it's more like 60. It also depends on the load on reviewers, of course, and maintainers, so it's not always the same. But I think we are doing a pretty good job.

OK, next question from the chat: "How large a drive do I need for the DB?" What was the context for this question? Ah, for the large, production-like test seeder. How large? It depends; you will be able to limit the number of records created, but I think a good number is something like one million projects. I think that would be a good goal. I'm not sure how large that would be on the file system, but I think that's OK. And yes, there was a troll emoji, so OK.

There are no other questions, so thanks for listening. I will give you back 16 minutes of your time, and see you in the team call. Bye-bye, have a nice day.