Okay, let's start. Hi everyone, I'm Rémy Coutable from the Edge team. Today I'll update you on what we've done and what's coming for us.

First thing: the Edge team will now be part of the Quality department. One thing to mention is that the team is already working on improving the quality of different things, like our development workflows, our code base, general and technical debt, and also issue triage and community contributions. So it actually makes sense to be in this new department, which will be directed by BJ, who joined us recently. That also means we will focus more on GitLab QA in the coming quarter, which is a good thing, and BJ will help us identify the most critical things to prioritize from a quality point of view. I'm really excited about that, and I hope it will turn out great.

With that, on the community side: we merged 76 community merge requests in the 10.1 release. That's one more than in 10.0, so that's great. We still get a lot of new merge requests every release, which is awesome, and as usual, feel free to help triage and review them as you see them coming in.

Among the merge requests we merged, one really cool one lets you define custom attributes for users, thanks to Markus Koller. He also implemented custom attributes for groups and projects, which should be available in 10.2, and he provided the documentation as well, so that's great. A few other things: new creation services for SSH, GPG, and deploy keys, thanks to Haseeb. That was a backstage change, but those are always welcome, and it's always good to see the community work on them. Two other merge requests add the versions of GitLab components to the admin dashboard: thanks to Travis Miller for the GitLab Pages version, and to Jacopo Beschi for the Gitaly version.
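To give a feel for the custom attributes feature mentioned above, here is a minimal sketch in plain Ruby that models its semantics: a free-form key/value store attached to a record, where setting an existing key overwrites it. This is an illustrative toy model, not GitLab's actual implementation; the REST endpoints listed in the comment are the API shape, but the `CustomAttributable` module and example values here are invented.

```ruby
# Toy model of custom attributes semantics (illustrative only, not the
# real GitLab code). The REST API exposes equivalent operations, e.g.:
#   GET    /users/:id/custom_attributes
#   PUT    /users/:id/custom_attributes/:key   (params: value=...)
#   DELETE /users/:id/custom_attributes/:key
module CustomAttributable
  def custom_attributes
    @custom_attributes ||= {}
  end

  # Setting an existing key overwrites its value, mirroring the PUT endpoint.
  def set_custom_attribute(key, value)
    custom_attributes[key] = value
  end

  def delete_custom_attribute(key)
    custom_attributes.delete(key)
  end
end

# In 10.1 this exists for users; groups and projects should follow in 10.2.
class User
  include CustomAttributable
end

user = User.new
user.set_custom_attribute('department', 'quality')
user.set_custom_attribute('location', 'remote')
puts user.custom_attributes.inspect
```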
And last but not least among the merge requests I highlighted here are, oops, sorry about that, the ones that decrease the complexity of our code. That's been a continuous effort by Maxim Rydkin for a few months now, and there is still more to come: we still want to reduce the complexity of our code further. So that's great. There's also a continuous effort by blackst0ne from the core team to migrate our Spinach tests to RSpec, and we hope to get rid of Spinach before the end of the year. That would be awesome.

About testing, the great news is that we are now using high-CPU droplets for our runners, thanks to Tomasz from the CI/CD team, and thanks to the CI/CD team in general. Pipelines now run approximately two times faster, which is really great, and they are also more stable. Previously we had problems with noisy neighbors, where our pipelines could take anywhere between one hour and two or more, so it was not very stable. Right now it's way better: the whole pipeline always takes less than an hour, and the fastest runs are around 35 to 40 minutes. We also had 33 test-related merge requests. That's a continuous effort from all the teams, so thanks everyone.

The next topic is performance. I will talk about this more later, but we'll keep focusing on performance this coming quarter too. Jen-Shin already contributed one great improvement for the branches page: it makes a lot fewer calls to Git, so the page will load a lot faster. You will see that in the coming 10.2 RC. And thanks to Zeger-Jan, you can now categorize your changelog entries with the "performance" type, which is useful.

About QA: as I said, we will focus more on QA this quarter.
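On the "two times faster pipelines" point: besides faster droplets, the main lever is balancing spec files across parallel jobs by their recorded duration, which is the idea behind the knapsack gem our pipelines use. Here is a minimal greedy sketch of that balancing, a rough illustration only; the file names and timings are invented, and the real gem reads timings from a recorded report.

```ruby
# Minimal sketch of duration-based test balancing (the idea behind the
# knapsack gem). Timings below are invented for illustration.
def balance(files_with_durations, job_count)
  jobs = Array.new(job_count) { { files: [], total: 0.0 } }
  # Greedy heuristic: place the biggest file first, always onto the
  # currently lightest job, so job durations end up roughly equal.
  files_with_durations.sort_by { |_, duration| -duration }.each do |file, duration|
    lightest = jobs.min_by { |job| job[:total] }
    lightest[:files] << file
    lightest[:total] += duration
  end
  jobs
end

timings = {
  'spec/models/user_spec.rb'    => 420.0,
  'spec/features/login_spec.rb' => 380.0,
  'spec/models/issue_spec.rb'   => 150.0,
  'spec/lib/gitlab_spec.rb'     => 90.0,
}

balance(timings, 2).each do |job|
  puts format('%.0fs: %s', job[:total], job[:files].join(', '))
end
```

The trade-off mentioned later in the Q&A applies here directly: more parallel jobs means shorter wall-clock time but more total runner cost.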
So, thanks to Robert, the QA scenarios now create projects in subgroups of a top-level QA group, basically so we don't pollute the projects list. The point is that we plan to be able to run QA against staging, so it's better if we can create all the testing projects under a single top-level group. A great iteration from Richel, from the build team, is about testing the Mattermost integration using GitLab QA. I've linked it there; it's composed of four steps, and as you can see, they alternate between the QA project and the CE project. That's a great example of iteration, so really, thank you, Richel. We've built upon what was created there to add new scenarios to test backup/restore, the container registry, et cetera. We had approximately ten QA merge requests, so GitLab QA is getting more love, which is really great, and we'll make it more useful and more complete in the upcoming months. Hopefully we can eventually run it on every merge request.

About triage automation: we now run the automation daily and it's working great. We still have performance problems with our API, so we cannot do everything we need to, but we will try to improve the API so that our triage automation can do its work. We also plan to make the triage automation a gem, to be able to use it for any project in the future.

So, the Q4 OKRs. The problem in Q3 was that we had too many big, vague objectives, so now we're trying to have more precise key results. We will improve our seeder script to be able to populate development databases with a lot of data; thanks to Zeger-Jan and James, we should make great progress in the next few weeks. So that's great. And then, as I just said, we'll make the triage project a gem so that we can use it for any project.
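To make the seeder key result concrete, here is a rough sketch of what "populate a development database with a lot of data" could look like. This is a hedged illustration only: the `Project` struct and `seed_projects` helper are invented for this example, and the real seeders build actual database records rather than in-memory objects.

```ruby
# Hedged sketch of a bulk seeder for a development environment.
# Everything here is invented for illustration; a real seeder would
# create database records, not in-memory structs.
Project = Struct.new(:name, :issues)

def seed_projects(count:, issues_per_project:)
  (1..count).map do |i|
    issues = (1..issues_per_project).map { |j| "Issue #{j} of project #{i}" }
    Project.new("seeded-project-#{i}", issues)
  end
end

projects = seed_projects(count: 100, issues_per_project: 25)
puts "Seeded #{projects.size} projects, #{projects.sum { |p| p.issues.size }} issues"
```

The useful property for us is the knobs: being able to dial the counts up gives developers a realistically large data set for reproducing performance problems locally.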
Then we will implement three new scenarios in GitLab QA: testing the container registry, testing the upgrade from CE to EE, and testing a simple push. The last one basically verifies that when you push a commit, it triggers the expected actions and effects in GitLab. It's worth noting that other teams will also contribute scenarios; for example, the backup/restore one I mentioned before will be contributed by the Platform team, if I'm not mistaken.

Then we will try to deduplicate at least five redundant feature tests. We have a lot of tests, which is good, but I suspect we also have a lot of duplicated tests, and that's a waste, so we'll try to identify and remove the redundant ones. We'll also improve the five longest spec files, in terms of duration of course, by at least 30%. Then we will investigate code with less than 60% test coverage and add tests for the five most critical files. We'll also investigate a backstage improvement to encapsulate instance variables into a single object, basically to try to stop the madness of having a lot of instance variables passed through to the views; for now this one really is just an investigation. We will also try to reduce duplication in the views and in the forms. Nick has already done great work there, and we will build upon it. And we'll solve at least three outstanding performance issues; we already solved one for the branches page, so at least two more to go.

I think that's it for today, so if there are any questions, let me know.

There is a question: "What do we need to do to get the pipeline under 20 minutes? I see it is under 20 minutes sometimes. What does it take to get it under 10 minutes?" If we take the graph I showed earlier, this one, it is only about the tests, so it doesn't reflect the whole pipeline.
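The "encapsulate instance variables into a single object" idea is essentially a presenter, or view model: instead of the controller assigning `@project`, `@branches`, and half a dozen other instance variables for a view, it assigns one object that carries them all. A hypothetical sketch of what we'd investigate; the class and attribute names here are made up, not existing GitLab classes.

```ruby
# Hypothetical presenter object. Instead of a controller assigning many
# separate instance variables for the view, it assigns one object that
# bundles them. All names here are invented for illustration.
class BranchesPagePresenter
  attr_reader :project_name, :branches, :current_user

  def initialize(project_name:, branches:, current_user:)
    @project_name = project_name
    @branches = branches
    @current_user = current_user
  end

  # Derived values live on the presenter instead of in view logic.
  def default_branch
    branches.first
  end

  def branch_count
    branches.size
  end
end

# In a controller this would be a single assignment for the view to use.
presenter = BranchesPagePresenter.new(
  project_name: 'gitlab-ce',
  branches: %w[master 10-1-stable 10-2-stable],
  current_user: 'rymai'
)
puts "#{presenter.project_name}: #{presenter.branch_count} branches"
```

The appeal is that the view's inputs become one explicit, testable contract instead of an implicit set of instance variables.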
So, getting the whole pipeline under 20 minutes will be challenging, because it means we'd have to balance not only the test runs but also the other jobs that take more than 20 minutes. I don't think we have a lot of those right now, but we may have some. For the test jobs, there are a few solutions, of course: we could increase the parallelization, and we could also investigate using even more powerful droplets, though that has a cost. Increasing the parallelization is obviously a good way to reduce the duration; we just have to find a balance between the cost of a pipeline and its duration, but that's totally possible. Under 10 minutes? I think if we want the pipelines under 10 minutes, we'll definitely have to improve the other jobs too, not only the test ones. Five minutes? I don't know.

"Can we switch to spot instances?" I'm not familiar with spot instances, so I'm sorry, I cannot answer that one.

And the last question, from Gabrielle, about the pros and cons of trying to optimize our writes, like using our own data-consistency mechanism: yeah, there were some proposals in the past. I recall one from Nick, which was to use a file system that basically lives in RAM, so it's a lot faster. We could also try to tune Postgres, but we should keep in mind that we also test against MySQL, so it would mean a lot of work for MySQL too. If you could create an issue, I can share ideas there. And thanks for the link; I don't think we'll be able to reply right now, but I can create an issue to discuss this possibility.

Okay, if there are no other questions, I will give you 13 minutes back. Okay, cool, I will try to follow up on that. Thanks. Have a nice day, and see you. Bye.