Okay, I think we can start. You should be able to see my screen; I'll just switch it to presentation mode now. So welcome, everyone, to the Functional Group Update for the CI/CD team.

Let's maybe start with the accomplishments. As usual, we did a crazy amount of work, even if not everything landed. For 9.3 we merged a lot and closed over 70 issues, which is basically crazy given how focused everyone has been on getting our things shipped. We delivered some very cool features, like pipeline graphs for multi-project pipelines, and we are right now finishing artifacts downloading with the job token. With the help of Dimitri and Filipa, we also shipped the first iteration of GitLab Code Quality, so thank you very much for jumping in. With 9.4 we also have an option to store artifacts on object storage, so we are starting to migrate our data away; that involved some work on the front end and some on the back end. We also learned a lot, unfortunately in a quite unpleasant way, from our database problems, which I will talk about a little more in a few seconds. I think our biggest accomplishment, though, is having SINIA join us as an intern, so welcome, SINIA. We also have a senior CI/CD developer joining on the 14th of August, so we are really looking forward to the new manpower.

But if we talk about the lowlights, as usual not everything went as planned. One of the lowlights is that we made a lot of changes to the job log, but they didn't work out as we expected, so we are now backing out of that decision and going back to the previous design, which seems to be much better than what we have now with the in-div scroll instead of the full-page scroll. That work also showed there are some problems with the new navigation bar that is right now present on dev. This is something Filipa is pretty much finishing right now, and we reverted some of these changes for 9.4. We also spent a crazy amount of time on the database migration; you probably already heard about these issues at the retrospective. We are going to try it again, but with a changed approach, after a very extensive discussion about the database, being able to reproduce all the deadlocks and problems, and understanding why they happened in the first place. We will now be doing this migration again using the newly introduced background migrations, so we still migrate the data, but the migration may take hours or maybe even days if that's really needed.

Looking at 9.4, at what is happening and what we plan to ship, it's actually really crazy, because it seems we scheduled a lot of updates for variables. We are working on pipeline schedule variables; I know the Build team is looking into that very closely. Group-level secret variables are also a feature that is heavily requested by the community. We are also working on environment-specific variables, a Premium feature, which in general improves the security of GitLab. Also, in GitLab Runner, due to the problems we faced recently, we are introducing better timeouts and cache policies. Cache policies are actually being implemented by someone outside of the team, Nick, so thank you very much, Nick, for doing that.
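Since cache policies came up, here is a minimal sketch of how they could be used once they ship, assuming the policy keyword ends up as a per-job cache setting with pull and pull-push values; the job names, cache key, and paths are illustrative only:

```yaml
prepare:
  stage: build
  cache:
    key: gems
    paths:
      - vendor/ruby
    policy: pull-push   # the only job that re-uploads the cache after the run
  script:
    - bundle install --path vendor/ruby

rspec:
  stage: test
  cache:
    key: gems
    paths:
      - vendor/ruby
    policy: pull        # test jobs only download the cache, never upload it
  script:
    - bundle exec rspec
```

With something like this, a pipeline with dozens of test jobs pays the cache upload cost only once, which is exactly the kind of waste described for GitLab CE below.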
We are also extending our Docker configuration to support services, entrypoints, and service aliases (see the sketch at the end of this update), and we are working on a lot of real-time stuff and other security improvements.

What is also interesting for us and our daily work is GitLab.com. We had a very ambitious plan in the CI production readiness issue, and we actually have quite good progress on it, because we improved a lot in how we handle auto-scaling and how our system behaves in case of failures. We also almost finished creating the base image that we will be using for every machine that is provisioned. I will not go into detail on what exactly this image will contain, for a number of reasons, but it will include additional security measures to make sure that, from the infrastructure point of view, we are aware of what is happening in our jobs. Once we have this base image, it is one step forward towards having Prometheus monitoring for every job that runs on GitLab.com, which is really cool. I believe we should have the console pretty much ready in one or two weeks from now, and then we are ready to start some tinkering with Prometheus monitoring for the auto-scaled machines.

Why is this important? Because sometimes we face problems. Recently we faced problems with the cache server; you can actually see there are four different issues for the different outages of the cache server. Basically, the problem was that there were a number of misconfigurations, which we have since improved, and that the server was not really scaled to the load that was put on it. Also, the way our Runner worked with the cache server was not the best, because we did not have timeouts in place for cache operations, which in turn made our release process go much less smoothly, because we had CI jobs getting stuck on fetching or pushing caches. We are making the timeouts configurable and increasing the cache server capacity, and it is already showing that this helps. And also cache policies: GitLab CE is a very unique case where we have over 50 jobs, and each of these jobs currently pulls and pushes the cache, which is very wasteful, to be fair. With the release of the new GitLab Runner version, basically only one job will push the cache, and every other job will only fetch it, not try to update it. We are also preparing a lot of changes for high availability of the cache server, and for monitoring, so that we become aware of potential problems much sooner instead of being notified by our customers, the community, or someone from the team that something is not working properly.

But the cache server is just one of the problems; another is cryptocurrency miners. I will not go into details, because there is a detailed description of what we are doing, what it costs, and what the mid-term plans are for dealing with the Bitcoin miners. To be fair, it's basically not fully solvable, but by implementing a number of measures, some of which are already in place, we can make it much harder for people to abuse GitLab.com. As you can see, this is a graph of the last seven days, one week, and I would say: guess when we implemented some of the measures against the Bitcoin miners.
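Coming back to the extended Docker services configuration mentioned at the start of this update, here is a rough sketch of what the .gitlab-ci.yml syntax could look like once this ships; the image names, the alias, and the entrypoint are placeholders, not a final spec:

```yaml
test:
  image: ruby:2.4
  services:
    # a service the job can reach under the short, stable alias "db"
    - name: postgres:9.6
      alias: db
    # a service whose default entrypoint is overridden
    - name: registry.example.com/group/helper:latest
      alias: helper
      entrypoint: ["/usr/bin/helper", "--listen", "0.0.0.0:8080"]
  script:
    - PGHOST=db bundle exec rspec
```

The alias would give a service a short, predictable hostname instead of one derived from the full image name.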
To show the effect of those measures: our GitLab.com shared runners' capacity is right now configured to 800 jobs at a single time, though because we are often upgrading the GitLab Runner manager, we don't always have the full 800 capacity. As you can see, previously it was very easy to saturate the shared runners with Bitcoin miners, and after just introducing this change the expected number of shared runner builds running on average is around 100-150, not 800. That's the main difference here.

Going outside of GitLab.com, I know that a lot of our customers are asking for Jenkins integration, and I know that the GitLab plugin for Jenkins is not really maintained by anyone and still uses GitLab API v3. For now we are focusing on making sure that the GitLab plugin for Jenkins continues working after 9.5, which is basically when we are going to retire API v3 and replace it with v4. But if you go to these two issues, we want to change a little how we work with Jenkins. First of all, we want to ensure that people using GitLab get a very good experience with Jenkins, and that they choose GitLab CI because GitLab CI is simply more awesome, not because they are limited to GitLab CI since the Jenkins plugin is not maintained by us. So we will continue supporting Jenkins. We have, I believe, a decent long-term plan for how we want to support Jenkins in the best way possible, to make sure that to some extent Jenkins is on par with GitLab CI, and to encourage organic growth of GitLab CI rather than a forced one. So if you are interested in GitLab plugin development, please go to these two issues; you will find more information there.

I also have one announcement: Jenshin will be joining the Edge team. Thank you for your hard work for the CI/CD team, Jenshin. We thought about this because for some time Jenshin has been doing a lot of backstage work: CI performance, our testing performance, and a lot around community contributions, which is exactly what the Edge team does. We thought Jenshin would be a great person to be the CI specialist embedded in the Edge team, because Jenshin knows very well all the backstage quirks of how GitLab CI works, and it seems that Jenshin's work, focused on backstage performance, test performance, and stability, will help everyone in the company, not only the CI/CD team specifically. So Jenshin, congratulations, thank you for your work on the CI/CD team, good luck as the Edge team specialist, and keep pushing changes to CI.

I believe that's it. Do we have any questions? Andrew stumbled across this the other day when looking into GitLab plugin maintenance. And Jenshin, you are still welcome at our CI/CD team updates every week. Okay, so thank you everyone, have a good day, and see you at the team update.