Excellent. Yeah, so let's go ahead and get started. I am clearly not Remy, but I'm filling in for him today; he's on vacation. So I'll be giving kind of an FGU on both the Edge team and what they've been working on, as well as the larger quality team. So let's get started. I will share my screen. All right. So everyone can see the slides? Yes.

So, quick overview: I'll go through the general organization around quality, how the teams are broken up, and what people are focusing on. I'm going to touch on... oh, you can't see the slides? Okay. Oh, I know what's wrong. Looks good now? All right, thanks, guys. I wonder if you can all see these windows too, huh. So, yeah.

Teams in quality. We have two major areas of focus in the quality department. There's the Edge team, which most people are familiar with. That's about improving the development process and codebase maintenance: things like how we separate CE and EE and make that clean, and more DevOps-style work. And then there's something a little newer, test automation and QA, which is about improving the quality of the product with end-to-end automated testing. We've been trying to build out a team to own that and separate it out so we can give it the attention and focus it really needs. Obviously we have a small team, and the Edge guys have been spearheading it. To that end, we recently hired Mek Stittri to build out that team, hire more people to focus on test automation and QA, and lead the automation of all of our end-to-end testing. So, a quick welcome to Mek; I believe he was able to join us for this, and we're really excited to have him here. He has a ton of experience doing this at other companies, he's a really passionate and outspoken advocate for quality in the Bay Area, and we're just really excited that he joined GitLab.

Similarly, the other thing we've been focusing on over the last few months, which many of you have probably seen come by, is the releases and improvements to our release process. If I had to boil it down to one mission statement for what we're trying to improve, it's this: avoid outages and major customer issues on GitLab.com by testing new features and bug fixes before they are deployed to production. Hopefully that isn't earth-shattering to anyone; it's a pretty common goal for most software companies.

What we're doing now to address that: we run our existing automated end-to-end tests, either manually or within pipelines. We take the kickoff document that we create at the beginning of any release, revisit it as the first RCs come out, and ask the product team to manually validate those features at a high level, or tell us that something didn't make it or is going to take longer. Then we put together a change list of all the bugs and other improvements that have come in for any given release candidate and ask the engineering team to validate those changes and confirm that tests were written and run for them. And most importantly, we test in our staging environment, which lets us exercise features and fixes the way a customer would approach them, but in a production-like environment that's isolated from production. We're able to iterate more quickly there without subjecting customers to things that may be half-baked.
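To make that staging step concrete, here is a minimal sketch of what a test at that level can look like, written as a plain RSpec/Capybara browser test. The paths, field names, sandbox project, and environment variables are illustrative assumptions for this sketch, not GitLab QA's actual page objects or DSL:

```ruby
# Minimal sketch of a staging smoke test: sign in and file an issue
# the way a customer would. Selectors and paths are placeholder
# assumptions, not GitLab QA's real page-object DSL.
require 'capybara/rspec'

Capybara.run_server = false
Capybara.default_driver = :selenium_chrome_headless # needs a recent Capybara and chromedriver
Capybara.app_host = ENV.fetch('STAGING_URL', 'https://staging.gitlab.com')

RSpec.describe 'issue creation smoke test', type: :feature do
  it 'signs in and files an issue' do
    visit '/users/sign_in'
    fill_in 'user_login',    with: ENV.fetch('QA_USERNAME')
    fill_in 'user_password', with: ENV.fetch('QA_PASSWORD')
    click_button 'Sign in'

    # Hypothetical sandbox project reserved for test traffic.
    visit '/qa-sandbox/qa-test-project/issues/new'
    title = "Smoke test #{Time.now.to_i}"
    fill_in 'issue_title', with: title
    click_button 'Submit issue'

    expect(page).to have_content(title)
  end
end
```

The point of keeping something like this plain is that anyone on the engineering team can read it top to bottom without learning a new harness first.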
While this has allowed us to find some issues and catch things before they reach customers, it is not a scalable process, and it's definitely limiting the speed and efficiency with which we're able to deploy to production. Where we want to go next, in the short term, is to have our automated tests run against each merge request automatically, and to make sure the results they generate are easy to access, easy to interpret, and trustworthy. If you can't trust the results and you can't interpret them, the tests are useless. We also want new tests written alongside features and bug fixes, and we want them to be easy to write, so that the whole company, or at least the whole engineering team, can contribute. Ultimately what we want is test coverage and reliability high enough that releases are dictated by the complexity of what we're trying to develop, not by the complexity of our release process and tooling. Those are the major things we're trying to accomplish with these release changes over the last few months, and where we're heading next. If anyone has questions about that or wants more info on what we have planned, please reach out. You can also take a look at our OKRs, which are written around some of these improvements. Let me quickly check the chat and see if anyone has questions. All right, I've completely lost this thing. All right, maybe not.

So, moving on to accomplishments. We're welcoming one new member of the core team, Jacopo. We merged 74 merge requests from the community in 10.6. That's down from 90 in 10.5, but it's actually an increase in the percentage of community merge requests out of the total, so that's a great thing. A big shout-out to the frontend and backend teams for reviewing and merging all of those; they're doing a fantastic job. Also, big thanks to Robert, Luke, and Marin, who did a great job on the 10.4 release and really cemented some of the best practices we're now using in our release process. There's a lesson in there about being too good at your job: Robert has graciously accepted handling the next few releases along with James, so the rest of that team has a chance to make some longer-term improvements to the release tools and the release process. We merged 73 Edge-related merge requests in 10.5, and a larger group within the backend contributed to that as well, which is great. And as of roughly halfway through Q1, we had achieved more than 50% of our Q1 OKRs in less than 50% of the quarter, so that was a good pace. We'll revisit those when Remy gets back and check in on how we're doing toward the end of the quarter.

The next two slides are an overview of our OKRs. I won't read each one, just touch on the highlights. Earlier in the quarter we completed the work to make GitLab QA, our automated test framework for end-to-end testing, production-ready. A huge team contributed to a lot of aspects of that, called out there, so big thanks to those folks. We were able to contribute a few more tests to it; as I mentioned, test coverage is still pretty low and something we want to improve and keep an eye on in the quarters to come.
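To give a flavor of the "easy to access and interpret" goal for those per-merge-request runs, here is a rough sketch that pulls the latest pipeline and its job statuses for a merge request through documented GitLab API endpoints. The project path, token variable, and output format are assumptions for illustration:

```ruby
# Sketch: summarize the latest pipeline for a merge request so a
# reviewer can see test results at a glance. Endpoints are the
# documented GitLab v4 API; project and MR values are placeholders.
require 'net/http'
require 'json'
require 'uri'

API     = 'https://gitlab.com/api/v4'
PROJECT = URI.encode_www_form_component('gitlab-org/gitlab-ce') # placeholder path
TOKEN   = ENV.fetch('GITLAB_TOKEN') # personal access token with api scope

def api_get(path)
  uri = URI("#{API}#{path}")
  req = Net::HTTP::Get.new(uri)
  req['PRIVATE-TOKEN'] = TOKEN
  res = Net::HTTP.start(uri.host, uri.port, use_ssl: true) { |http| http.request(req) }
  JSON.parse(res.body)
end

mr_iid    = ARGV.fetch(0) # merge request IID, passed on the command line
pipelines = api_get("/projects/#{PROJECT}/merge_requests/#{mr_iid}/pipelines")
latest    = pipelines.first
abort('no pipelines for this merge request') if latest.nil?

jobs = api_get("/projects/#{PROJECT}/pipelines/#{latest['id']}/jobs")
puts "Pipeline ##{latest['id']}: #{latest['status']}"
jobs.each { |job| puts format('  %-8s %s', job['status'], job['name']) }
```

A one-screen summary like that is the sort of thing that makes results trustable enough to act on without digging through raw logs.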
On the Edge side, one of our biggest undertakings this quarter is separating CE and EE. We've made about 50% progress on files, and a little less on lines of code, but we're slowly chipping away at it. We've been able to start progress on a lot of these Edge OKRs, and we'll have to take a look at how we're tracking against them. For example, we're down to around 30 to 35 minutes for the CI pipeline thanks to some improvements, and there's obviously room to go from there.

Some general heads-ups. Mark was our release manager for 10.5, which affected his progress against some of his goals for this quarter, but he should be able to turn back to those now. As I mentioned, Robert is going to be part of the release team for the next few releases; although that will limit what he's able to do elsewhere, hopefully it'll be a good chance to take a look at some of our release tools.

And lastly, this was an idea that Remy proposed: he created a quiz as a way to reinforce and cement some of the deluge of information you get as you go through the onboarding process, at least within engineering. It's a test-your-knowledge quiz on some of our best practices and guidelines for contributing and reviewing, what you do when the build breaks, and things like that. It's just a fun way to reinforce that information. We're not really tracking or doing anything with the data; it's purely for your own sake. You can answer the questions, see how you did, go back and try it again, and get some links on where to go for more information. We're not, you know, selling your data or destroying democracy; it's just a tool that people can use for themselves. If people find it helpful and useful, we'll definitely expand on it and maybe incorporate it into the onboarding checklist that people follow, at least for engineering.

That's all I had, so thanks for your attention. I will attempt to check the chat. There you go, I can kill that. So, yeah, let's see if there are any questions. Doesn't look like it, but feel free to jump in and grab the mic if you do have one. And like I said, if anything comes up and you want more info on what we're working on in quality, please reach out to me, or go say hi to Mek. Have a great Tuesday, or whatever day it might be for you. I'll give it a couple minutes. Oh yeah, please go ahead.

"The onboarding quiz, is it the general onboarding or release onboarding, or...?" It's engineering-specific right now; it covers a lot of what's under the engineering workflow and some of the other engineering onboarding docs. If other teams want to adopt it, I'm happy to help them do that. Examples of questions are things like: if the build breaks, what do you do? Or: what are the criteria for merge requests? So yeah, it's engineering-specific, and it's trying to reinforce some of those best practices.

"Do we have a reward if somebody gets the top score?" My wholehearted thanks and appreciation, for now. We'll see.

There's a question in chat: do the end-to-end tests measure performance as well? Yeah, great question, and definitely something we want. You get some of that for free: as you look at test results you see the runtime, so you can at least see how long your tests are taking.
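On that performance point, a rough illustration of the kind of timing check that could ride along with an end-to-end test might look like the following; the budget, target URL, and environment variable are placeholders, not anything we have today:

```ruby
# Sketch: time a request during an end-to-end run and fail if it
# blows past a budget. Budget and target URL are placeholders.
require 'benchmark'
require 'net/http'
require 'uri'

BUDGET_SECONDS = 2.0 # arbitrary example budget
uri = URI(ENV.fetch('STAGING_URL', 'https://staging.gitlab.com'))

elapsed = Benchmark.realtime { Net::HTTP.get_response(uri) }

puts format('GET %s took %.2fs', uri, elapsed)
abort("too slow: #{elapsed.round(2)}s > #{BUDGET_SECONDS}s") if elapsed > BUDGET_SECONDS
```

Tracking numbers like that release over release is what would let us trend performance rather than just spot-check it.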
And we would definitely love to have more stress-based and performance-based tests that we can track and trend, and specifically tests of GitLab.com's performance so we can make sure it's improving. Anything else? Cool. All right guys, have a great day.