Hey folks, good morning. Just wanted to check that you can hear me and see my screen; give me a thumbs up. Awesome, thank you. We'll give it 30 seconds more past the dot and then we'll start. Okay, I think we have critical mass here, so let's go ahead and get started.

First of all, welcome to our quality functional group update. As you've noticed, I'm not BJ; more on that in a second. My name is Mek. I joined about a month and a half ago as engineering manager on the quality team. This is our agenda for today: updates on the team, accomplishments, the releases, the OKRs, then concerns, and a Q&A at the end.

First, I want to share that BJ's contract with GitLab is ending at the end of this month. Some of you may already have known this, but I want to say thank you to BJ. He's done a great job starting the quality initiative and improving the quality and release processes. His contract with GitLab is ending, and this week is his last week at GitLab.

Next up, Remy has decided to transition into an IC role; he is now a staff developer. I also want to thank Remy. He's done a fantastic job leading the edge team so far, and he will continue to be a very strong leader for quality. I'm very happy to have him on the team. Thank you, Remy.

Next up, team updates. The quality and edge teams have merged; we are now the proper quality team. The high-level goals of the department can be found in more detail there. I'm not going to go over them exhaustively, but at a high level:

First, we need to ensure that GitLab consistently releases high-quality software. The "how" is to expand GitLab QA: make it fast, make it reliable, easy to run. We need to close all the test gaps and expand test coverage. There are so many things we need to test, for example email and LDAP integration, not to mention more collaboration with the security team on penetration testing, and improving the release process.

Second, we want to improve our developer efficiency and productivity. This is jump-started by our engineering metrics dashboard. We look at metrics such as MRs per release and the trends in the issues being filed (why so many S1s or S2s in a given timeframe?), and we'll make suggestions to improve our engineering operations.

And lastly, we want to ensure that the GitLab codebase maintains a high bar of quality for external contributions: reviewing community MRs, and making sure GitLab QA can easily be run in a contributor environment outside of our company, so contributors can test their changes before they enter our contribution pipeline. And we will continue to expand the team with test automation developers.

Accomplishments so far: we successfully normalized the bug severity and priority labels. This puts all the teams on the same unit of urgency, which is important. When we say severity and priority, we want engineering to say and mean the same thing: if it's an S1, if it's a P1, the meaning should be the same across the board. Then we can track our execution and whether we are delivering on our promise to our customers. An S1 or P1 should be delivered now; a P3 should land within the next three releases, or a given quarter. We're also working on deprecating the existing SLSP and AP labels as well.
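To make that tracking idea concrete, here is a minimal sketch (not the team's actual tooling) of how open bugs could be checked against such delivery windows via the GitLab REST API. The project path, the priority label names, and the SLO table are illustrative assumptions, not details from this talk.

```python
# Minimal sketch: flag open bugs that have outlived their priority's window.
# Assumptions (not from the talk): GitLab REST API v4, priority encoded as
# labels like "P1".."P4", and an illustrative SLO table in days.
from datetime import datetime, timezone
import requests

API = "https://gitlab.com/api/v4"
PROJECT = "gitlab-org%2Fgitlab-ce"  # URL-encoded project path (hypothetical target)

# Illustrative delivery windows: P1 now-ish, P3 within roughly three releases.
SLO_DAYS = {"P1": 7, "P2": 30, "P3": 90, "P4": 180}

def overdue_bugs(priority: str):
    """Yield (url, age_in_days) for open bugs breaching the priority window."""
    resp = requests.get(
        f"{API}/projects/{PROJECT}/issues",
        params={"labels": f"bug,{priority}", "state": "opened", "per_page": 100},
    )
    resp.raise_for_status()
    now = datetime.now(timezone.utc)
    for issue in resp.json():
        created = datetime.fromisoformat(issue["created_at"].replace("Z", "+00:00"))
        age = (now - created).days
        if age > SLO_DAYS[priority]:
            yield issue["web_url"], age

for url, age in overdue_bugs("P1"):
    print(f"{age:4d} days open: {url}")
```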
And a special thanks and shout-out to all the people here, Dawei, Sean, Yorick, Remy, Jenshin, Kathy, and Lee, for reviewing this process. It's now taking effect. Thank you.

Next up, we've started to codify and document the release process with Marin, and we want to improve the validation gap and the feedback loop. The release process is knowledge that gets passed down from person to person; we need to write it down, have it codified, have it signed off, and define the personas in a release and the types of releases as well. This is the same theme that was covered in Kathy's functional update yesterday.

Per-release metrics: we merged 106 MRs from the community in 10.7, so great job, and that's an increase over the last release. (A sketch of how to pull this number follows below.) Robert continues to play a crucial role in coordinating the release process, and we expect to see him in this role for the near future; it will be especially crucial when we migrate to Google Cloud.

Apologies that the emoji isn't showing correctly here, but this is a temperature red zone. 10.7 has had its hurdles; I think everybody knows about this. The later RCs have been unstable with GoDevoter coming in, and more on this will be touched on in the retrospective. A special shout-out to the people holding the fort here: Marin, Philippa, Mira, and James Lopez.

Updates on our OKRs. We signed up for many things as the quality and edge teams in Q1. In Q2 we are more focused and more strategic, and we are respecting our current resources. We've signed up to deliver the first iteration of the engineering dashboard charts and metrics, and we want to complete the organization of the EE files. Thank you, Andrew, and a shout-out to the product team as well for holding the fort on the release. So yes, going back: we want to complete the organization of the EE files and directories, so that going forward, when an EE feature comes in, we know exactly where it lives and it's not scattered all over the place. That should help increase engineering productivity and efficiency. We're also working hard on hiring: we want to source 100 candidates and hire two more test automation developers. Thank you, Nadia, and the recruiting team for all the help here.

Next up: concerns, and where we will probably need help. We've seen customers asking for an audit process of GitLab quality and the release process, and I see this coming up more often down the line: before we do a presale, we need to qualify as a vendor. Customers are asking us to open up our house and show them how we develop software, walking them through how requirements flow down to the release, how we check off features in a release, and how we validate that a feature is actually working before it goes into our customers' hands. The same goes for bugs and their resolution and tracking. They're asking, hey, how are you tracking severity and priority (you can notice the theme here), and how are you tracking against delivering on those severity and priority timelines? So we're working on a playbook for this. I think Mark has shared the name of the customer here. We are using this case as a scenario and an example to start codifying this information and making sure it's available going forward, and that we practice what we preach.
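Picking up that per-release metrics point, the community-MR count is the kind of number that can be pulled straight from the GitLab REST API. A minimal sketch, assuming a "Community contribution" label marking community MRs and milestone titles matching release numbers (both are assumptions, not details confirmed in this talk):

```python
# Minimal sketch: count community MRs merged into a release milestone.
# Assumptions (not from the talk): GitLab REST API v4, a "Community contribution"
# label on community MRs, and milestone titles like "10.7".
import requests

API = "https://gitlab.com/api/v4"
PROJECT = "gitlab-org%2Fgitlab-ce"  # URL-encoded project path (hypothetical target)

def community_mr_count(milestone: str) -> int:
    """Page through merged MRs for the milestone and count them."""
    count, page = 0, 1
    while True:
        resp = requests.get(
            f"{API}/projects/{PROJECT}/merge_requests",
            params={
                "milestone": milestone,
                "state": "merged",
                "labels": "Community contribution",
                "per_page": 100,
                "page": page,
            },
        )
        resp.raise_for_status()
        batch = resp.json()
        count += len(batch)
        if len(batch) < 100:  # last page reached
            return count
        page += 1

print(community_mr_count("10.7"))  # e.g. the 106 MRs mentioned above
```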
One more point I want to underscore here: we can no longer be lax around the release process. The bugs that we promised to fix, we have to verify before we sign off on a release, because if we show customers the RC or the release task and they ask, hey, why isn't this box checked, we should have a good explanation for it, and we should close the feedback loop there.

The next step: to be a proper quality team, we may be moving tasks around that are not related to the long-term quality roadmap and goals of the team. So you'll see me talking to engineering leads and trying to get this sorted out, for example off IH&N and security and such things. These need to be allocated to our domain experts on those topics. And that's the end. Thank you for listening. Any questions here?

Hey Tommy. Thank you, yes. You mentioned that we can't be lax on the release tasks and validating the features that we fixed or built and so on. But sometimes we can't test properly because the environment doesn't match up with what clients are going to be using, that kind of stuff. Do we already have an idea of how we're going to make that easier for us?

So there are a few discussions going on where we want to set up staging with the same testing capabilities as canary and pre-prod, so it's available for people to do their testing on. That's being worked on as we speak. You'll see more themes like this. The next step in our pipeline is setting up review apps for CE, so that in a merge request you have a review app environment to test your changes in. In addition, we have to work on our testing tools. For example, I was in a discussion this morning where we're testing OAuth and SAML, so we have to work on giving you a SAML setup tool that's easily available for everybody to test with.

Going back to Tommy's question here: when I said "proper quality team", what I wanted to say is that, per our history, a lot of things have been pulled into the edge team, helping out with things all around. I think that's great, and I think we want to have a culture of collaborating and helping other people. But we also have to maintain a focus on quality and on what we have to invest in to make us run faster and deliver on quality. Those are also important tasks that the team has to deliver on, and this is going to be tied into the next OKRs coming up for the quality team. And what are they? Review apps on CE, for example, which points to Clement's question. No, I don't have a timeline on this yet, Clement; I have a meeting with Josh this week to discuss it.

Hey Mek, this is Victor. Thanks for the updates. As the quality team grows and the infrastructure and tools grow: in the past we've said that we don't believe in dedicated people doing testing while the developers and, I guess, product managers do that separately. Is that still the philosophy, where the quality team is empowered, or mandated, to provide the tools and resources to help us do that? Or will that philosophy change, and will the quality team actually do testing themselves, whether automated or manual?

Great question. No, the mandate is not changing. It's the same as security: quality is everybody's responsibility. How I would phrase the quality team's role is quality as developer productivity. We focus on making quality easier to achieve by building tools and by automating things.
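On that tools-and-automation point, and the staging and review-app testing discussed above, here is a minimal smoke-test sketch of what running the same read-only checks against staging, canary, or a per-MR review app could look like. The environment URLs are hypothetical, and although GitLab instances do expose health-check endpoints, treat the exact paths here as illustrative assumptions.

```python
# Minimal sketch: one smoke test reused across staging / canary / review apps.
# Assumptions (not from the talk): hypothetical environment URLs, and
# illustrative endpoint paths for the health check and sign-in page.
import sys
import requests

ENVIRONMENTS = {
    "staging": "https://staging.gitlab.example.com",        # hypothetical URL
    "review": "https://review-mr-1234.gitlab.example.com",  # hypothetical URL
}

def smoke_test(base_url: str) -> bool:
    """Hit a few read-only endpoints and confirm the instance responds."""
    for path in ("/-/health", "/users/sign_in"):
        resp = requests.get(base_url + path, timeout=10)
        if resp.status_code != 200:
            print(f"FAIL {base_url}{path} -> {resp.status_code}")
            return False
        print(f"OK   {base_url}{path}")
    return True

if __name__ == "__main__":
    env = sys.argv[1] if len(sys.argv) > 1 else "staging"
    sys.exit(0 if smoke_test(ENVIRONMENTS[env]) else 1)
```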
I think going forward, quality is still everybody's responsibility. We appreciate people contributing to the tests, and that's how it should stay going forward. The people developing a feature have the tribal knowledge of where the weak areas are, and they need to collaborate with the quality team: hey, what tests do we need to add? Do we need unit tests, integration tests, UI tests, or even visual tests in the pyramid? That collaboration has to be there. Quality is everybody's responsibility, and that's the path forward.

Right, and then do you see your team evolving to set, like you mentioned, the processes and documentation that external forces are asking for? Do you envision that the quality team will also push for certain processes or standards in terms of quality and testing that individual folks should adopt?

That's a great question. Yes, it still has to happen in a collaborative manner, but for enterprise customers, when they want to audit something, the first team they'll look at is the quality team: hey, let's talk to the quality team, what's going on on the other side? So we are kind of the first line of defense there, the interface to the customers, and then we will point to the necessary information and people.

Great, thank you. A pleasure. Any other questions? Great questions, by the way. Cool, so I'll give it a countdown. And lastly, I want to say good luck and thank you to BJ. We appreciate all his contributions at GitLab, we wish him the best of luck, and he's going to send out a note later this week. Great, well, thank you everybody. I'll give you 15 minutes back. Thank you for joining us.