Thank you so much. It's been a while since we talked, and we have a lot of exciting updates to share. This is our proposed agenda.

First, team updates. Since the last time we talked, we have onboarded two new team members, Ramya and Sanad. Ramya comes from Amazon, PayPal, and Freshworks; you may have seen these logos before. Sanad comes from ThoughtWorks, which has a well-known community and deep knowledge around test automation and the test pyramid, and is also the home of Martin Fowler. We are very excited to have them on our team. This is the current roster. We have now started defining member specialties and stable counterparts for our teams. Our recent hires, Dan, Mark, Ramya, and Sanad, are currently assigned to the Dev stage, and we have two more new hires that we are looking to assign to the Dev stage as well.

Moving on to updates on our Q3 OKRs and their current progress. Implementing the review apps for CE and EE is still tracking at 45%; there are some challenges that I will highlight towards the end, but the implementation in the CE repo is almost done, along with our environment cleanup policy. Triaging and adding proper priorities and labels to bugs is roughly 80% done, which is the area where we made the most progress; I'll highlight some of this in our charts after this as well. The triage mechanism has been implemented, which includes the GitLab bot mentioning people and reminding them to triage issues. We have also implemented a triage package that we will show later in the update. Sourcing- and hiring-wise, we are tracking at 75%. We sourced around 70 candidates, we have hired two, and we're looking to hire one more if we can make it in time by the end of Q3.

Other accomplishments. We started an issue grooming assistant to track missed deliverables, regressions, and bugs, thanks to Jen-Shin. You can see this in the form of the GitLab bot reminding people: hey, if you added a regression label, make sure you also add the version-specific regression label. If an issue marked as a deliverable is past its milestone date, we automatically mark it as a missed deliverable, and we also track which milestone was missed.

We're continuing to iterate on the GitLab Insights dashboard. We have now implemented top-level team views with a summary for each team. This is available for the CE and EE repos, so if you click the links here, you can take a look at the dedicated views for Create, Manage, and Plan, and the metrics that belong to those teams. We've also kickstarted implementing this feature inside GitLab itself; the kickoff meeting is available on YouTube. Thank you, Victor. This is the first feature that we're going to implement as part of GitLab.

Moving on, we've also implemented an automatic triage package. If you are an engineering manager, you may have received an email this morning saying, hey, here is the list of the top 15 issues that you should triage this week. This is part of our effort to scale triaging across the whole engineering organization. Every week going forward, we'll be assigning a triage package to each team to help us triage issues. It's only 15 issues per team per week, and it scales across the different teams. There's a sample of the output here. Thanks to Jen-Shin. We have also set up a dedicated project for all things triage, so if you're looking to file a bug or raise a feature request, please visit this project.
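For concreteness, here is a minimal sketch of how a weekly triage package like this could be assembled with the python-gitlab client. The package size matches the 15 issues mentioned above, but the team labels and the "S1".."S4" severity scheme are illustrative assumptions, not the team's actual configuration.

```python
# Sketch: assemble a weekly "triage package" of 15 issues per team.
# Assumes the python-gitlab client; the team labels and the "S1".."S4"
# severity scheme below are illustrative, not the real configuration.
import gitlab

PACKAGE_SIZE = 15
TEAM_LABELS = {"Create": "Create", "Manage": "Manage", "Plan": "Plan"}

gl = gitlab.Gitlab("https://gitlab.com", private_token="<token>")
project = gl.projects.get("gitlab-org/gitlab-ce")

def triage_package(team_label):
    """Pick the oldest open issues for a team that have no severity label yet."""
    issues = project.issues.list(
        state="opened", labels=[team_label],
        order_by="created_at", sort="asc", per_page=100)
    untriaged = [i for i in issues
                 if not any(l in ("S1", "S2", "S3", "S4") for l in i.labels)]
    return untriaged[:PACKAGE_SIZE]

for team, label in TEAM_LABELS.items():
    package = triage_package(label)
    print(f"{team}: {len(package)} issues to triage this week")
    for issue in package:
        print(f"  #{issue.iid} {issue.title}")
```

A production version would page through all results and deliver the package by email or as an issue, as the bot described above does.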
And we've also initiated a test planning process for new features, thanks to Mark, Sanad, and Ramya. We kickstarted this with 11.3; all the test plans for CE and EE are listed here, so please click through and take a look. We will continue to do this for 11.4 and beyond.

Moving on to things that are in progress. We're still working on tracking our end-to-end test coverage. This is our first step as a team to track all test coverage in the end-to-end layer, and it's going to be a single source of truth for test coverage; the link is here. The second iteration of GitLab Triage is well underway. We've now proven that the labels work; going forward, we're going to apply the version-specific labels immediately and then ask people to correct them later. The rest are still in progress. We continue to build out our environment-specific CI: everything is done except for production and canary. These are the projects that will run tests on production for functional monitoring. We're still working on transferring ownership of the QA runners from the distribution team, defining third-party services, and speeding up the tests to run in parallel.

Next up, let's take a look at the engineering metrics, starting with the state of bugs. We still have some open S1s in CE, but the good news is that we don't have any open S1s in EE, and the bug count for this month is stable and trending down. We're doing better than we did last time on this metric, but it's not the end of September yet, so it is still in progress. The links at the top are available for you to go take a look as well. Moving on, this is our progress on triaging severity and priority. These are the metrics for July; the blue portions are the untriaged issues, the ones still missing severity or priority labels. We're making good progress. The right-hand side is our current state. We still have a lot of work to do, but we've cleaned out bugs that are older than six months, and we're focusing on the ones that were created within the six-month timeframe.

Moving on, I'm going to let Mark Fletcher take over here and showcase what we implemented recently for our top-level team views. Mark, are you there? Yeah, I'm here. Thank you. So we recently started importing the issues for CE and EE and displaying them at a team level, because we thought this would be useful both for tracking the severity and priority of bugs and for breaking that down per team; we have the overall percentages per project as well. The other things we started displaying are regressions per team and missed deliverables per team, because we thought those would also be useful. Any suggestions for new graphs are really welcome, because we want these to be useful for each team at the team level. The other thing we're trying to implement: currently we do this per project, but we know that some teams work across projects. They might work in GitLab Shell or GitLab Workhorse, and obviously in both CE and EE. So we plan to combine these at a top level, where teams can add the projects that they're interested in, and the graphs will be based off of that. Also at the team level, we're planning graphs representing technical debt and merge requests that have been merged per release; we think those would be useful at the team level too, and maybe the bugs that have been closed.
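A hedged sketch of the cross-project roll-up Mark describes: severity counts for one team, summed across the projects it cares about. python-gitlab is assumed, and the project list, team label, and severity labels are illustrative assumptions.

```python
# Sketch: per-team severity breakdown across multiple projects, the kind of
# combined view described above. Assumes python-gitlab; the project list,
# team label, and "S1".."S4" severity labels are illustrative assumptions.
from collections import Counter
import gitlab

TEAM_PROJECTS = [
    "gitlab-org/gitlab-ce",
    "gitlab-org/gitlab-ee",
    "gitlab-org/gitlab-shell",
    "gitlab-org/gitlab-workhorse",
]
SEVERITIES = ["S1", "S2", "S3", "S4"]

gl = gitlab.Gitlab("https://gitlab.com", private_token="<token>")

def severity_breakdown(team_label):
    """Count open bugs per severity for one team, summed across projects."""
    counts = Counter()
    for path in TEAM_PROJECTS:
        project = gl.projects.get(path)
        for sev in SEVERITIES:
            issues = project.issues.list(
                state="opened", labels=[team_label, sev], all=True)
            counts[sev] += len(issues)
    total = sum(counts.values()) or 1
    return {sev: f"{n} ({100 * n / total:.1f}%)" for sev, n in counts.items()}

print(severity_breakdown("Create"))
```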
So I'm going to read out some questions as well. But first, if you have any other suggestions for team-level or project-level charts that you think would be useful, please raise an issue. On to the questions. Are the S1s still non-confidential? Yes; we don't actually track confidential issues in GitLab Insights. And Sid's question: can we get merge requests per person for the whole company? Yeah, I expect we could do that at a top level, and we could include each project as well. We could do something about that; that sounds useful.

Yeah, it's really important that we keep shipping as a company, so we need to do both. By focusing on missed deliverables, a knee-jerk reaction might be: well, if missed deliverables are bad, I'm just going to promise less. You can always make your deliverables by only promising 20% of what you're capable of, and work tends to fill the time that is available to it. So we should really make sure that velocity, cycle time, and the number of features shipped are very prominent in any trade-off we make. Okay, thank you, Sid. We'll turn that into an issue and implement it as you suggested. Okay, that's it from me. Back to you, Mek. Thank you, Mark. And as always, the team is open to suggestions, so please ping us in #quality. If you want to see anything in these charts, any feedback is more than welcome.

Moving on. We've also initiated the test planning process for CE and EE, so I'm going to turn it over to Mark Lapierre to touch on the new test plan format that we have implemented. Mark? Thanks, Mek. So yes, as Mek mentioned, we started rolling out test plans alongside development of 11.3. They're a place for focused discussion of quality and risk management, with the result being test requirements: a summary of the tests we expect to see across all levels of testing. There are links in the slide to the discussion of the creation of the process and to some actual test plans. Next slide, please.

When writing test plans, it's easy to get bogged down in detail, and the results can be less than ideal, especially if there's too much focus on the details of the tests, which often comes at the expense of the value the tests provide. We wanted something more efficient that produced better results. We've chosen the ACC framework, which was developed at Google as part of an attempt to get testers to write test plans in 10 minutes. From what I've read, they never actually hit that 10-minute limit, but in trying to, they identified the core aspects of effective test plans: attributes, the qualities that the product should have; components, the major parts of the product; and capabilities, the behaviors that link components and attributes. There's an example there, and if you look at the bottom of the slide, there's a matrix that aligns attributes and components. What you don't see are the capabilities at the intersections of attributes and components, but laying it out like that encourages us to think about the product and what we're testing from different perspectives, and it leads us to quickly define the important aspects of the change to be tested in a way that focuses on the value the change provides. So when we write tests, they're valuable tests.
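To make the attributes, components, and capabilities idea concrete, here is a small sketch of an ACC matrix as a data structure. The entries echo the merge request review example Mark walks through next; the attribute and component lists, and the second capability, are illustrative assumptions, not a prescribed format.

```python
# Sketch: representing an ACC (Attributes, Components, Capabilities) test
# plan as a matrix keyed by (attribute, component). Entries mirror the
# merge-request review example discussed in this update; the second
# capability is a made-up illustration.
ATTRIBUTES = ["Reliable", "Secure", "Responsive"]
COMPONENTS = ["Merge Requests", "Issues", "CI/CD"]

# Capabilities live at the intersection of an attribute and a component.
capabilities = {
    ("Reliable", "Merge Requests"): [
        "Reviews are persisted for the user, not just the browser session",
    ],
    ("Responsive", "Merge Requests"): [
        "Batched comments submit without blocking the reviewer",  # assumption
    ],
}

def print_matrix():
    """Render the attribute x component grid with capability counts."""
    header = "".join(f"{c:>18}" for c in COMPONENTS)
    print(f"{'':>12}{header}")
    for attr in ATTRIBUTES:
        row = "".join(
            f"{len(capabilities.get((attr, c), [])):>18}" for c in COMPONENTS)
        print(f"{attr:>12}{row}")

print_matrix()
```

The counts at each intersection are what the slide's matrix visualizes; each listed capability then becomes the seed for tests at one or more levels.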
And the capabilities that we come up with guide the construction of tests. They're high-level descriptions of the behavior that the tests will verify, and the tests are intended to be at multiple levels: unit, integration, system, end-to-end. In the example there, I've got a recent feature proposal, which was to allow comments on merge requests to be grouped together in batches. We want the feature to be reliable, so that's one of the attributes it will have, and the major component of the feature is merge requests. One of the capabilities at that intersection (there will be many other capabilities at other intersections) is that reviews are persisted for the user, not just for the browser or the session. That leads to a few different test cases: some unit tests will verify how persistence works, and there will be an end-to-end test, for example, that verifies that behavior across browser sessions. So that's it for the overview. The process is new, so if you have any questions or feedback, please get in contact with the quality team. And that's it for me. Back to you, Mek. Thank you, Mark.

Moving on. These are the top challenges, which I will also open up for discussion. The first challenge we're seeing is community merge requests being picked up too late. I've created an issue to discuss this, linked below. I think we need to define a better process and notification mechanism, and also load-balance across people, so that if somebody leaves on vacation, we know we have somebody else who can take on that area and move those contributions along. The second one on the list: we have around 350 new issues every week posted to CE and EE, which adds up to roughly 1,500 issues each month. We really need help with basic triaging, and we've implemented triage mechanisms to fan this out to the rest of the engineering team. There's an issue to discuss this further here, but the bottom line is, we need everybody's help; the quality team can't triage 1,500 issues every month on its own. We need to scale it out sustainably and horizontally across all of engineering. The third one is related to labels as well: our metrics are only as accurate as our label hygiene, so we rely on people to help us ensure the labels are correct. If you're getting pinged by our GitLab bot, please help out. Number four, we currently have some open S1 bugs that are roughly one to two months old, which is fine, but I wanted to highlight that we currently have four. If you are the engineering manager who has these bugs on your radar, please work to resolve them within the timeframe; we do not want S1 issues lingering for too long. The last one, I think we've partly mitigated: we won't be able to deliver GitLab QA runs as part of the review apps implementation, so we're going to pick this up as part of our Q4 OKRs. We're also pinging Marin's team, the distribution team, to help us move this forward and deliver it as soon as we can.

And with that, I will open it up for questions and discussion, and I'll highlight the two top concerns from the quality team. The floor is open. The top concern is that community MRs are not being picked up. This has resulted in some comments on Twitter. We have addressed the comment, been transparent, and shown him that we're working on it, but I welcome any suggestions from the team on this challenge. We have an issue out there.
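One way the tighter notification mechanism raised here (and the random-assignment idea suggested in the discussion that follows) could be implemented is sketched below, assuming python-gitlab; the 30-day threshold, the "Community contribution" label name, and the reviewer pool are all assumptions.

```python
# Sketch: flag community merge requests with no activity for 30 days and
# hand each to a randomly chosen engineer. python-gitlab assumed; the
# threshold, the "Community contribution" label, and the REVIEWERS pool
# of user IDs are illustrative assumptions.
import random
from datetime import datetime, timedelta, timezone
import gitlab

STALE_AFTER = timedelta(days=30)
REVIEWERS = [1234, 5678]  # hypothetical GitLab user IDs

gl = gitlab.Gitlab("https://gitlab.com", private_token="<token>")
project = gl.projects.get("gitlab-org/gitlab-ce")

def parse_ts(ts):
    """GitLab returns timestamps like '2018-09-01T12:00:00.000Z'."""
    return datetime.fromisoformat(ts.replace("Z", "+00:00"))

cutoff = datetime.now(timezone.utc) - STALE_AFTER
stale = [mr for mr in project.mergerequests.list(
             state="opened", labels=["Community contribution"], all=True)
         if parse_ts(mr.updated_at) < cutoff]

for mr in stale:
    mr.assignee_id = random.choice(REVIEWERS)
    mr.save()  # persists the new assignee via the API
    print(f"!{mr.iid} last updated {mr.updated_at}: {mr.title}")
```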
This is Dan, by the way. We have an issue out there, I believe in the issue triage repository, about assigning any issue that's older than a certain number of days to a random developer or quality team member. That's at least a start, I think, because it would be useful to have our bot hand these issues to random people so they don't fall by the wayside.

So, Mek, first of all, I think this is a great FGU, with lots of links in it and a good overview of the work; well done to you and Mark. I wonder a bit about the community MRs: do you think we need more merge request coaches, or do you think it's healthier to have all of our developers participate from time to time and do what Dan just suggested, assign it to someone? I think we need both, Sid. Merge request coaches can sometimes be a single point of failure, where somebody goes on vacation, so we need to make sure we have more than one or even three people on one area. And we would love to have other people contribute as well. All of engineering can contribute, and everybody in engineering should be a champion of that value. Rotating in new members who haven't coached before and having them participate will also help them with their career path, as they get to teach other people.

David here. Thanks, Mek, for raising this one. I had some comments on the issue you pointed to in the FGU. I think these are great ideas. I especially appreciate the point that this shouldn't be something only the quality team has to deal with; I like the idea of expanding it to the whole engineering team in the company. I do have a question, though: even if we expand the assignments to other engineers, how can we make sure we still have a regular cadence at which the issues are picked up? Everyone is busy, but we want to make sure we respond to these community requests, at least to say, hey, we've seen your contribution, we appreciate it, we're going to work on it. Thanks, David. Yeah, we have a mechanism for that, and I think it's already linked in the issue. It's currently at six months, right, Mark? We want to tighten it up to a month or even shorter, so that if there's an MR with no activity on it, we pick it up and notify the team. Yeah, we really need to reduce that threshold, because by that time it's either quite out of date or the community contributor has forgotten about it. So yeah, we definitely need to reduce that.

John has a comment, maybe around using NLP. We do have something that our contributors have suggested. If you click on the link, there's a project out there by a contributor that uses TensorFlow, trained on the issues we've already triaged. He implemented a machine learning service that can suggest labels for issues. The team is currently working with it, and we're looking to validate whether it's accurate, but from our preliminary evaluation, it's roughly 70% accurate. So that's another avenue we can pull in, and it's a community contribution, which is even better.

Are there any other questions or discussions? What's the average experience for people contributing to GitLab? I contribute to GitLab, but I've heard people rave about the experience, and I don't hear a lot of negative stories. It sounds pretty dire, though, if you've made a merge request and you don't hear anything for a month.
That's a very bad experience. What is the average experience for people? I don't think we have that number off the bat. I think we should have some metrics around the average time to close for an MR that was not submitted by our team. But this is one of our lowlights: the comment on Twitter wasn't really helpful. He created an MR, and it sat for close to two months without anybody looking at it. We actually added the labels, but I think some of our team members were on vacation, which is also why I said we need more than one person looking at an area. So multiple things need to go into fixing this. But for that specific scenario, it was two months until the MR got picked up by somebody.

At GitLab, we always say product schedules the teams, because we've tried the alternative: having a boss is not always fun, but having multiple bosses is really bad. If you have product asking you for this, and then you also need to take into account helping community contributions, doing a refactor, et cetera, it shouldn't be the engineers who have to balance that; that's the problem of the product manager. Should we involve product more, so that they have to explicitly assign these community merge requests to the engineering team and say, look, this is going to be part of this release? Because otherwise we get this struggle: what do I work on, the work that the product manager assigned to me, the work that the quality team assigned to me, or something else? We have to have a single person who weighs the priorities, who knows, well, this month we've got this super important customer thing happening, so we have to delay two weeks; but next month, okay, it's normal steady state, I'm going to make sure that every merge request is picked up within a day. What do you think of that?

Yeah, I think that's great. Is it okay if I also add this to the handbook, so that in the prioritization list we include shepherding community contributions? Yes, I figured that should be on there. I just refactored those prioritizations; it should sit together with new features and refactors. This is something that the PMs have to take into account. Got it. I'll take up the change to the handbook and then send it up for review. Thank you.

We have roughly seven minutes left, so I'll ask around again if people have any more questions or discussions. Otherwise I'll end the call, and I'll see you on the team call. Okay, there's no more. Have a great rest of your day, and have a good evening where you are. I'll see you on a team call and at the next FGU. Thank you.