Great, we'll go ahead and get started. Thanks to everybody for joining the Quality Team Functional Group Update. These are the agenda items for today; we'll go through the list and end with questions.

First, team updates. We have made one hire: an offer has been accepted for a Senior Automation Engineer, starting on June 18th. He's also part of the Selenium WebDriver project, which I'm very excited about; it means more Selenium contributors and should give us some leeway there as well. We have a strong pipeline and are interviewing four more candidates. Christian has moved over to the Gitaly team, and Robert is now taking on the release coordination role; he is still spending 20% of his time on quality.

I want to call out the releases. We merged a smaller number of MRs compared to 10.8. We're still investigating why that is the case, but the number went down.

Next on the list, the 11.0 release will be one of the most critical releases for us. There are many, many things coming in at the same time, and I wanted to call this out. I think the team is doing a great job keeping production up, but in addition to that we have 350 MRs for RC1 alone, the biggest I've seen in my time here. The GCP migration is happening, the move to Gitaly is also happening, and there's the performance of production. So keep your eyes peeled; this is one of the critical moments for us and for the release.

Accomplishments from our side. The QA task is now a bird's-eye view of everything going on in a release. We tried this out in 10.8 and we're doing it again in 11.0. It's still not ideal and it will not scale; I'll touch on this in the follow-up items in the next slides. Next, we started embedding quality engineers into new features and the test planning process, where we were able to cover GDPR, the GCP migration, and SAML.
We still have a lot to do, but we're starting to embed more people and more test planning early in the game, not towards the end. Also, thanks to Jen-Shin, we have a Haml render helper that helps with separating concerns between CE and EE code in Haml files, and we have CI jobs that check the EE and CE code organization for files and their locations. Last on the list, the Q2 2018 issue bash happened last weekend, thanks to Mark. We're still determining the winners for that.

Things that are in progress. The separation of EE and CE files is tracked in this epic and is roughly 55% done. I think we're almost done with files; next up are Haml and the location of files. We're still working closely with other teams on this, and I'll touch on it in the next slides as well. The next item relates to our OKR to deliver the first iteration of the dashboard. We've been blocked on Looker ETL access, but we're now looking at a backup plan because we already have a V1 prototype up and running.

Release process improvements. We have automated the QA task list for the release, thanks to Mark. This was used in 11.0 RC1, and you can now see all the labels and the names of the merge requests listed instead of a SHA. We will continue to iterate on this for all the RC QAs. We're also looking to lower the barrier to checking out the QA tasks; you can click on the discussion issues there. We want to make this as easy as possible and figure out a better way to solve it, because it will not scale given that we have 350 items in RC1, and that list may grow even faster if we continue to ship at the speed we are.

The GCP migration failover rehearsal: just a reminder, it happens every Tuesday and Thursday, 1300 to 1500 UTC. We work through a QA task and we're continuing to close out the test automation gaps. Thanks to Raymond for that.
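The CE/EE organization check mentioned above can be sketched roughly as follows. This is an illustrative sketch only: the `_ee.rb` naming convention and the paths are hypothetical, just to show the kind of rule such a CI job might enforce over the repository's file listing.

```python
def misplaced_ee_files(paths):
    """Return files that look EE-specific but do not live under ee/.

    The `_ee.rb` suffix convention here is hypothetical; a real check
    would encode whatever naming and location rules the team agreed on.
    """
    return [
        p for p in paths
        if p.endswith("_ee.rb") and not p.startswith("ee/")
    ]
```

A CI job would run a check like this over `git ls-files` output and fail the pipeline if the returned list is non-empty.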
The last item we are working on is the CE and EE issue triage rotation. Currently Mark is handling all the load by himself; we want to fan this out to the rest of the team in the future. Right now we're doing rotations within our team alone, and we're also building a triage helper that will automatically ping the respective engineering managers and product managers based on the labels on an issue.

The roadmap for the team. As I said before, the QA task does not scale and it never will, so we want to enable teams to validate faster, before changes hit master. Going forward, we want to implement review apps for CE and EE. In the future, when you check off a QA task, it should happen in your feature branch, in your review app environment, and not on master. Things that land in master should already be working; we should shift left and test faster. We also want to run the automated regression suite against review apps or feature branch environments. It's optional right now, but we're looking to make it mandatory for every merge request that goes into master. And we continue to close out the test automation gaps; the tracking is in this issue, so feel free to click through and check it out. We also want to make test planning part of feature planning, because we need to ship integration and end-to-end tests, the whole test pyramid, together with a feature if we want to move to true CD and remove all this manual checking on staging and all this ceremony. Last but not least, as I said before, we want to fan out triaging, not only of issues but of merge requests as well, to the whole team. We're considering embedding this in the reaction rotation that Tommy has proposed, or maybe making it a weekly task for the team, but those discussions are still going on. Please feel free to weigh in on the discussions if you're interested.
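The triage helper described above could, in spirit, look something like this. The label names and the handles being pinged are invented for illustration; a real helper would pull the label-to-owner mapping from wherever the team maintains it.

```python
# Hypothetical mapping from group labels to the engineering and product
# managers who should be pinged on matching issues.
LABEL_OWNERS = {
    "group::gitaly": ["@eng-manager-gitaly", "@pm-gitaly"],
    "group::quality": ["@eng-manager-quality", "@pm-quality"],
}


def mentions_for(issue_labels):
    """Collect the handles to ping for an issue's labels, deduplicated
    while preserving order. Labels with no known owner are skipped."""
    handles = []
    for label in issue_labels:
        for handle in LABEL_OWNERS.get(label, []):
            if handle not in handles:
                handles.append(handle)
    return handles
```

The bot would then post a comment such as `"/cc " + " ".join(mentions_for(labels))` on the untriaged issue.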
Challenges. Quality is currently down to 3.5 people and we're still taking on a lot. We're doing all we can right now, and the way to fix this is to meet or exceed our hiring targets. Quality and test automation is currently not involved in all feature planning. This will get better as we meet or exceed our hiring targets and can allocate test automation engineers to feature teams, but that's further down the roadmap; we need to make our hires first. On the EE and CE code organization, the quality team has the big picture, but we're not experts in the backend or frontend areas, so we still need to work closely with the frontend and backend engineers to close this out. Thank you to everybody who has contributed and helped us out. Last on the list, we are reverting to our V1 prototype while waiting for the Looker ETL dashboards. It's built on Chart.js, and we have a good feeling we can finish it within Q2.

Questions. John, I see you have one. "Are these best practices that we believe should be done for customers? Could we make it part of our Auto DevOps?" Are you talking about embedding quality teams in feature teams? Which part? "I was talking about the review apps." Right. "Could we make that a standard part of our Auto DevOps process, so that we spin up a review branch as part of DevOps and have customers test in a similar fashion, to help them shift left?" Yes, definitely. These are called feature environments, among other names, at other companies. So yes, I believe so, because before things get into master, in the true CD world, you need to make sure everything is green. Master should always be green, and testing before master is always a good practice.

Paul: "Do you look at support tickets as a quality metric?" No, currently we do not. We collaborate with the support team on any support requests, but currently we do not.
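The review-app idea discussed in the Q&A can be sketched as a pair of jobs in `.gitlab-ci.yml`. This is a minimal illustration, not the project's actual configuration: the deploy and teardown scripts are placeholders for whatever actually provisions the per-branch environment.

```yaml
# Minimal sketch of a review-app setup; deploy-review-app.sh,
# teardown-review-app.sh, and example.com are hypothetical.
review:
  stage: review
  script:
    - ./deploy-review-app.sh "$CI_COMMIT_REF_SLUG"
  environment:
    name: review/$CI_COMMIT_REF_SLUG
    url: https://$CI_COMMIT_REF_SLUG.example.com
    on_stop: stop_review
  except:
    - master

stop_review:
  stage: review
  script:
    - ./teardown-review-app.sh "$CI_COMMIT_REF_SLUG"
  when: manual
  environment:
    name: review/$CI_COMMIT_REF_SLUG
    action: stop
```

With something like this, every branch gets its own environment to check off QA tasks against, and the automated regression suite can target the environment URL before the merge request ever reaches master.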
"Do you have any suggestions there you'd like to give us, or any concerns you've seen in support?" "No, I'm just wondering about best practices from a quality perspective. Typically quality issues will result in support tickets, so I was wondering if you look at trended data or anything like that to get a gauge on whether quality is getting better or worse." Great question. We are doing that, not for support tickets, but for our issue tracker; that's the dashboard I mentioned earlier, and we want to track that. Correlating with support tickets is definitely something I think would be great if we could do. I'll reach out to the support team to see if there's anything we can do to help by looking at the support tickets. One thing to note: the numbers may trick us. We've seen a trend of five support tickets duplicating one issue tracker ticket, so looking at the numbers alone might not tell the whole truth; the same problem may be reported by multiple customers but boil down to just one issue in our bug tracker. Just a heads up on that caveat.

Anybody else? Okay, great. I'll give it five more seconds and then end the call and give 20 minutes back to everybody. Great, thanks guys. I'll catch you in the next FGU. Thank you so much.
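The five-to-one duplication mentioned above is exactly why raw ticket counts can mislead as a quality metric. A toy sketch of counting distinct linked issues instead of raw tickets; the field names are invented for illustration:

```python
def distinct_issue_count(tickets):
    """Count unique tracker issues across support tickets.

    Each ticket dict carries a hypothetical 'linked_issue' field; five
    tickets pointing at the same issue count as one underlying bug.
    """
    return len({t["linked_issue"] for t in tickets})


# Three tickets, but only two distinct underlying issues.
tickets = [
    {"id": 1, "linked_issue": "gitlab-ce#12345"},
    {"id": 2, "linked_issue": "gitlab-ce#12345"},
    {"id": 3, "linked_issue": "gitlab-ce#67890"},
]
```

Trending `distinct_issue_count` over time, rather than the raw ticket volume, would give a truer picture of whether quality is improving.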