Hello everyone, and good morning, good afternoon, or perhaps even good evening. Welcome to this month's frontend functional group update. I'll dive in directly; if anyone has questions, please post them in the chat or just jump in. Let's get started.

So, what can I tell you about what's happening in the frontend team? The big thing, already teased and announced last time, is the team structure update. Since last time we have two teams in the frontend department: the Discussion frontend team, led by Andre, and a team covering Monitoring, Distribution, and Packaging, led by Clement. The rest of the team stays flat for the moment, but the target is really to scale in that area so that we have more teams in the near future as well. What we have already done in the last release cycle, and also in this one, is that these team managers are now taking over all the estimations, the planning, and so on, together with our domain experts, the assigned developers who are currently not in a specific team: they look at issues, estimate them, and help with the whole planning. I'm really happy about these changes; it's a big, nice change that is currently working really well.

The plan for scaling is to have everything structured in sub-teams, growing both vertically, meaning more areas and more members per team, and horizontally, meaning more teams in the near future. The hiring goal for this quarter is four, and another person who was already hired in Q2 is starting on the 15th of August. The aim is really to have one more team and two more hires for the rest of the team. And our pipeline, something we are very happy about, is really full: without any promotion at all, we had around 300 applicants in about one and a half to two weeks.
We are working through them now, and thanks a lot to Nadja for all her help with that.

Something I also want to share this time is our knowledge sharing, and that is something you really can't overestimate. I think it is helping us a lot, especially in the hiring area. The frontend team, and this is something I'm very proud of, is writing a lot of blog posts, about frontend topics but also about broader things like culture. We also have a lot of people on the frontend team giving talks: just in roughly the last two months, Philippa, Fatih, Clement, Vinny, Lukas, and myself have given talks around the world, so it's really easy to find a frontend team member near you. The big thing about it is that it's great to go out there and share what we have learned; personally, I've learned a lot in the past from other people who shared their experiences. It gives you a huge motivation boost to meet people who are using our software, to hear what they like and perhaps what they don't like, and it also helps you to reflect and step out of the daily routine sometimes. And I can tell you, you get a lot of love back from the community; most of the time people are really happy to tell you how they're using it. The big plus, as I already said, is in hiring: I've heard a ton of times in our interviews that candidates had read this or that blog post, or had seen Philippa at a conference, or Clement talking somewhere, so that definitely helps. Just sharing that with the other teams.

Our Q3 OKRs are focused on the one hand on performance: we are currently defining the ideas, targeting ten overall performance improvements on the frontend side, five improvements for the Discussion team, and ten UI components integrated by the MDP team. So that means really boosting performance and especially productivity.
We think that the reusable components I'm getting to now will really boost our productivity across the whole pipeline. The major piece that was finished in the last week is the Bootstrap 4 conversion. What does this mean? We can now use Bootstrap Vue, a base Vue component library, because we don't want to reinvent the wheel and start building little buttons, alert boxes, and so on ourselves. We really want to reuse the library and concentrate on building GitLab features, as mentioned last time.

There were some obstacles during the Bootstrap 4 upgrade, as it was a really huge framework upgrade touching, I think, 700 to 800 different routes. We have learned a lot from it, and for the next time we do such an upgrade we want to involve more people from other engineering teams and run it on a more public instance, so that we can test and see things with more eyes on it. The link to the retrospective notes is here as well, so you can take a deeper look.

The next big step is to get to a first iteration of our GitLab UI components. The idea is to take three of the Bootstrap Vue components and three of our already existing Vue components (I think we have around 40 or 50 that we've built ourselves) and bring them to the next level: self-documenting, fully tested with a visual regression library, and so on, with a really nice workflow. The idea is really to encapsulate the Bootstrap Vue components. Why, you might ask? Because we want an abstraction layer where we can set our own defaults, and where, if it becomes necessary at some point, we can take, for example, the modal that comes from Bootstrap Vue and exchange it for another modal library or another modal component.
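To make the abstraction-layer idea concrete, here is a minimal sketch; `wrapComponent`, `bModal`, and `glModal` are illustrative names, not GitLab UI's actual code. A wrapper overlays our own prop defaults on a base component's options, so swapping out the underlying library later only touches the wrapper:

```javascript
// Sketch of the abstraction layer, assuming a tiny helper that overlays
// our own prop defaults on a base component's options.
function wrapComponent(baseOptions, ourDefaults) {
  const props = {};
  for (const [name, def] of Object.entries(baseOptions.props || {})) {
    // keep the base prop definition, but apply our own default where we have one
    props[name] = name in ourDefaults ? { ...def, default: ourDefaults[name] } : def;
  }
  return { ...baseOptions, props };
}

// pretend this is the modal component shipped by the base library
const bModal = {
  name: 'BModal',
  props: {
    fade: { type: Boolean, default: true },
    size: { type: String, default: 'md' },
  },
};

// our wrapper: same API, but fade animations are off by default
const glModal = { ...wrapComponent(bModal, { fade: false }), name: 'GlModal' };
```

Because consumers would only ever import the wrapper, replacing the base modal with another implementation wouldn't ripple through the code base.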
That way we don't get into the situation from the past where we suddenly had a couple of different drop-down libraries; we are able to mitigate such changes and have a nice way of architecting these things. So the idea is to encapsulate the components, expose them, self-document them, and then publish them as a GitLab UI npm package. That npm package is then imported into GitLab CE and EE, and the components can be used there. We are also working very closely with UX to find a nice merge between design.gitlab.com and our new dynamic components, so that every developer on the frontend team can go and have a look: okay, this is a progress bar, take this code, these are the attributes, that's what you can use; you see it directly, you can play around with it, you paste it into your code, you have snippets for it, and off you go. This will definitely boost our productivity in the whole development workflow. If you want to follow along, the link Clement has posted is the epic about the UI components; feel free to get in touch with us. Clement is leading the whole UI component effort, but do come around in the Slack channel and ask questions if something comes up later.

And now I'm going to hand over to Andre, who has driven and followed up on one of our biggest topics of the last week: the merge of the merge request refactoring. So Andre, please take it from here.

Thanks, Tim. Hi everyone. I'm going to try to summarize the current status of the merge request refactoring. First of all, let's look at why we did it. We've covered this recently, but I want everyone to be on the same page. The overall goal is to improve the performance of the merge request page.
We were having slow pages on large merge requests, and the old architecture was built with Haml; we wanted a more modern stack behind it, one that allows us to create more interactions and more manageable updates to the UI, which Vue provides. We also want the ability to build more complex features with Vue with less effort; one example is the batch comments / reviews feature that we're going to build right after this refactoring. And beyond that, having these very important components built with Vue allows us to reuse them anywhere we need them, which is also a very good reason for doing this.

Now, one thing we can already share that we've learned, and we already shared this at the last FGU, Tim mentioned it, is the big lesson that we're never going to approach a refactoring this way again. This might sound obvious now, but going through it the way it evolved, we reached a stage where the merge request we were working on had 559 commits and 200-plus files changed, and it stayed open for four months and 13 days. This means we were dealing with daily conflicts with master, and reviewing such a bulk of work was only possible in turns, which made review harder.

Some of the things we've learned, that we know we can do better and want to share with you all: whenever we tackle such a big refactoring in the future, we want to make sure it's possible to continuously merge master from the start. That means developing the new code base in parallel with the old one, establishing the new routes while developing, and then hot-swapping when done. A very key factor to allow this will be feature flags, which let us slowly roll the change out to the GitLab team first. We would catch the majority of the regressions, kind of like the Pareto principle, 80% of them with 20% of the effort, before making the feature generally available.
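As a rough illustration of the feature-flag rollout described above (the flag name, group names, and the `isEnabled` helper are all hypothetical, not GitLab's actual implementation):

```javascript
// Hedged sketch of a feature-flag rollout: the new code path is enabled only
// for selected groups (e.g. the GitLab team) until the flag is flipped to
// general availability.
function isEnabled(flagName, user, flags) {
  const flag = flags[flagName];
  if (!flag) return false;             // unknown flags default to off
  if (flag.enabledForAll) return true; // general availability
  return flag.groups.some((group) => user.groups.includes(group));
}

const flags = {
  mr_refactor: { enabledForAll: false, groups: ['gitlab-team'] },
};

// internal users see the new merge request page, everyone else the old one
isEnabled('mr_refactor', { groups: ['gitlab-team'] }, flags); // → true
isEnabled('mr_refactor', { groups: [] }, flags);              // → false
```

Flipping `enabledForAll` to `true` would then be the hot-swap moment, after which the deprecated path can be deleted.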
After that has been prepared, we will remove the deprecated code.

Now we want to share the results so far, what we've seen with the performance tools we've been using, and I'll tell you the whole story. First of all, the most noticeable change is that since we moved the bulk of the contents of the page, the tabs, to be loaded asynchronously later, the first visual change on the page has become significantly faster. This means we've deferred the data loading until after the first paint, and the SiteSpeed tool we're using to track this noticed it instantly when we deployed: around a 65% improvement in the first visual change on the page. This is important for being able to read the description quickly. Of course, we know that with these pages users go to the description of the merge request but also to the discussions, so there are two heavy weights there. The other metric that also improved, slightly less but still significantly, was the fully loaded page, the way SiteSpeed measures it.

Moving on: here we have a dashboard showing the overall performance metrics for one particular merge request page, a significant merge request that we had received several complaints about in the past. With Haml it was taking over 15 seconds to load, and we're now getting a first visual change at three seconds, which is a significant improvement. What we're seeing, though, is that we still have a lot of performance issues after the page load, specifically for medium to large merge requests; we're still tackling those, and I'll cover them right after this. You can see it there on the right-hand side, relative to last month: pretty much all the significant metrics for this page improved for the first impression, which is incredibly important for the perceived performance of this page.
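The tab deferral described above can be sketched roughly like this; a manually flushed queue stands in for the browser's rendering cycle, and all names are illustrative rather than GitLab's actual code:

```javascript
// The cheap part of the page (the description) renders immediately, while
// heavy tab work (diffs, discussions) is queued and only executed later.
// In the real page the flush would happen after the first paint.
function createDeferredLoader() {
  const queue = [];
  return {
    defer(task) { queue.push(task); },                 // schedule heavy work
    flush() { while (queue.length) queue.shift()(); }, // run it after first paint
  };
}

const rendered = [];
const loader = createDeferredLoader();

rendered.push('description');                       // paint this right away
loader.defer(() => rendered.push('diffs'));         // heavy: defer
loader.defer(() => rendered.push('discussions'));   // heavy: defer

// ...first visual change happens here; the user already sees the description...
loader.flush();
// rendered → ['description', 'diffs', 'discussions']
```

The first visual change no longer waits on the heavy tab data, which is exactly what the first-visual-change metric rewards.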
Back to the slides. Lucas is also tracking the evolution of the document size on the merge request page, and he shares the evolution here: we can see how the load in kilobytes is becoming leaner and leaner, which is what we want. But this is only part of the story, and we're completely aware of that; we are focusing on improving not only these first-paint metrics but the overall performance of the page.

Like I said, there is still a lot of work to be done. We are aware that once we released, there was unfortunately quite a number of issues, particularly around performance, specifically with large merge requests. In several situations the experience is not yet acceptable, and we've been labeling the issues accordingly. Specifically with larger merge requests, there are situations where the page would lock the UI because of the complexity of the DOM. We have been flagging all the feature regressions that can be traced to the refactoring, and we're doing our best to prioritize the fixes to go into the 11.1 final version. The label we want you all to be aware of: we are labeling every issue regarding merge request refactoring regressions with "MR refactor". So far we've had nine S1s, which are blocker-severity regressions; all but one are closed, and we're working on the last one as we speak.

And just to give you a highlight of how focused we are on performance: we are tracking it specifically, making incremental changes to the architecture of the Vue components to make them even faster. If you want, you can follow the board's evolution at the link. And looking ahead, we have a lot in store and a lot planned to keep iterating and keep improving the results on this page.
First, we already have a planned upcoming feature that is based on, and enabled by, this refactoring: the first iteration of batch comments for 11.2. That's the issue link. But we also have a bunch of performance improvements lined up; I'll just cover a few. For example, we want to defer the rendering of items further down the page: we render the first initial comments and the first initial diffs right away, but we can defer rendering the items further down the list to a later moment. We are also considering virtualized scrolling in Vue, so that at any given moment we wouldn't have the entire set of elements in the DOM; a lighter DOM benefits everything from scrolling to interaction responsiveness, such as adding new comments. We also want to optimize the libraries that have been causing issues with rendering times and locking the reflow of the page; specifically, the auto-sizing text area code is causing a bit of trouble on that front. And one of the last improvements we want to make is to transition away from using JavaScript to position things on the page, which until now was necessary to implement the sticky behavior; we can refactor that into CSS, and that's also on our list. If you want to read the performance analysis we've been doing in a bit more detail, there's a write-up document linked at the bottom of this slide; please read it.

And now, are there any questions, Tim? Yeah, thanks Andre for the summary. Any questions? Next time I will bring some elevator music. Then I will try to add one more thing, which is really about the performance side.
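As an aside, the virtualized-scrolling idea mentioned a moment ago boils down to computing, from the scroll position, which small slice of items needs to exist in the DOM at all. A minimal sketch, where the `visibleRange` helper, the fixed row height, and the overscan of 3 extra rows per side are illustrative assumptions, not GitLab's implementation:

```javascript
// From the scroll offset, compute the slice of items to actually mount.
function visibleRange(scrollTop, viewportHeight, rowHeight, totalItems, overscan = 3) {
  const first = Math.max(0, Math.floor(scrollTop / rowHeight) - overscan);
  const visibleCount = Math.ceil(viewportHeight / rowHeight) + 2 * overscan;
  const last = Math.min(totalItems - 1, first + visibleCount - 1);
  return { first, last };
}

// 10,000 comments at 40px each in a 600px viewport, scrolled to 4000px:
visibleRange(4000, 600, 40, 10000); // → { first: 97, last: 117 }
// only 21 rows are mounted instead of 10,000
```

Only the `last - first + 1` mounted rows contribute to reflow cost, which is why this helps interaction latency and not just load time.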
It's fantastic for us to have brought everything to this big step of having it in Vue now, and especially to be able to reuse things between issues, merge requests, and even the Web IDE; there are a couple of components that are already reused, and each improvement we make will also benefit all those areas. That's a big step for us, and with the GitLab UI component library coming in, it's really awesome to see; I'm looking forward to working with those things.

Tim, I'll just address the question that Philippa has in the chat. She asks: the performance in the first release candidate, before the regressions were fixed, had actually decreased; she's curious whether we have metrics for that and whether we plan to make them public. So, the metrics have been gathered, we will look at SiteSpeed for that, and we do plan to have a more concrete summary after all of this is done. We will of course share everything with the public; we are running a retrospective on this as well, so every metric that we have, you will eventually hear about it. Watch this space. We will most probably also make a blog post about our findings. Yeah, exactly, and share that with the Vue world.

I would also like us to share the number of regressions and why they were introduced, because I thought this was a great update, but I don't think it actually reflects the state of what we have right now. We have more than 20 P1s, not just nine S1s. So I'm wondering, do we plan to actually share publicly everything that went wrong and what you can learn from that? Definitely. We have that, first of all, on a board; we also have the retrospective, and we've even had close talks, for example with Mek, about how we can get better test coverage on those things and especially bring changes like this to the public in smaller steps.
And really, because this was a topic that was developed over such a long time, we definitely have things we can improve and learn from on that side as well. Thanks.

Cool, then I'm closing this one, and I really want to thank everyone on the frontend team. Thanks everyone, and especially Fatih, for pulling through this merge request refactoring. Thanks, Andre, for your first FGU update with me, and thanks to the rest of you for jumping in and helping on the merge request refactoring and all the other topics. Everyone, have a nice day. See you soon in the team call.