 Hello everybody welcome to the engineering functional updates. I just wanted to start by Highlighting what we've gone over this year so far. We started the year with a lot of new people So over the last couple months, we've added 10 new team members to our engineering team in a variety of roles everywhere from production team to back-end front-end so Welcome all especially all the new members and everybody's sort of gotten up to speed really quickly It's been really exciting to see. Yes, Jarka did also start. So sorry about that. So we actually have 11 people So let's celebrate all that has been shipped in the last couple releases from 816 all the way to 9.0 And this list is just a sample of the features that we accomplished everything from subgroups To squash and merge and chat ops and all sorts of features I can't even go into all of them, but I think it's been an incredible effort a whole team effort everything from UX UI to To the build team to back in front and all coordinating to get all this through so I I think it's a reflection of Our ability to collaborate and it's just been awesome to see and I think 9.0 was sort of a culminating moment for us It's a company because we were able to ship so much and So many great features that enterprises and other users really need and appreciate So I want to say hats off to everybody for all this is not even a complete list just a sampling We now have a lot of performance data I know one of the biggest complaints about get live is that it's been slow and we all hear that we know we experience it every day We have performance data Both macro and micro benchmarks now to allow us to drill in to figure out exactly what's going on and how to fix it and You know, this is just one graph from the issue the issue tracker Issue dashboard actually that I think Felipe and Jarka Worked on to optimize it and you can see that we started out being all over the map and now we're really fast under a second or so So I think that's been a huge improvement and everybody on the team has learned how to look at this data and really Do something about it. So Overall what we've seen is a significant reduction in the database load on get lab calm And that's going to help our customers as well because anyone's using that scales is going to be able to take advantage of the benefits We've put into the product So one of the big things I want to highlight that you don't necessarily hear about in the blog post But really have impact us today are things like the CI runner long-polling So many know that we run many runners on get lab.com either from our shared runners or the community And what these do they pull our system quite frequently every couple seconds for new work So every time we asked for new they asked for new work We go figure out there's something some build for them to run and then they go do and that's a really expensive operation We're hitting the database all the time and so one of the things we've changed is that now it doesn't Not muted red. Oh, sorry and Now we are able to leverage Redis and make this a lot more efficient. So hats off. This is a long effort that we finally got in I know The second thing is that York's been really working on really hard is supporting reed slaves for the database So before we only had a single database that was Handling all our load and now we actually have two slaves that are actually distributing Reads from the database. So that's taken off Some hot spots in our database and I think it's going to lead to long-term improvements Production team has really helped optimize the fleet. I know a lot of you have talked about get get push and pull is being slow And then now they're much faster because one of the big reasons is we've they've moved around a bunch of things They've converted from the old is your classic environment to the the resource manager of environment, which is the newer newer and and faster Way of allocating machines and that has I think led to a significant improvement in people's experience The developers on back in front and have done a lot to just optimize some of things. So a lot of queries were really inefficient We're starting to tackle them there's more and I know and Excited to see what happens there Get only team finally got off to a great start with the Sierra shipping their first version in 90 really the goal here is To make the get access Fast right now. It's really slow because we're running on a network file system and If you look at the low-level calls that it makes it's really expensive to do a lot of the stuff over get at scale Over a network file system. And so if we bring a lot of these operations closer directly on the local file system And just for example feed a diff back to the client then We can we should see a significant improvement. So I'm really excited to see all the improvements going on there They're really hitting their stride now and and and offloading some of the file system calls into get a leak Overall, there's been a lot of different things that teams have achieved and this is just again sampling of the things that I'm really excited about support is feeling good because they finally have a well-defined process to escalate to Development so before it was a lot of times bug reports would come in and they weren't sure who to go to who to talk to Now they can add labels talk to the product manager and so forth and they've got a great issue board that they review every month Which which bugs are affecting people most and which ones they would like to see addressed And I think that's really been helpful and providing feedback The UX team has got some great recordings. There's a link there about navigation the plan they've they've they've actually recorded a lot of people using git lab and And seeing what kind of struggles that they've encountered and got a great plan on how to actually address each of the points that they're seeing there production team and after the outage in January we Took a step back and focused a lot on just getting the house in order and getting the streaming database backups working so hats off to Pablo and his team and and You'll see a lot of talk about walley, which is the backup for the streaming backup replicator and that's going to a different cloud provider Our test we edge teams done a great job of actually measuring like what is slow in our tests and a lot of our developers Basically run into this every day. They submit a merge request they wait for tests to pass and Something that can hour an hour plus to just get that feedback and now we're actually measuring it so that we can Do something to improve and reduce the time so that improves our productivity one thing that Remy his team pushed out recently is just we noticed that the the parallelization of the test was not that effective and Camille helped there to push some data back so we have this feedback loop So that when we know we have a better way of balancing We know how long each of the specs run we can actually Better allocate how we group different tests together. So I don't think that will be really helpful going for We've got a great community there are submitting merge requests all the time and we've got a lot more coaches now to look at these community merge requests guide them Coach them into completion. So, you know, if you look at 817 We had about 60 or so merge requests come for the community now I know we doubled that and so I think this is a great force multiplier because suddenly we're getting Not just our team internal team, but also you've got a lot of people externally helping as well We've made a lot of process improvements Now the mantra is if you want something talk to product and then no work with the leads to figure out What is the relative priority? Of the features the bugs and so forth. So I think there's there was some confusion before some people were pinging the leads Some people were pinging me Really comes down the product I'll be talking the product too to see to identify the things that I think need to be addressed to But I think in general that's a great single point of contact. It's more work for product, but I think they can handle it we've introduced Trainee and trained the lease managers now before we sort of threw people in the deep end for handling all these release management tasks that have had to happen to deliver A package on the 22nd and I think this has gone really well now. We have People solely responsible. We also have people in the wings Learning from the current release managers that that they can take on So I think this has really helped spread the knowledge of how do we actually get something out the door every month On the one big change we've made in our process This past two months is that we actually are now doing a feature freeze on the seven Then there's something that DZ Proposed and I think it's gone incredibly well because now code gets merged on the seventh and we focus on Stabilizing and fixing all the problems that come out so that we're not Shifting stuff out in the 20th and having no time to iterate and improve and solve the problem. So customers get a better more stable Product and that's really the end goal um one big thing we've changed this year is that we've gotten rid of the whole idea of a performance specialist we've Made it clear that performance is everybody's problem So all those queries and all the slowness that we see Really have to be pushed down to the the owners of the future. So for example, if you're working on Improving like real-time this we really all have to all have to be cognizant of what does that mean? And how do we actually address the scale build and performance issues there? Um concerns going forward, uh, you know, I know we're getting more more questions from customers And we've just got to make sure Our support team is equipped to handle all that there's a lot of new stuff that comes down the pipeline And a lot to learn. So if you've ever been on the support call, it's a lot It's a tough task because you could be asked about anything And all the new features we ship out, you know, a lot of times we don't have time to to learn about it Until customers are running into issues. So it's it's a challenge to keep up with the pace of our development And also a challenge to answer questions that we may never have seen before Our times to The test build and deploy our entire package really need to be shorter. I think that's one thing we saw Early in january we had some struggles and getting stuff out the door in in december Uh, so if we have our tests we down to let's say 10 minutes That could be a huge game changer if we could build a package in a few minutes That would be huge and then be able to deploy it really quickly would help us accelerate our our productivity and Our ability to iterate and improve things We have a lot of work to do on the user experience front and this is not just the ux team But this is kind of in general for everybody to be aware. You know, this is a great blog post by GitLab fan about try to go and import Repos into git lab and this is what he had to configure. This is the error messages This is in his whole experience and things like that We all have to be really confident of because it's it's Something we all touch every day and something we all have to think about how we we can improve And my last concern is just our costs of running git lab.com are growing, you know We're paying more for storage bandwidth runners and all that and We're trying to manage it by at least cutting some things we don't need but long term we're going to need to address this by monetizing git lab and adding features to help Charge for build minutes for example charge for storage and and so forth But we're we're still talking about that. We don't have We still have you know, we're iterating on that plans, but if you have any ideas, there is an issue out there Hiring still hiring for a number of roles. We go to the jobs page and see a lot of these things If you have any questions about any specific role feel free to ask So if we're going forward for q2 again, we're going to still continue to focus on shipping more enterprise related features 9 1 we're slated to push out this We used to call service does its customer voice. It essentially allows customers an email and Email an issue and that issue will be translated into the actual git lab issues We're working hard on disaster recovery related to geo And a number of big customers have been asking for that and it's I think a huge differentiator Another you know see it related to ICD just being able to schedule things so that You can say I want to nightly build for example and a lot of enterprise customers depend on jankins for that today ha high availability Just being able to run git lab And take and and and tolerate one machine going down That's a huge differentiator too because we talked a lot of customers and they struggle with this because they need to If they need to upgrade their their instance or git lab or github instance They have to take downtime and that's something admins are really sensitive about We're really going to focus this quarter on just making git lab.com fast and reliable because that will help everybody else It's been one Complaint we've heard from a lot of customers that get lab a bit slow and we we get that I think giddily is going to really game that make it is going to be the game changer towards that Again, but that's not the only thing obviously there's a lot of work on our end to just improve our database queries the sequel queries that we're making How we do things remove things that aren't really absolutely necessary and so forth So that's really a high level summary. I can't touch on everything that's going on. There's too much going on, but Any questions? Let's see here. Yeah, probably have three databases. Yep Customer voice is e only. I think yeah, it's e premium Our customer is complaining about the speed of e s e p. I think They're not maybe complaining about the features per se, but just generally the whole experience like a lot just their day to day experience And we've heard from another customer a number of our key customers that that is one of the big Things that they struggle with at git lab Is arm helping us reduce our costs? Problem if you're on the call, maybe you can speak to that I believe so but maybe you can give a Yeah, I'm on the call Not arm itself So the moving to arm we use it as an excuse to reshape the whole fleet Before we were with the only 20 hosts that were quite large Now we have a lot more hosts, but are quite smaller So it is helping reducing the cost but not for the moving to arm itself But for the change on the shape of the fleet I don't see any other questions If not, I will see you all on the team call. Thanks very much Thank you