Great, welcome everyone to the Functional Group Update for the build team. I decided to give a shorter update this time compared to the last few.

Continuing from the previous Functional Group Update, where I mentioned the introduction of build team trainings, I wanted to give a short update on how that is going. We have done two trainings so far and they have been a real success. One training was given by Jason and was related to Pivotal Cloud Foundry; more on that a bit later. The other training was about Terraform, given by Ian, and that was also really, really useful. So we are continuing with those trainings: next week we'll have DJ present Helm charts and everything around them, I would say. I will let the team know when that is actually going to happen; I will announce it in our team meeting. But yeah, I would definitely recommend everyone take a look at the recordings we have there.

Another thing that I think is worth mentioning again is that we can now build packages and images on demand. Whether or not you are a developer, if you have access to the GitLab CE/EE or Omnibus repository, you can navigate to a pipeline, click a button, and get a package you can use to test your changes, or someone else's changes for that matter. We didn't receive much feedback on the Docker image specifically, so that would be very much appreciated from any engineer: if you are doing any sort of development, please try it out and tell us how we can improve it further. The few improvements that were requested we have already implemented, so hopefully we can iterate on that further.

I mentioned the team training on Pivotal Cloud Foundry. I'm super happy to say that after a lot of work that has gone into transforming the infrastructure and the releases there, the releases are much simpler than they used to be. We had another team member do a release, so it is no longer the case that only Jason knows how to do this, and we received some great feedback there. I can now say that I expect everyone to be able to do these releases just by following the documentation. That doesn't mean these releases are trivial: they still take two days, which is quite a lot, and they still require some manual steps. But iteration, right? We are going to automate more steps there, and at least the parts we can do ourselves we will try to speed up.

Another thing that I forgot to mention in the last Functional Group Update is that we now have a full overview of the licenses of all the libraries we are shipping in the package. Fun fact: we have listed 1,209 libraries with licenses inside the package. That includes all the Ruby gems and all the Node packages we have; everything is now noted down and has a license attached to it. One thing that slipped through is that we are building some Go binaries in there and we forgot to include their dependencies and their licenses, so there are some improvements to be done there.

Together with the product team, of course, who led this effort, we released official Helm charts. DJ did some amazing work there, we received a lot of great feedback from the community, and we are iterating further on that.
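To make the on-demand builds I mentioned a bit more concrete: the button in the pipeline UI just plays a manual job, so the same thing can be scripted against the GitLab jobs API. Below is a minimal sketch; the project path, pipeline ID, and job name are placeholders for illustration rather than our exact setup.

```python
# Minimal sketch: play a manual package-build job in a pipeline via the
# GitLab API. Project path, pipeline ID, and job name are placeholders.
import os
import urllib.parse

import requests

GITLAB = "https://gitlab.com/api/v4"
TOKEN = os.environ["GITLAB_TOKEN"]                # a personal access token
PROJECT = urllib.parse.quote_plus("gitlab-org/omnibus-gitlab")  # assumed project path
PIPELINE_ID = 12345                               # the pipeline you navigated to
JOB_NAME = "Trigger:package"                      # hypothetical manual job name

headers = {"PRIVATE-TOKEN": TOKEN}

# List the manual jobs in the pipeline, then "play" the one we want,
# which is exactly what the button in the UI does.
jobs = requests.get(
    f"{GITLAB}/projects/{PROJECT}/pipelines/{PIPELINE_ID}/jobs",
    headers=headers,
    params={"scope[]": "manual"},
).json()

for job in jobs:
    if job["name"] == JOB_NAME:
        requests.post(f"{GITLAB}/projects/{PROJECT}/jobs/{job['id']}/play", headers=headers)
        print(f"Started {JOB_NAME} in pipeline {PIPELINE_ID}")
        break
```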
While working on PGHA, we also shipped PgBouncer. PgBouncer is a connection pooler for the database and only one small part, but still a very important part for GitLab.com, which can now keep using our package for PGHA, or at least that part of it. I also included a list of the merge requests that got merged since the last Functional Group Update. A lot of maintenance tasks were done and a lot of small bugs got fixed; check it out if you're curious.

There are, however, a couple of concerns that popped up this month. We knew that PGHA was complex; we have known that since we started, and the fact that GitLab.com is still trying to figure out how to do this properly is indicative of how complex it is, especially in the current architecture of our application. Because that is still a bit of a moving target, we keep taking two steps forward and one step back. I just wanted to say that this is a bit more complex than we even imagined. Running it is hard, shipping it is a completely different ballpark, and making it simple for our users as well is a challenge. So we're continuing to work on this with the production team, and we'll try to split it up as much as possible so we can ship some parts in each upcoming release.

Another concern is that even with the license check we still have license offenses, I would say. For example, we've been shipping Python for some things we need inside the application, and one of Python's dependencies was using a license that conflicts with Python itself. Unfortunately, one of our customers had to ask us about this. We reacted quickly and fixed it, but it proves the point that we need to automate more, and we are going to work on failing the build automatically if we didn't specifically whitelist a dependency. That's going to create a bit of a problem for us, but I would rather revert a feature than ship something that could potentially create problems for us.

The number of issues in our Omnibus GitLab repository is increasing. We're trying to tackle it as best we can, but with all the things we are working on, it's getting increasingly difficult to keep up. What's happening right now is that everyone in the team is pitching in, answering an issue here and there as time permits, but when we close two issues, five pop up. That's awesome, by the way; it means people are using GitLab. But we definitely need some help with that. We have an issue where we are discussing how we can be more efficient at this, but so far the number is just going up.

Another thing I want everyone to be aware of is that we are not really testing Mattermost changes anymore; we just cannot keep up. So what happens is we upgrade Mattermost in the package and do a quick check that the package builds. If it builds, we ship it. We depend on the Mattermost team to try out the changes, and for example, after Mattermost releases its version, we ask them to test our RC package. Sometimes even that doesn't help, and the number of issues is increasing there as well, with Mattermost specifically. We are trying to figure out how to tackle this efficiently and are asking Mattermost for help, but I just wanted everyone to be aware of this fact.
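Coming back to the license check for a moment: failing the build on a dependency we haven't explicitly whitelisted doesn't need much machinery. Here is a minimal sketch, assuming we can dump the dependency licenses into a JSON file that maps names to license identifiers; the manifest format and the whitelist contents are assumptions for illustration, not our actual tooling.

```python
# Minimal sketch of a license whitelist check. The manifest format and
# whitelist contents are assumptions for illustration only.
import json
import sys

WHITELIST = {"MIT", "Apache-2.0", "BSD-2-Clause", "BSD-3-Clause", "ISC"}

def check(manifest_path: str) -> int:
    # The manifest is assumed to map dependency name -> license identifier.
    with open(manifest_path) as f:
        licenses = json.load(f)

    offenders = {name: lic for name, lic in licenses.items() if lic not in WHITELIST}
    if offenders:
        for name, lic in sorted(offenders.items()):
            print(f"{name}: {lic} is not whitelisted")
        return 1  # a non-zero exit code fails the build
    print(f"All {len(licenses)} dependencies use whitelisted licenses")
    return 0

if __name__ == "__main__":
    sys.exit(check(sys.argv[1]))
```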
We do need some help here, though, and specifically we are stuck with automating Azure images. I'm not sugarcoating anything, we are literally stuck; we don't know how to move forward. I've asked for some contacts within Azure to help us out with this specific problem. The issue is that we build everything, the machine boots up, and we can SSH into it, but for some reason the Azure panel just says the machine has a problem, so the panel is reporting something that we cannot actually see. Baloo pinged the community and various places where Azure, let's say, lives, but we have received no feedback so far.

The next thing is not specifically a call for help, but I would like to inform everyone that we have documentation on how the Omnibus GitLab package is built and how the infrastructure looks. It would be really awesome if you could take a look at it and see if it makes sense. I hope everything makes sense, but I would appreciate it if we can improve this further.

And finally, GitLab QA, again, is an awesome project. I urge you to check it out and to help get this project into our daily workflow, because GitLab QA has already saved us a couple of times this release. GitLab QA tests the installation of CE and EE, the upgrade from the previous version to the current version we want to ship, and the upgrade from CE to EE, and it also does a bunch of feature tests where it clicks around the GitLab interface; it caught some regressions there as well. If everyone pitches in and we get to a point where this can be part of our merge request pipeline, that would be brilliant; it would save us a lot of time. So check it out, it's a really cool project. And I just want to give a shout out to Grzegorz, who is being a trooper there and powering through basically single-handedly. Awesome work, Grzegorz.

Plans. Well, plans we have a lot of. We are trying to automate as much as possible, so automating all the cloud image releases is scheduled to be completed. How and when, we'll see, but at least we are being ambitious there. We are also looking into building a better infrastructure for our Zeus builds. We currently have a snowflake in our infrastructure: a separate runner in a separate account that only runs this one thing, and we already ran into issues where we had to rebuild it. So Jason is going to work on creating a proper container for us to use within the rest of our infrastructure. We are still working with Terraform and we are going to add more things to it; next up is the AWS Terraform configuration. We currently have GCE there, and it includes the runner by default, so if you use Terraform you can easily boot up both GitLab and the runner within GCE, which is pretty awesome.

We are starting to plan how we are going to tackle some of the maintenance tasks and some of the technical debt issues that keep piling up. The technical debt that is on my mind right now is that the internal cookbook for Omnibus GitLab is very large, and this is creating maintenance problems for us, because we even have some duplication with services that got added last minute. One other thing that's also on my mind is that we are still reactive about the libraries we want to update; for example, we get pinged by outside contributors saying, hey, could you upgrade library X? I would rather see us do that automatically. We are still working on it, trying to figure out how we can make it better.
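On that note, checking whether a pinned library has a newer upstream release is the kind of thing that can be automated fairly cheaply. Here is a minimal sketch that queries the public RubyGems API, assuming a hand-maintained dictionary of pinned gem versions; the pins below are made up for illustration and are not our actual versions.

```python
# Minimal sketch: report pinned Ruby gems whose pinned version differs from
# the latest release on RubyGems. The pinned versions are made up.
import requests

PINNED = {
    "rails": "4.2.8",
    "nokogiri": "1.7.2",
}

def latest_version(gem: str) -> str:
    # The RubyGems API returns gem metadata, including the latest version.
    resp = requests.get(f"https://rubygems.org/api/v1/gems/{gem}.json", timeout=10)
    resp.raise_for_status()
    return resp.json()["version"]

for gem, pinned in PINNED.items():
    latest = latest_version(gem)
    if latest != pinned:
        print(f"{gem}: pinned at {pinned}, latest is {latest}")
```

A report like this could run on a schedule and open an issue instead of us waiting for outside contributors to ping us.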
And last but not least, definitely not least, is cloud native. Cloud native is a hot topic for everyone, I guess, but it means a lot of new work, and it also means a lot more work with more teams, which is awesome. We are still at the very beginning of it. We are talking with production, and we will be talking with everyone who needs to be involved, because we actually want to create a single-click enterprise GitLab installation. Basically, you would be able to go to AWS and click a button to install GitLab in an HA configuration, which would be brilliant. Again, this is a moonshot for now, but it is reachable if we start working on it. We are also doing a container per service for this, I would even say, third installation option. So we will have our Omnibus packages like we currently have, we will have our all-in-one Docker image, but we will also have cloud native Docker images, which I know will make our production team very happy. And we are, of course, working on getting the charts we released completely production ready, meaning we want to provide all the documentation for backups, upgrades, and everything else you need to do to maintain your GitLab.

Okay, yeah, thank you for listening. I'll take some questions now.

Yeah, dependencies, we have a lot more by the way, but let's keep it at 1,200. Do we have any kind of premier support with Azure, asks Jim? I think we do, but only related to the infrastructure as far as I'm aware. I don't really know whether they would be able to help us with this, because this is a completely different support contract, I would say. Jim says he can't get to that Azure issue. I'm not sure why that issue is confidential, actually; I'll have to check. I'm not sure whether we mentioned some project IDs there or something, but I'll check, sanitize it if necessary, and open it up. Yeah, thanks for noticing that, Jim.

Any other questions? Okay, silence. Again, thank you for listening and see you at the team meeting. Bye.