It's a full hour here, so I think we can get started. Welcome everyone to the Functional Group Update for the build team. Let me start with our OKRs. Last time I guided you through what our OKRs are and what we are trying to achieve with them, and now it's a good time to give a bit of a status update. So the first one is delivering Postgres as a highly available setup in Omnibus. With 9.4 we shipped repmgr, and we are working on having the automated failover procedure shipped in 9.5. This is a bit of a stretch goal at the moment. It's maybe doable, but we'll see. We'll try our best to get it into 9.5, and if not, I'm fairly confident that with 10.0 we'll have this completely done. Manual failover, however, is possible even right now with 9.4. So if you figure out how to do this, it will work. The documentation is still a work in progress. There are a lot of discussions happening there on how to improve it. But someone is unmuted, so I'll ask you to mute yourself. I don't know who it is. Thank you. Yes, the documentation is a work in progress. We hope to get it done as soon as possible, obviously, but what's exciting about this OKR is that we might actually be able to reach it. Let's move on to the next one. It's the same one, but a tiny bit different. Yes, our OKR says Postgres has high availability, but we wanted to finish up the whole story of a highly available setup in the package. Okay, I jumped to the fourth one, so I guess I'll speak about that one and go back to the slides before. Yes, the full story of an Omnibus GitLab installation with HA means that we also need to configure the rest of GitLab to be able to run on separate nodes. And that actually made us address some technical debt. We grew the package from a single service a couple of years ago when we started, and we kept adding more services to it, and those services started talking to one another, and yeah, we just kept adding stuff to it.
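For reference, the manual failover that is possible today with repmgr would look roughly like the following. This is a sketch, not the documented procedure: the config path `/etc/repmgr.conf` and the exact cluster layout are assumptions, and you would run these as the user that owns the Postgres data.

```shell
# On the standby you want to promote (sketch; /etc/repmgr.conf is an assumed path).
# 1. Check the cluster state as repmgr sees it.
repmgr -f /etc/repmgr.conf cluster show

# 2. Promote this standby to be the new primary.
repmgr -f /etc/repmgr.conf standby promote

# 3. On any remaining standbys, repoint replication at the new primary.
repmgr -f /etc/repmgr.conf standby follow
```

The automated failover work mentioned above essentially wraps this kind of sequence so it happens without an operator typing the commands.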
So now that we actually have to support some more complex setups, it's not only us who have the problem of shipping and configuring this; it's also the users, who need to configure quite a lot of things to get all these services to run separately. So we went and started refactoring our internal cookbooks to be able to build on top of them. We changed the way we are declaring services, we have some new cookbooks, and some splits in the cookbooks themselves to make maintenance easier for us. We added priority levels for services, and the ultimate goal here is for the user to only use one setting, plus maybe the required parameters. So they would be able to set the required parameters and that's about it; all of the other recommended configuration that we expect, we would handle automatically. Say you want to boot up a node that only has, let's say, Sidekiq running. You should be able to just say that this is a GitLab Rails node, that it should have all the services disabled except for Sidekiq, and run reconfigure. After that, reconfigure would shut off all of the services that are not necessary and only configure Sidekiq. That's the ultimate goal there. Now to go back to slide three, and that is Postgres HA specifically. We are also still working with the production team on this one. Right now, staging is running all of the components from within the package, and Postgres is basically running in HA there. The problem, though, is that there is no load; it's a staging node, so we don't have any traffic. But the production team is also not far away with automated failover there. So if all goes to plan, we'll have the Omnibus GitLab package with Postgres HA in production on gitlab.com before we even ship this to everyone, which would be a major success for us, I think. Hopefully that does come true.
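As a sketch of what a single-service node takes today, before the "one setting" goal lands, a Sidekiq-only node could be configured in `/etc/gitlab/gitlab.rb` along these lines. The hostnames are placeholders, and the point of the refactoring above is that most of these explicit disables should eventually become unnecessary:

```ruby
# /etc/gitlab/gitlab.rb -- sketch of a Sidekiq-only node.
# Today each unneeded service is disabled explicitly; the goal is one
# role-style setting plus only the required parameters.
external_url 'https://gitlab.example.com'

# Disable everything this node does not run.
postgresql['enable'] = false
redis['enable'] = false
unicorn['enable'] = false
nginx['enable'] = false
gitlab_workhorse['enable'] = false

# Keep only Sidekiq, pointed at the shared database and Redis
# (hostnames here are placeholders).
sidekiq['enable'] = true
gitlab_rails['db_host'] = 'db.example.internal'
gitlab_rails['redis_host'] = 'redis.example.internal'
```

After editing, `gitlab-ctl reconfigure` applies the change, shutting off the disabled services and configuring only Sidekiq.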
We're working really hard for that to happen, but as soon as you have more than two people working together, you get scaling problems, right? The second OKR we have is to deliver service-specific images and Helm charts. We are behind schedule there. We ran into a bit of a naming discussion. Currently, we have four different Helm charts for GitLab, and this is really super confusing. We have community-contributed Helm charts, which are in the official Helm repository. We have our Kubernetes demo that was used to do the "idea to production" demo. We have our GitLab charts that we recently announced, and now the fourth one, which is going to be the service-specific images, or Helm charts per service. So we had to do something about that, and you can see in the link there what the conclusion is. Basically, somewhere down the road we are going to try to get to only one chart as the source of truth. In the meantime, while that is being built, we will have to have something supported, and most likely that is going to be the GitLab Omnibus, or rather Omnibus GitLab, chart that we renamed. If you're curious about that, check that issue. So we are picking up the pace with this OKR. We are starting with developer documentation rather than actual development right from the start. The reason for that is that we want more people to be able to contribute, so that might speed us up a bit. And we'll finish up the registry chart first, so we can help the production team get this running in production as well. After the development documentation is done and we've made some progress with the chart, I'm confident that we'll be able to pick up the pace and iterate much faster on this OKR. And the final, third OKR for the build team: simplify HTTPS configuration. We are starting very simple. We want to improve the initial touch point the user has when installing and configuring GitLab.
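For reference, installing the renamed chart would look roughly like this. This is a sketch: the repository URL and chart name reflect what was announced, and the `baseDomain` and `legoEmail` values are assumptions about the chart's required settings, so check the chart's own documentation before relying on them:

```shell
# Add GitLab's chart repository and install the Omnibus-based chart
# (chart name and values here are assumptions based on the announcement).
helm repo add gitlab https://charts.gitlab.io
helm install --name gitlab gitlab/gitlab-omnibus \
  --set baseDomain=example.com,legoEmail=admin@example.com
```

Once the "one chart as the source of truth" work lands, the expectation is that this chart is what users would migrate from.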
Right now, this means adding the repository, installing the package, opening an editor, changing the URL where GitLab will be reachable, and then running reconfigure. We want to make sure that if you already know your domain, which you should, you only have to run one command. So basically, give us your external URL in the form of an environment variable, say apt-get install, and you're done. We should handle all of the rest for you. Once we have that running, and it should be very quick, since I already have a merge request for that from Baloo, I expect it to happen maybe even in this release. Next, we are going to change the way reconfigure output is printed. Right now it's a very big run list, or rather it's a lot of output that reconfigure puts out. For seasoned engineers, or for people who have had encounters with Chef, this is not a problem, but for people who do not need to know about everything that's happening internally, it gets a bit confusing. If an error happens, they don't really know where to look; it's really easy to miss things in the reconfigure output. So we are going to try to separate what's important from what can be, let's say, informational. Reconfigure output should be informational, and at the end of the run we should have all the errors printed, if there are errors, and if there is an actionable item that the user needs to handle, we'll print that as well. So hopefully with a bit of nice formatting and, I wouldn't say luck, but engineering, we'll get more useful output for the users. And finally, when all of this happens, we get to a point where we want to make sure that the address the user provided is actually reachable. This is a bit more complex right now. We want to merge this work together with Let's Encrypt. So maybe, possibly, we could even use the staging Let's Encrypt API to verify whether the URL is reachable from the outside. But this is still to be defined.
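The one-command install being described would look something like the following on a Debian-based system. This is a sketch of the proposed behavior: `EXTERNAL_URL` is the variable name as I understand the merge request, and the exact package name depends on your edition:

```shell
# Proposed flow (sketch): set up the repository, then pass the external
# URL at install time and let the package run the initial reconfigure.
curl -s https://packages.gitlab.com/install/repositories/gitlab/gitlab-ce/script.deb.sh | sudo bash
sudo EXTERNAL_URL="https://gitlab.example.com" apt-get install gitlab-ce
```

Compared with the current flow, this removes the "open an editor, set `external_url`, run `gitlab-ctl reconfigure`" steps for anyone who already knows their domain.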
In any case, we at minimum want to make sure that you can actually reach the domain that you set when you install GitLab for the first time. This is all a work in progress. This is also going to pick up very fast. As I said, we have a lot of progress happening there, and I'm confident that we will be able to make most if not all of this OKR. What else is on the agenda? Well, we have package signing. A minor thing for some, a major thing for others. Package signing with 9.5 means that all of the Omnibus GitLab packages will be signed. That also means that, by default, we will be enabling this for all new installations. So if you grab a new node and set up your repository for the first time after the 22nd, you'll get a package that is signed. And if you use a YUM repository, so Red Hat Enterprise Linux or CentOS or whatever, the check will automatically be enabled for you. With Debian-based systems, it's a bit more complex than that. They don't have, let's say, a built-in, simple way of tying this package signature check to every repository. But in any case, you will be able, if you need to, to check the signature of the package and make sure that the package you received is actually the package that GitLab provided. A couple of things that I've listed here that I thought were kind of important. Package signing is going to be part of our build process, so the overhead is very low now that this finally got in. We had to have all the supporting requirements satisfied: the Packagecloud repository that we are using needed to support this, and we had a lot of back and forth with security about the way the key is provided to the build. But now it's in the build process, it's simple and fast enough, so I'm kind of happy with that. And we did have to build our own deb signing support.
For the sake of getting this in as soon as possible, we just set it up in our fork, and we will be sending this upstream to the base Omnibus project so that everyone can benefit from this change. And we also provided docs on what the user can expect to get and how they can actually enable this check if they already have a repository set up. To make this transition just a tiny bit easier, we are going to backport this change, the package signing change, to 9.3 and 9.4 for any patch releases that come after the 22nd of August. So after the 22nd of August, any 9.4 or 9.3 package that gets shipped will be signed. The reason for this is that if you have signature checking enabled and you need to downgrade to a package that does not have a signature, you will get a complaint from your package manager. Most likely it will abort the downgrade and ask you to pass something like an "unsafe" or "no GPG check" flag; I'm not really sure, I forgot what the exact name is for each of these package managers. In any case, we want to make sure that by the time signed packages are a thing, all customers and users, even if they are trailing behind, will get a signed package when they upgrade to 9.3 or 9.4, and if they then upgrade to 9.5 and 10.0, in case of a problem they can always come back without any additional flags. There are a couple of good-to-knows here as well. We are hiring, finally. We are looking for a senior build engineer. If you have a suitable candidate, please send them our way; we would definitely appreciate it. With GitLab 10, we are going to remove Postgres 9.2 from the package, for a couple of reasons. First of all, end of life for Postgres 9.2 is September 2017, right around the time GitLab 10 is shipping. We also want to make sure that we get all the 9.6 features that are shipped, and allow the database team to actually be able to use them. Right now they have a lot of guard clauses, like: if PG 9.2, then do this; else do that.
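For what it's worth, checking a package signature and overriding the check for an unsigned downgrade would look roughly like this. The flag names are the standard package-manager ones, nothing GitLab-specific, but the package filenames and versions below are placeholders for illustration:

```shell
# Verify an RPM package signature (requires GitLab's GPG key to be
# imported; the filename below is a placeholder).
rpm --checksig gitlab-ce-9.5.0-ce.0.el7.x86_64.rpm

# On YUM-based systems, per-repository signature checking is a repo
# setting, e.g. repo_gpgcheck=1 in the GitLab .repo file.

# Downgrading to a pre-signing (unsigned) package needs an explicit
# override, which is exactly why the backport to 9.3/9.4 matters:
yum downgrade --nogpgcheck gitlab-ce-9.4.0
# or on Debian-based systems:
apt-get install --allow-unauthenticated gitlab-ce=9.4.0-ce.0
```

With the backported signed 9.3/9.4 packages in place, downgrades from 9.5 onward should not need these override flags at all.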
We want to make sure that's no longer the case. And also, we do want people to move to 9.6 as fast as possible. So you'll have to upgrade to GitLab 9.5 at minimum before you can upgrade to GitLab 10. We are still thinking about how we're going to make sure that users are informed. We have a couple of options, but this is still a work in progress, and luckily we still have a full two to three weeks to figure it out. There are a couple of smaller things that we are going to be removing. The git data directory syntax: we've been carrying the old syntax over for a couple of major releases now, and now it's time to remove it. I also found out just yesterday, when I was looking at something, that we still have gitlab-git-http-server, which was the former name of GitLab Workhorse, inside our package. That was there even in version 7, so it's definitely time to remove it now. We have the GitLab AWS Quick Start. We are working on that; it's being tested currently, and the documentation is being written for it. It's a very exciting thing. It will allow you as a user to just drop it into your AWS account and start everything up very fast if you are on AWS. We are also doing some, I would say, minor things compared to the OKRs, but they are very important. We are changing the versioning of our nightly build packages. This is to allow us to deploy to Canary automatically, or at least enable people to do that. We have also been getting a lot of questions like, why is our nightly package versioned with 8.1? That caused a lot of confusion, and we definitely want to move away from that. We are also going to start, together with the production team and the CI team, so it's going to be a joint effort between all of us, to get dedicated build infrastructure for triggered builds. This is, again, an intro to getting the QA project to be a first-class citizen in all our development workflows.
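For context on the git data directory syntax removal, this is the kind of change affected users would make in `/etc/gitlab/gitlab.rb`. The paths shown are the package defaults, used here only for illustration, and the exact hash shape should be checked against the current docs:

```ruby
# Old, deprecated syntax: a single data directory.
# git_data_dir "/var/opt/gitlab/git-data"

# Newer syntax: a hash of named storage locations, each with a path.
git_data_dirs({
  "default" => { "path" => "/var/opt/gitlab/git-data" }
})
```

Removing the old form means reconfigure would fail with an instruction to migrate, rather than silently supporting both syntaxes across more major releases.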
And we have a lot more things that we've scheduled and that are doable for 9.5 and 10.0, and you can check those out in our issue tracker. I think that's about it from me. I'll check if there are some questions. No questions in chat. Lots of good work going forward. Good stuff on the AWS side. Yep, Joshua is doing great work there, and we are quite excited to get this out. Okay, if there are no more questions, I'll give you back 10 minutes of your life and wish you a great rest of the day. Bye.