All right, it's the top of the hour, so let's get started. Welcome, everybody, to the functional group update for Geo. I'm Stan Hu, interim manager for the group. Just a recap first; let me get my slides moving here. Hang on a sec... here we go. This is, in a nutshell, what Geo is. I'm sure many people have seen this before, but I'm showing it again in case we have new people. Geo's responsibility is basically to copy all the data you have from the primary to the secondary. Currently we have a setup running between Microsoft Azure in Virginia and Google Cloud Platform, also in Virginia, and this is actively running today.

On the team: no changes since last time. Same great group of folks working on this product. We have made one change since our last update, though. Before, we really had only one Geo maintainer. Geo is obviously a subset of EE, so a lot of merge requests were being assigned to other EE maintainers, but Geo does have specific domain knowledge that we need. So we've made Douglas a Geo maintainer. He's done a great job reviewing merge requests; he's been on the project for a long time and knows the ins and outs of it. This will hopefully help reduce the bottlenecks in the review process. We had a problem in the last review cycle where four merge requests were still being reviewed a week before the release. It looked like they were going to make it, but between the migration and the lack of review bandwidth, we were not able to get a lot of them in. So hopefully this helps, and it's long overdue anyway. Thanks a lot, Douglas, for all your work. You deserve this; it's an honor to be a maintainer.

Accomplishments for 11.0: there's a lot we did this past month. In 10.8, we shipped the ability to push to the primary almost transparently: using HTTP, if you push to the secondary, it will redirect you and the push automatically goes to the primary. Now, that didn't work for LFS at the time, and it took a lot of investigation to figure out why. It turns out it was actually a bug in the Git LFS client: it wasn't able to handle redirects properly. For this release, we've shipped the changes necessary in GitLab to support this transparent HTTP push for LFS too. Again, it's a redirect, but it also required a change on the client side, and that was fixed by Taylor at GitHub in Git LFS 2.4.2, which was released last month or so. Really great work there: we identified the problem, we raised the issue, and he jumped on it right away. So thanks a lot for that; it's going to help our customers a bit too.

We've been working for a while on verifying that the references in the Git repositories are correct between the primary and secondary. There have been a lot of cases where things got out of sync for some reason. This verification wasn't on by default previously; now it is. There was a lot of work done by Douglas this past release to make it production ready: fixing a lot of the SQL queries, fixing a lot of the cases where things were out of sync. You can look at this merge request for all the details.

We've also had this feature for a while that lets us move large repositories by streaming a sort of tarball snapshot instead of doing a Git clone. We've been using it on gitlab.com to move some of the more problematic repositories, the ones where a simple clone just takes too much bandwidth or too many CPU cycles to create. So we have this ability to snapshot the repository and move it over, and we turned it on by default in 11.0. Just be aware: if a customer starts seeing repositories sync that weren't being moved before, and they're now able to be cloned, that is likely the reason why.
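To make that concrete: conceptually, instead of negotiating a normal Git clone, the secondary just pulls a tar stream of the repository directory as it sits on disk. Here's a minimal sketch of the idea; the function names and transport are hypothetical, not GitLab's actual implementation:

```python
import tarfile
import requests

def stream_snapshot(repo_dir, out_stream):
    """Primary side: tar the bare repository directory straight into a
    stream, skipping the expensive object-graph walk a clone requires."""
    with tarfile.open(fileobj=out_stream, mode="w|gz") as tar:
        tar.add(repo_dir, arcname=".")

def fetch_snapshot(snapshot_url, dest_dir):
    """Secondary side: download and unpack the snapshot; a regular
    incremental fetch afterwards brings the copy fully up to date."""
    with requests.get(snapshot_url, stream=True) as resp:
        resp.raise_for_status()
        with tarfile.open(fileobj=resp.raw, mode="r|gz") as tar:
            tar.extractall(dest_dir)
```

The trade-off is that the archive is larger than a well-packed clone, but producing it is nearly free for the server, which is what matters for the problematic repositories.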
Toon did a lot of work this past release to make the Git fsck run properly on the secondaries, because before it was not. If you turned on the repository check feature in Geo, it would only run on the primary, never on the secondary. So, to verify that the objects are actually correct, this feature that today only looks at the primary will, in 11.0, check the secondaries as well.

And lastly, Mike spent a long time chasing down some nasty race conditions and other issues with uploads and object storage; that should go into 11.0 with this merge request as well. We've been chasing down a lot of the failures we've seen while doing this migration to Google Cloud, and it's been an incredibly helpful testbed for flushing out issues you wouldn't normally see on a low-bandwidth or low-traffic site.

On top of all this work, we've been helping a lot with the Azure-to-Google migration failovers since the last update. I think we've conducted four of them now. Brett and Nick have been coordinating that effort. We find new issues each time we do this migration, but we're getting better at them, and they're getting smoother each time. You're welcome to follow along; the next rehearsal is tomorrow, I believe.

So, where do we need help? Well, we're finding a lot of issues with using Postgres as a queue for the secondaries, so we're talking a lot about how we can improve the performance of our queries. Some of these things might actually require Postgres 10, which has a lot of new features, including logical replication and something called aggregate pushdown, which basically lets you run a count across different databases. We're just looking at ways to make these things work better at scale, for databases at scale. And we have one nasty issue with our queue: some of the events we're processing are being skipped due to the way Postgres behaves with sequences. We're looking at that now. We'll probably need help from the database team, and probably from the build team, to see if we can ship Postgres 10 in the next quarter.
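For anyone curious about that sequence issue, it's a known hazard of polling a sequence-ordered table as a queue. Here's a sketch of the failure mode; the table and column names are made up for illustration and are not Geo's actual schema:

```python
# Sequence values are assigned at INSERT time, but rows only become visible
# once their transaction commits, and commits can happen out of id order:
#
#   tx A: INSERT -> id = 100  (commits later)
#   tx B: INSERT -> id = 101  (commits first)
#
# A consumer tracking a high-water mark can see 101, advance its cursor,
# and then never notice 100 when tx A finally commits.

import psycopg2  # assumed driver; any Postgres client shows the same behavior

conn = psycopg2.connect("dbname=geo_example")  # hypothetical database
last_seen_id = 0

def poll_events_naively():
    """The buggy pattern: assumes ids become visible in ascending order."""
    global last_seen_id
    with conn, conn.cursor() as cur:
        cur.execute(
            "SELECT id, payload FROM events WHERE id > %s ORDER BY id",
            (last_seen_id,),
        )
        for event_id, payload in cur.fetchall():
            process(payload)
            last_seen_id = event_id  # id 100 is skipped forever if it
                                     # commits after 101 was processed

def process(payload):
    print("processing", payload)
```

Common mitigations are to mark rows as processed in place rather than tracking a high-water mark, or to only read up to the oldest transaction still in flight.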
We're also finding a lot of new bugs as we migrate things to object storage. For example, when we migrated attachments to object storage, we found a lot of different bugs around things that weren't quite working: moving issues, exporting projects. I think Mike found a problem where, if there are white spaces in the file names, things get into a weird state. So I think there might be more things like this out there, and we'll need help from the development team and other parts of the organization to find out whether there are other issues like this, because it will help solve a lot of the problems we have with migrating attachments to Google.

The other feature I've talked a lot about in the past is this idea of immutable paths for our repositories. We'd like to get this enabled in production as soon as we can. We've converted all 500 projects on our dev site to use it, and I think we've ironed out a lot of the issues in the past two months. So I think it's just a matter of getting a sign-off from everybody that we should turn it on, at least for a limited time, on gitlab.com, because I think it will solve a lot of the bugs we've been seeing.

So what are our plans going forward? The first thing that's top of mind is finishing this Google migration. We currently have a date set for July 28th. We're going to have a call, probably in the next day or two, to figure out whether that date holds or whether we need to delay it, but right now that's what we're shooting for. Beyond that, the three main things for the next months are all about improving Geo performance and usability.

As I mentioned before, we have transparent push over HTTP. Again, it's a redirect, so it's not the smoothest thing, and most people push over SSH anyway. Ash has done some great investigation into how that's actually going to work, and it's really tricky: we're essentially proxying the Git HTTP traffic over SSH. Ash has this working in a proof of concept, which shows it's possible. This will be really cool for people: they won't have to think about which remote they're using; they can just do git push and it will go to the right place.
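To give a feel for the approach, here's a rough sketch, not Ash's actual proof of concept: the URL, repository path handling, and authentication are all omitted or made up. The idea is that when sshd runs git-receive-pack for a push on a secondary, a shim streams the Git wire protocol to the primary over HTTP instead of touching the local disk:

```python
#!/usr/bin/env python3
"""Sketch of relaying an SSH push on a secondary to the primary over HTTP.
The real Git smart protocol is more involved (it starts with a ref
advertisement from /info/refs?service=git-receive-pack), and auth is omitted."""

import sys
import requests

PRIMARY = "https://primary.example.com"  # hypothetical primary URL
REPO_PATH = "group/project.git"          # would be parsed from the SSH command

def main():
    # Stream the client's pack data (arriving on the SSH session's stdin)
    # to the primary's receive-pack endpoint...
    resp = requests.post(
        f"{PRIMARY}/{REPO_PATH}/git-receive-pack",
        data=sys.stdin.buffer,
        headers={"Content-Type": "application/x-git-receive-pack-request"},
        stream=True,
    )
    # ...and relay the primary's response back over stdout, so the user's
    # git client sees an ordinary push result.
    for chunk in resp.iter_content(chunk_size=8192):
        sys.stdout.buffer.write(chunk)
    sys.stdout.buffer.flush()

if __name__ == "__main__":
    main()
```

The appeal of this shape is that the secondary never has to understand the push itself; it just pipes bytes, so git push works against whichever remote the user happens to have configured.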
We also have a lot of database queries that need to be optimized. As I said before, we're doing a lot of things that touch a lot of rows; Postgres 10 might help here, as might smarter uses of logical replication.

And the last thing we're working on for this release is exposing more information in the UI. A lot of the information about why a sync failed, or what happened to it, is available in the API, but you can't actually see it in the interface itself. Cushal and Gabriel are working on this, based on a screenshot Hazel created, to make it possible for admins to see what went wrong and to click a button if they want to force a re-sync, for example. So they'll have better visibility into what's happening on their Geo instance.

That's really all I had to say. Are there any questions? I don't see anything in chat at the moment, but I'll open it up to the floor in case there's something else people want to ask about.

First off, this is great, and I love all the links to the merge requests in the FGU; it's awesome to be able to dive into things. I think GitLab Pages is the last thing that's not supported by Geo. Are there any plans for that, or is it still unscoped?

It's on the radar. We haven't planned it for the next month or two, but I think it's probably worth revisiting. There's the Docker registry and other things on my mind, but yes, Pages is definitely something we need to look at. We haven't put a priority on it because we weren't doing it for the migration, but I agree it's going to be important; we're finding that not having Pages is a pain point for us right now.

Oh, and you're totally right, the Docker registry might be, in fact I'm almost certain it is, more urgent for our customers than Pages. Maybe I should say why: now that we have a Docker registry, Dimitri's going to add Maven support to it, and one of the main things tools like JFrog Artifactory do is spread the files across the world so everyone has quick access. So that seems like a use case where Geo makes sense.

Great, well, thanks for raising that. Yeah, I think we'll probably put more emphasis on the Docker registry. I mean, we have a story right now: if you use the same shared S3 bucket, it will at least work with Geo, but it doesn't actually replicate anything automatically.

Ah, that makes sense. So it's not a problem for us because it uses object storage; that's why. GitLab Pages isn't that case, though. Could we use S3 buckets for GitLab Pages, or doesn't that make sense?

I think there's a discussion about making Pages an artifact in object storage and then downloading the artifact, so I think it makes sense. There's an issue open about how we'd actually use Geo with Pages; I'll have to link that.

Cool, makes sense.

Great, well, thanks everybody for your time, and I'll see you next time.