That would be great. Clicking in my browser, and it looks like we are streaming. It's the top of the hour, so I'll get started. Everybody can take a look at the Google Doc.

The first question is: why are we doing this on Google Hangouts? I don't know the exact reason, but PeopleOps has asked us to use Google Hangouts and YouTube instead of Zoom for the retrospective and the kickoff every month. If there are any further questions about why we're doing this, please direct them to that Slack channel or create an issue; I think that's the best place to discuss it. I believe there were some problems with streaming, but I can't be totally certain, so I don't want to misspeak. Please go ahead and direct your questions and comments there. Sean has posted a link to this retrospective for the Plan team and is encouraging other folks to do the same, and I believe there's a handbook process saying all teams should be doing that anyway. If there's nothing else in terms of housekeeping, I'll turn it over to Alicio.

Thanks, Victor. On the improvement task from the previous retrospective: we had a problem with Geat 0.3, which was breaking all production deployments last month. Hamhad found out that there was an Azure extension called Linux Diagnostic that was constantly installing this broken package. We've now moved to GCP, so this is no longer a problem, but it was actually fixed even before the move. The next point is Robert's.

Thanks, Alicio. Not too much to update here, as work on this got sidetracked during the summit. We're still working towards adding feature flags for pretty much all new features, as planned. We're going to start with the batch commenting feature as a testbed, which is now scheduled for 10.4. York has begun documenting the process and the need for these feature flags, and the link to that is there. I think that's all we've got for now.
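Robert's update is about gating new features, such as batch commenting, behind feature flags so the code can ship disabled and be switched on or off at runtime. Below is a minimal sketch of that idea in Python, assuming a simple in-process flag store; the flag name `batch_commenting` and the helper functions are illustrative assumptions, not GitLab's actual implementation.

```python
# Illustrative sketch of gating a feature behind a flag.
# The flag name and the in-memory flag store are assumptions for the example,
# not GitLab's actual implementation.

FEATURE_FLAGS = {
    # Flags default to off; enabling one is a runtime change, not a deploy.
    "batch_commenting": False,
}

def feature_enabled(name: str) -> bool:
    """Return True if the named feature flag is turned on."""
    return FEATURE_FLAGS.get(name, False)

def submit_review(draft_comments):
    if feature_enabled("batch_commenting"):
        # New path: hold comments and publish them together.
        return publish_batch(draft_comments)
    # Old path: publish each comment immediately.
    return [publish_single(c) for c in draft_comments]

def publish_batch(comments):
    print(f"publishing {len(comments)} comments as one batch")
    return comments

def publish_single(comment):
    print(f"publishing comment: {comment}")
    return comment

if __name__ == "__main__":
    submit_review(["looks good", "nit: rename this variable"])
```

The point of the pattern is that the new code path ships dark and can be enabled or rolled back without another release, which is what makes batch commenting usable as a testbed.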
Tim, I think you're up next.

Sure. That question regarding the problem with the scheduling has been addressed; thank you to whoever linked to that Slack conversation. Stan, you're up next. Stan, are you on the call? If not, does James Lopez want to jump in? Or we might need to loop back around. ZJ, why don't you go ahead, and if Stan comes back we can circle back to him.

Yeah, sure. This is something we get to celebrate this month. For the last year, I guess, we've been working towards no longer using NFS for Git access, and today we unmounted most NFS drives. That's a huge milestone, at least for the Gitaly team, but I would say for the whole of GitLab, as one NFS drive used to be able to take all of GitLab down. I also want everyone to be aware that this will impact our customers: from the next release forward, they can do the same, because we removed the last feature flags, making it available to the whole community.

I don't see Stan yet in the participants, so Eric, I'll hand it over to you.

Thanks, ZJ. That's a huge milestone; no more NFS is great. My item was just to celebrate that we took a learning from the Crete summit in 2017 and applied it to this 2018 summit. We did some pre-release capacity planning to take into account the lower bandwidth that all teams would have. This happened, I believe, back in July, and it seemed to have a positive impact: there were fewer fire drills, and there was less to trade off because the release wasn't fully packed to begin with. Now we want to make sure we do this going forward for future summits, or any future events where we might have dramatically less capacity than normal.

So, on to what went wrong. Bob is not on the call. James, if you're there, maybe you want to speak to this. I think James also dropped out of the call, so I can speak about this a bit. Yes, we had a security patch that we couldn't release in all of the RCs, so we had to keep doing post-deploy patches, which was not ideal. Some questions there: should we have Takeoff apply post-deployment patches? Actually, we should probably consider how to change our processes so that we don't have to do any post-deployment patches at all, because those have proven to be problematic. But we are moving towards deploying via chat, so we'll see how that goes.

Stan is up next, if he's joined. He hasn't; he's on the live stream, I think, because Hangouts ran out of seats. All right, I can at least read his item. We had a priority one regression in 11.2 for LDAP users. And I'm just looking up what the other annoying regression was: an incorrect commit count per push and in push events. So I guess we need to do something about that. Eric, you're up next.

Thanks. This was a really minor thing, but we didn't bump the version number, so production read RC 10 or 11 for about a week. Bookkeeping is something we want to make sure we do well. There was a proposal to hot-patch it, but we made the call to manage the risk and not do another deploy just for a version number while people were getting on airplanes. I think that was the right call, but it's something minor we should be able to take into account next time. And then, I don't know, back to Stan.

Yeah, I added that. Let's see if Stan's here. Stan, I can see you. Can you speak? I think I can hear you.

Yeah, can you hear me OK?

Yes, we can hear you.

Yeah, where do we start? I'll just start with what went wrong. I think there were two main things. The LDAP one was annoying: people were not able to clone with LDAP enabled because of a regression, and everybody was on a plane, I think, at that time. The second one was just a bug that was around for two weeks. We had Mark Fletcher triage, I think, ten other issues where people said, hey, this is a problem, and he basically closed all those other issues, but nobody was able to fix the bug in time. The second problem actually showed up two weeks earlier in an RC, and nobody had gotten around to fixing it. Part of it comes down to capacity: we released on the 22nd, and before that there was hardly anyone around to fix important issues. That's all I wanted to bring up there.

As for what went well: the GCP migration, I think, generally went well. We rehearsed over and over and found a lot of issues, and in the end it went smoothly. There were a bunch of things that didn't get migrated over in the Geo migration, but we're addressing those in 11.3. For example, the default branch for repositories: we had about 200,000 repositories where, if somebody had changed the default branch from one thing to another, that change wasn't replicated. We went back and just resynced them, so I think that's solved, and we're pushing out a fix in 11.3 to handle it.
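Stan's default-branch example amounts to comparing each repository's default branch on the primary with the secondary's copy and re-syncing the ones that don't match. Below is a minimal sketch of that kind of backfill, assuming simple in-memory records and a hypothetical `enqueue_resync` helper; it is not GitLab Geo's actual code.

```python
# Hypothetical sketch of backfilling default branches that were not replicated.
# The record structures and enqueue_resync() are illustrative assumptions,
# not GitLab Geo's actual implementation.

primary = {
    "group/project-a": "main",
    "group/project-b": "develop",   # default branch was changed on the primary
    "group/project-c": "master",
}

secondary = {
    "group/project-a": "main",
    "group/project-b": "master",    # stale: the change never replicated
    "group/project-c": "master",
}

def enqueue_resync(repo: str) -> None:
    # In a real system this would schedule a background sync job; here we just log it.
    print(f"re-syncing {repo}")

def backfill_default_branches(primary, secondary):
    """Find repositories whose default branch differs and queue them for re-sync."""
    mismatched = [
        repo for repo, branch in primary.items()
        if secondary.get(repo) != branch
    ]
    for repo in mismatched:
        enqueue_resync(repo)
    return mismatched

if __name__ == "__main__":
    backfill_default_branches(primary, secondary)   # prints: re-syncing group/project-b
```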
How can we improve? ZJ? Yeah, somewhat obvious, I guess, but just to mention it: not everyone who wanted to talk could join the call, and some dropped out and couldn't rejoin, because we hit the limit of 50 participants, apparently. I'm not sure whether we can pay Google for more participants, whether we should go back to Zoom, or what the reasons were for using Hangouts in the first place, but it's something we should investigate.

Yeah, Victor, is this something you'd be able to work out with PeopleOps? We tried this experiment; the live stream is presumably working, but we're not able to collaborate effectively. So maybe you could work out with them either defaulting back to Zoom or raising the limits in Hangouts.

OK, we'll take this action right now. If there's nothing else, I'll end the call. Anything else from anyone? All right, see you folks in the kickoff, I guess. Bye now.