All right. Hello and welcome, everyone. Thank you for joining us today. We're very excited to have everyone on this webcast. We'll be covering an interesting topic: GitLab implementation tips for performance, stability, and recovery. My name is Zach, I'm on the marketing team here at GitLab, and I'm joining you all from Raleigh, North Carolina today. We'd love to hear where everyone's tuning in from, so please use the chat tool to say hello and tell us where you're joining from. Before we get started, I'm going to cover just a couple quick housekeeping items. First, feel free to ask questions throughout the presentation. You can do that using the Q&A function at the bottom of your screen, and we'll get to as many as we can at the end of the presentation today. Also, if you're having any technical issues, you can use the chat function to get in touch with me for some assistance along the way. Our presenter today is Joel, a Solutions Architect Manager who is joining us from Chicago, Illinois. And we're going to start off with just a couple quick polls so we can learn a little bit more about you. That way, Joel can tailor today's presentation accordingly. So with that said, I'm going to open up our first poll, which is a simple question of which GitLab package you're on. I'll leave this up for about 30 seconds and then we will get started with today's presentation. All right, Joel, it looks like we've got a pretty even mix across the board of packages. So without further ado, I'll turn things over to you. Yeah, thanks, Zach. You're right, that's an incredibly even distribution of which GitLab packages people are using, almost identical across the range, so that keeps things interesting. Well, welcome, everybody. Thanks for joining us today. We're going to be talking about some GitLab implementation tips, and we're specifically going to be dialing into some of what the premium offerings of GitLab entail. So we're going to be talking about GitLab Geo and how to use that for disaster recovery. We'll be talking about GitLab high availability: what it means, what the components are, how we break that out, and even an example of how you might roll this out in terms of number of nodes and node sizing for different options. So we're going to get into all those different things today. And we'll also talk about a few examples of things that I've seen over the years. I've been here at GitLab for a while, and of course that means I've been part of outage calls, pre-sales conversations, and implementation planning. So from that perspective, I wanted to share some of that knowledge with you today. Here's what our agenda looks like: we're going to dive first into the concepts around GitLab Geo. If you've never heard of that, it's nodes that we can roll out in a primary/secondary type relationship, but they are not active-passive; they are truly more active nodes. Okay. And then we're going to dive into GitLab high availability. We'll talk about all these things, and you'll notice we'll even touch on AWS because it's such a frequent conversation for us. And then at the end, I'm going to go through just a couple of problems that I've seen resolved, what those stemmed from, and what things to watch out for. So without further ado, let's jump into why we're talking about this in the first place. This comes directly from an example of an outage that I actually witnessed.
This is one where we've stated really conservatively that a developer might make 65 bucks an hour. That's almost more of a low-end freelance rate these days versus a fully loaded developer. So I think it's really conservative to say that if you're down for a day, you're capable of losing a half million dollars across a thousand developers. Now, at the end of the day, the problem here is not about how many users you have times that calculation, but really the fact that one day of outage can actually cost you more than your purchase of GitLab. We never want this for anybody. Obviously, this is a significant thing, and it's perfectly avoidable. When we've seen something like this in the past, it's because someone has tried to overextend a core offering of GitLab, something like a single server trying to serve 5,000 people, something that's just not properly configured. So we're trying to help avoid that situation. Well, Geo is the first way that I can recommend to avoid that situation. And Geo is often overlooked as the easiest way to improve performance and business continuity across an organization that's not a large, large size. We're not a full enterprise; we're talking maybe 300 to 500 users or something like that. Obviously, this works for much lower user counts as well. But the map here gives an idea of how we commonly would see something like this rolled out. Now, an important thing to note: the red dot in the middle might be the primary, and the secondary nodes are all around it in the amber or orange color in different geographic locales. Depending on how you're using Geo nodes, they could also be located right next to each other. Okay, so that's important to note. But the performance reasons to use Geo are quite simple. The users that are interacting with each of those nodes can now shorten how long it takes to get clones of their Git repository. So if you're doing a lot of cloning, or if you're dealing with large files at all, this is a no-brainer. We can reduce cloning times from minutes to seconds in those remote environments. It also helps us balance user load, because if you've got 500 users across those four locales, it may not be an average of 125 users per node, but it just might be. Okay, so you can reduce the load and the requirements for your high availability functions by simplifying with a Geo rollout model. This is also the fastest path to GitLab disaster recovery. Okay, this is what we outline in our documentation: if you set up a Geo node, you can now do secondary node failover. In the end, we simplify it further, because not only has the database been syncing in the background, but authentication is inherited as well, so LDAP and single sign-on configurations can now pass to the secondary. Now, it's important for me to note here, this is a manual process; this is not a click-button switchover. But it is still by far the fastest way to disaster recovery when the primary node goes down. This is the simplest method to manage because all of this is in one UI; it's all within the GitLab UI, and you can see each of these nodes and their health. From a deployment perspective, you're rolling out basically four identical nodes: you're taking our Omnibus package, doing a simple rollout of that server, and doing the same thing again and again and again. And as I said, that replication is kind of a one-to-many method.
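Just to make that rollout piece concrete, the role assignment on each node is only a couple of lines in the Omnibus config. Keep in mind this is just a sketch of the role settings, not the full setup; the database replication, firewall rules, and registering the secondary in the admin UI are separate steps, and exact key names can vary by version.

```
# /etc/gitlab/gitlab.rb on the primary node (sketch; key names vary by version)
roles ['geo_primary_role']

# /etc/gitlab/gitlab.rb on each secondary node
roles ['geo_secondary_role']

# Apply the change on every node after editing
sudo gitlab-ctl reconfigure
```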
From that perspective, it's really nice. It keeps things simple, and complexity, of course, is the enemy of simplicity; it grows in an almost logarithmic way once you start to scale up your organization and your number of users. Well, what does Geo do under the covers? Essentially, I talked about a streaming replication of the database; that's essentially a mirror from the primary to the secondary servers. So that mirror of the primary going out to the secondaries is occurring in a streaming fashion, but notice that the groups interacting with each of these servers are actually getting their pushes proxied back to the primary to keep the primary up to date at all times. So essentially what's happening here is you've offloaded some of the user load from the primary, but the primary is still taking those proxied pushes so it can stay up to date, which in turn streams that mirrored content back to the secondaries so that the next person to pull is getting current information. To break this down a little further: note there's the Postgres streaming replication and the replication of the authentication I talked about. You've also got the transfers of things like your artifacts and repository data, with the push proxy being the only one with an arrow coming back to the left. There are more things here that are and are not supported, and we'll touch on that in just a minute, but one of the newer things we've added here is the Docker registry, which can also be hosted on the secondary node. So when you think about it from the perspective of operating CI in region, now you're actually reducing the time to pull from your local registries as well. So what are some tips around rolling this out? Well, first and foremost, roll these secondary nodes at will. You don't have to have your whole design up front; you can get one going, get it tested out, make sure your network connectivity is in good shape, make sure it's what you want, and then you can roll the other nodes at a later time. So it's nice and simple, and you can do staged rollouts. Failover plan: I mentioned earlier this is not an automated thing. Okay, so today the primary method is, hey, we have an outage, somebody goes in as root on the secondary server, reconfigures GitLab, and promotes it to be the primary server. There's always the issue of DNS: where is your DNS pointing, and how are you going to fix that? Is that something you could script and try to automate in case of an outage? We see various things get scripted with a manual step to approve, and some people just rely on the manual steps, but make sure you've got them not only defined but at hand, so you're ready just in case something does happen. Now, I mentioned the limitations of the secondary; understand what those are. Things like maybe an npm registry that might be hosted on the primary but isn't replicated to the secondary. What are the components that you might have to look for, things that you might have to point to, things that may not have been synced to the secondary? That's an important thing to note as well. Use oversized nodes in case of failover. This may not be all your nodes, but I would think at least one of the secondary nodes that you might promote in case of a disaster should be oversized. The reality is, if you have 125 users on each node but one of them fails, you might jump to 250 rather quickly. Those numbers may or may not be a problem for that secondary node, so you want to keep that in consideration as well; don't undersize the secondary node.
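To give you a feel for what that failover plan looks like on the command line, here's a rough sketch of checking a secondary and then promoting it. It's illustrative only; the promotion command has changed names across versions (newer releases use gitlab-ctl geo promote), so keep your own runbook aligned with the version you're running.

```
# On the secondary you plan to promote: confirm it's healthy and caught up
sudo gitlab-rake gitlab:geo:check
sudo gitlab-rake geo:status

# If the old primary is still reachable, stop it so it can't take writes
sudo gitlab-ctl stop

# On the secondary, as root, promote it (command name varies by version)
sudo gitlab-ctl promote-to-primary-node

# Then repoint DNS (or your load balancer) at the newly promoted node
```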
Another tip: you can use selective sync to limit the data exchange. On that map I showed in the first picture, perhaps the secondary node in New York is the one that needs to be oversized, but in India and China, where the third and fourth nodes were, you might want to use selective sync. Maybe there are some networking concerns, or maybe there's a limited number of projects being worked on in those regions, and you can increase the performance of those nodes by limiting how much data has to be transferred back and forth to them. That's just something to think about. The screenshot here on the bottom highlights the Geo nodes. You can see the primary and the secondary, and note that the secondary does come with a health check. So you'll be able to see each of the nodes from this space, see if they're up or down, and see what kind of errors might be associated with them. In fact, in this case you can see that I've got some failed repository syncs between those nodes, so the mirror is not fully successful. And you can see all the different data that's moving back and forth; note there's a selective sync setting enabled. So again, the primary here is mirroring to the secondary, and the idea is, once that primary has something significant go wrong with it, we can promote the secondary server. And that, again, is something that will show up in the UI here, but it does require manual interaction on that secondary server to do so. So from that perspective, the question I'd love to ask is: are you doing anything for disaster recovery today? I think we've got a second poll for you, asking whether anybody's using Geo today or doing any kind of regular backups. Looks like some of you are doing regular backups, and there is someone using Geo today, which is great. Geo is definitely our recommended approach for this. Now, once we get going with this conversation, we're going to move into high availability shortly, and I want you to keep in mind what you saw here in Geo. The reason is that once high availability discussions start taking place, you can actually do high availability rollouts and incorporate the Geo side of things, so that you have that redundancy and that disaster recovery in a fully highly available environment. Okay, so what you're seeing right now is more the business continuity side of things as well as the performance side of things. But what happens when we grow beyond that, when we go bigger, or we have a workflow that requires us to consider something more than single-node rollouts? We can roll GitLab high availability. GitLab high availability means we're breaking out the components of the GitLab application server and its other supporting structure into their own environments, into their own nodes. That's something that allows us to support tens of thousands of users in your environment. This is the same code that we roll on gitlab.com; you're getting the same packages. So from that perspective, you are theoretically able to support millions of users, like gitlab.com does. The thing to look at here is balancing complexity with cost. What is the cost of downtime, what is the cost of compute, what are the things that you're trying to contain, while at the same time making sure that you have the performance but haven't over-engineered the system?
A highly available, highly complex system may have worse performance than a simplified system with a couple of Geo nodes, depending on how many users you have. And that's a common mistake that I see made. So you want to know what that workflow looks like. Now, from the advantages perspective: obviously, if the components that are broken out are being stressed, or are over- or underutilized, you can make adjustments accordingly just by looking through some GitLab logs, so that's a point in favor of GitLab high availability. You can also troubleshoot and scale at the component level; you can get a little more granular with things, so when something is going wrong it's less of a black box. And of course, no-downtime upgrades: while you can do that on a regular holistic GitLab server, it is notably easier when there are two of everything. Of course, GitLab can be installed in the environment of your choice, and HA is no different; it can go behind the firewall or in your cloud of choice. But what's most important here is customizing the installation based on your specific needs, and I can't say this enough: there is no prescriptive installation of GitLab high availability. You have to iterate your installation based on your workflows and your usage patterns. That might mean taking large files into consideration: are you a gaming company producing gigantic files as your outputs? Do you have large repos overall, or large monorepos that you're dealing with? Or do you have a large system-integration type environment where you've got extremely long pipelines running, versus a microservices-based architecture of some kind where you might have an inordinate number of commits kicking off pipelines, so you've got more simultaneous pipelines versus one long one? All of these various things impact which pieces you need to be concerned about when you roll GitLab HA, and we'll go into those components right after this. You're looking, though, to eliminate single points of failure in all areas of GitLab. And since we come bundled with Prometheus and Grafana, monitoring all those nodes is simpler: once they're up and running, you can actually monitor them in real time with the products that are included with GitLab. So there are two types of scaling. One is the horizontal scaling model; the other would be the fully distributed model. And of course there's a way to hybridize that and be flexible and roll out more variations on the theme. But we'll start here with a very simplistic model of taking the whole application server and duplicating it: you're replicating the GitLab application server and putting a load balancer in front of it, and you're breaking out the components around storage, Redis, and Postgres. A couple things to note: this diagram is not giving you an accurate count, per se, of what you're going to break out, but you can see it does point to NFS. Now, GitLab doesn't have anything to do with NFS's redundancy or high availability; that's going to be dependent on the NFS provider. Also, when you get to Redis, you're going to have to have three nodes for Redis high availability; you have to have a quorum there or it will not be truly highly available. Again, that's specific to Redis. But when you start breaking these out, note the number of nodes. When we were talking about Geo, we were talking about single nodes; I had four of them rolled out across the globe and I was able to easily support my 500 users.
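As an aside, to show what that three-node Redis quorum looks like in practice, here's a rough sketch of the Omnibus settings involved. The IPs and passwords are placeholders, and the role and key names have shifted between versions, so treat this as illustrative rather than a drop-in config.

```
# /etc/gitlab/gitlab.rb — Redis master node (sketch, placeholder values)
roles ['redis_master_role']
redis['password'] = 'redis-password-here'

# Replica node(s)
roles ['redis_slave_role']            # newer versions call this redis_replica_role
redis['master_ip'] = '10.0.0.10'      # placeholder master address
redis['master_password'] = 'redis-password-here'

# Sentinel nodes — three of them, so any two can form a quorum
roles ['redis_sentinel_role']
redis['master_name'] = 'gitlab-redis'
redis['master_ip'] = '10.0.0.10'
redis['master_password'] = 'redis-password-here'
sentinel['quorum'] = 2
```

That quorum requirement alone is part of why the node count climbs quickly compared to those single Geo nodes.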
In this case, if I'm supporting 500 users, you'll see that I require about 14 different nodes to do so in a horizontally scaled model, because now we're breaking things out from the core. If you need more control than that, based on your specific workflows, we want to talk about a fully distributed model. In this case you can see that things have changed a little bit: the application servers are now broken down further into their front-end and back-end components. You can see we've separated out the traffic flows after the load balancers, and we've got multiple NFS or shared storage nodes there. One thing that's missing from this diagram is Gitaly. Gitaly I'll talk about in a little bit, but essentially it can replace the NFS requirement. It's the processing unit that manages the relationship between the end user and all of the Git traffic within the GitLab application. Now, it's nice that that can eliminate NFS, but we still might have some shared storage requirements for other artifacts within GitLab. That's something that will be talked about again shortly. The most important thing that I've found is breaking out Sidekiq. Those are the queue managers, so they're helping us with traffic, and I always focus on the CI pipeline queue; it's one of the most common ones I've had interaction with. So if you break out a Sidekiq queue for pipelines and it's insufficient, it'll be the first thing that shows up in your high availability system. We're going to be looking for bottlenecks on each of these items and want to prevent those at all times. But this is an example of what that fully distributed model might look like. I'm going to mention AWS here just because about 70% of the time, when people are rolling out a production instance of GitLab in a private cloud environment, it is going into AWS. So if it's going into your cloud, 70% of the time my conversations center around AWS, so I wanted to mention it here. Now, there are a couple of things happening. One is, you'll note if you've looked that GitLab deploys into a lot of different environments. We do have a Kubernetes Helm chart that is available. I'm not going to talk about that specifically today; it has a little different scale to it, and quite honestly, the rock-solid Omnibus installation that I'm used to using is the one I'm going to focus on, just because of its maturity and its extensive use across our user base. I do see us moving further into that Kubernetes space over time, though, and using a lot more of the Helm chart. The common components we use in that case are EC2 for the application servers and EBS for Git data storage, and again, you can rsync that to a hot standby so that you've got a backup of it. We typically use S3 for artifact storage and ELB for front-end load balancing. RDS provides you HA Postgres, so you don't have to think about how to roll HA Postgres yourself. Same thing with ElastiCache for Redis; it provides the HA for you. So you've got some simplification of your rollout when you utilize some of those cloud offerings. The one thing I will point out is that we called out EBS for Git data storage, not EFS. With EFS, we've had some problems with the IOPS as it relates to Git data, and just some compatibility issues there. That's not blame being placed on AWS or on GitLab.
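Since S3 for artifact storage comes up in nearly every one of those AWS conversations, here's roughly what pointing artifacts at S3 looks like in the Omnibus config. The bucket name and region are placeholders, and the exact keys depend on your GitLab version, so check the docs for the release you're running.

```
# /etc/gitlab/gitlab.rb on the application nodes (sketch, placeholder values)
gitlab_rails['artifacts_object_store_enabled'] = true
gitlab_rails['artifacts_object_store_remote_directory'] = 'my-gitlab-artifacts'   # placeholder bucket
gitlab_rails['artifacts_object_store_connection'] = {
  'provider' => 'AWS',
  'region'   => 'us-east-1',          # placeholder region
  'use_iam_profile' => true           # or supply access/secret keys instead
}
```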
Coming back to EFS, though, I'll point kind of squarely at Git and say the way Git conducts its traffic causes some headaches with EFS. So we know there's a compatibility issue there, especially as the user count rises, and it does not have to rise far to create performance issues inside AWS. So the recommendation there is to avoid EFS for storage. As far as the application architecture goes, this is just reiterating some of the things that we talked about already. Gitaly, you'll notice, is tied into a lot of different areas; it is that Git traffic coordinator, if you will, between the different components of GitLab. The other reason I wanted to show this one, though, is so you can see the ports called out: 22, 80, and 443 are the ports required for external communication, for interaction with your user base and with any other platforms, tools, and ecosystems beyond GitLab. So those are important things to note as well, and I just wanted to bring this application architecture picture up to give you that little insight. So to recap what the important components are, here's some of what those components do. The application node, again, is the GitLab application node; we can break that out into Unicorn/Puma and Workhorse workers so that we can separate out web requests. That was what you saw in the distributed model: breaking out front-end and back-end components from the GitLab application server, which, again, under about 500 users, maybe even 1,000 users, I don't know that I would do; I would keep that application server together to reduce complexity. Sidekiq: again, that's your queuing mechanism for all the different jobs that are running in the background, so we want to have enough of those nodes. An interesting note is that this is something we're considering breaking out further and being able to utilize with other areas of GitLab, producing not just high availability but different configuration options going forward. So note: even if you're just using the core application server, you don't have to break everything out; Sidekiq alone can be broken out and scaled separately. Postgres: PgBouncer, of course, is an important part of that. You want to make sure that's installed correctly, typically on the application servers or somewhere similar; you want PgBouncer to be able to keep Postgres alive across its different nodes. Same thing with Redis: I mentioned the quorum earlier and the idea that the Sentinels are helping there with the failover management. And Gitaly I talked about a little bit, and how it can help you avoid NFS for storage of the Git data. You can also see here that it coordinates the access across the repos, but you still do need object storage of some kind; in AWS it might be S3, and in other places it may be NFS or EBS or some other kind of storage for those artifacts and uploads. Then there's load balancing. I get this question a lot: what do we use for load balancing? On AWS I mentioned ELB, but HAProxy seems to be about the most common one that we see in use today. That is also what GitLab uses on gitlab.com, and we actually have a recipe for that that you can find publicly available. So that's what I'm referencing for the application nodes. And then, of course, from the monitoring perspective, I mentioned that Prometheus and Grafana come available with the bundled GitLab Omnibus package.
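Just so the HAProxy piece isn't abstract, here's a bare-bones sketch of fronting two application nodes on those external ports. The addresses are placeholders, and a real config would add TLS handling, health check tuning, and an arrangement for the load balancer's own SSH port, so treat this as a starting shape rather than a recipe.

```
# haproxy.cfg (sketch, placeholder addresses)
frontend gitlab_web
    bind *:443
    mode tcp
    default_backend gitlab_app_web

frontend gitlab_ssh
    bind *:22                  # assumes the LB host's own sshd listens elsewhere
    mode tcp
    default_backend gitlab_app_ssh

backend gitlab_app_web
    mode tcp
    balance roundrobin
    server app1 10.0.1.10:443 check
    server app2 10.0.1.11:443 check

backend gitlab_app_ssh
    mode tcp
    balance roundrobin
    server app1 10.0.1.10:22 check
    server app2 10.0.1.11:22 check
```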
So from the perspective of what's most important on the page, notice that the application node does not get an asterisk, but Sidekiq, Postgres, the object storage, and the load balancer do. The application node itself typically isn't the first suspect when things go wrong; it's the configuration of the queuing mechanisms, it's the Postgres configuration, it's an object storage limitation or a network limitation associated with balancing the load that becomes the contributing factor in our performance limitations. So those are some things you want to watch closely as you roll something like this out, to make sure those pieces are in place before we start to try to diagnose the GitLab application node itself. Also, from an uptime perspective, if there is a catastrophic event, or if we do want to add capacity, know what the persistent versus the ephemeral components of the GitLab architecture are: persistent meaning we have saved state and user information, ephemeral meaning things that can be recreated at will, where we don't really have to worry about anything persisting within those environments. So, persistence-wise, of course the database is important, the Redis cache is important, and I mentioned user information; that's an important piece here. The Git file system is going to be important, and your Docker registry and all of the uploads are going to be important. Those are all things that are persistent, and we want to make sure we keep them highly available outside of GitLab's own high availability function. So, as I said earlier, if you're using NFS, you want the NFS provider to give you the method to be highly available. The Redis cache system and the Postgres system come with Sentinel and PgBouncer to keep them alive; they need to stay up. Whereas, ephemerally, if I have GitLab components for the front end and back end, or the application server at large, those are things I can recreate and spin back up without data loss, so that's an important piece to note. Same thing when it comes to the services: if I lose a load balancer or have networking issues, those are things we can bring back up without data loss. What does it look like if I wanted to roll HA for maybe up to a thousand users? Well, I mentioned earlier about 14 different nodes that you might need to roll out; this is what those nodes might look like. There is some consolidation potential here depending on how you roll Redis and what you do with your Consul nodes. From an NFS and Gitaly perspective, there are ways to reduce this down a little bit; you might be able to get down to 10 or 12 nodes. But out of the gate, this is what you might find in our documentation. The documentation does in fact have different high availability implementations cited now for 2,000, 5,000, and 10,000 user type environments. And so you can go there and get some information about just how many API calls and Git calls per person per second each of them supports. That's a question we actually heard on another webcast like this one. And there's a lot of concern about, well, how many different people can interact with this thing simultaneously, or what happens if I put automated users on there, think bots or other interactive components that have a high API peppering rate, if you will. And the answer there is, for this particular configuration, you might see something on the order of 10 API requests per second per user.
If you start seeing 20 or 40 API requests per second from your automated users, it may be time to take a look at what you're doing there, because I know we rate limit on gitlab.com accordingly. We just don't want all those automated users that are trying to pull, replicate, import, whatever they're doing, to cause us performance issues, and that is certainly where they'd come from. This here should be adequate support for up to 1,000 users. And notice, if you're going to use Gitaly storage, there's an rsync replica option there to help you keep redundancy on that Gitaly node; by nature it is not highly available, at least not yet. And if you're going to avoid NFS, we do recommend that replica be utilized. The other thing I'll call out real quick here, too, is that you'll notice the requirements on these aren't real high from a compute perspective, except for the application node, and there are only two application nodes in play here, so that is to be expected. So what happens when something does go wrong, or what should we expect to go wrong? The first complaint is, "my GitLab installation is slow"; my GitLab performance is not what I expect it to be. The problems we commonly see there are commits that are too frequent for the configuration that's in place (how often are we committing data?), or large commits (back to the model of multi-gigabyte files constantly moving in and out, or more than that), or we've simply overloaded the system with too many users for a single node. And the first thing to try there is to add GitLab application servers and/or double the RAM, or possibly the compute, on that server. That's a simple first step in overcoming performance issues before we move into a Geo-redundant or a highly available system. So if you find yourself having performance problems and you're under that 300-user mark, we definitely want to look at what the workflow is within your environment before we consider moving you into a highly available, more complex system. Too many pending jobs, or slow CI job execution: if you've ever used GitLab and found that one day all of your CI jobs start backing up, or maybe they don't get picked up for seven or eight minutes, it is oftentimes something to do with the Sidekiq nodes. Okay, it could also be partly the Puma nodes that are part of the GitLab application server, but the Sidekiq nodes are what we've commonly seen. I was on a call for an outage not long ago; it wasn't a full outage, but performance slowed way down, the CI jobs queued way back, and they just would not recover. The thing is, those jobs queue up, and then they continue to queue up, and the first thing an application does is not say, "well, let's wait and see what clears up"; the first thing an application does is try again. So now those queues are being peppered with more and more requests, things back up further, and it just gets worse and spirals downward. So, if you're in a situation like that, we can look at the logs and see what's backing up and what caching mechanisms are causing problems.
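If you want to see that backlog for yourself, you can tail the service logs and peek at the Sidekiq queues from a Rails console on an application node. This is just a read-only sketch; queue names like pipeline_processing vary by GitLab version, so substitute whatever your instance actually runs.

```
# Follow the Sidekiq service logs on a node
sudo gitlab-ctl tail sidekiq

# Open a Rails console on an application (or Sidekiq) node
sudo gitlab-rails console

# Then, inside the console (Ruby):
Sidekiq::Stats.new.enqueued                        # total jobs waiting across all queues
Sidekiq::Queue.all.map { |q| [q.name, q.size] }    # per-queue depth
Sidekiq::Queue.new('pipeline_processing').latency  # seconds the oldest job in that queue has waited
```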
And a lot of times it's because Sidekiq was barely configured enough to handle the load. It had been doing just fine, but we hit that one day where we all committed at once because it was close to release date and we all had to crank our changes in, or we had an urgent set of patches, or whatever that looks like that maxed out the traffic that one time, and then things slowed down. So Sidekiq is the key to that. Now, on the other side of that, if everything is working well on the front and back end of GitLab, we may also have a runner problem. We haven't talked much about runners today, but runners, of course, are the job executors that grab the pending jobs, execute them, and return the status back to the server. If the runners are under-configured, or not set to auto-scale when they run out of capacity, you have the same symptom: you see pending jobs until a runner is available and comes to look for another job. The other thing with runners that's not mentioned here is that we've seen tagged runners set incorrectly: the jobs are working to a tag for a limited set of runners that weren't set to auto-scale. So again, it's a fairly simple diagnostic, but it is something we see from time to time, where the tags are actually limiting the use of the runner pool itself. Last but not least is just downtime and outages. I will say, if you are seeing your GitLab instance go down, or suffer from the need to be rebooted on any regular basis, something is wrong. That's not normal for us at all. GitLab has a really good performance rating overall when it comes to self-hosted instances, and we just don't hear that complaint a lot. So when we do, we certainly want to look for what's going on. If you're in that state, we want to start by reviewing the logs, looking for errors, looking for communication issues, looking for backlogs and pending queues that are causing some of this to occur, where the meltdown begins. We also want to verify the networking and make sure we understand that there's a sufficient path between any of the components that you might have broken out. That includes load balancing, that includes PgBouncer and where that's installed for any remote Postgres installations, and any HA Redis breakouts and those kinds of things. We want to make sure we look at all those pieces very closely before we roll into any other type of diagnostics. Once these kinds of things are proven out, again, we'll go back to the idea of the workflow. What are you trying to do with your instance? What are you trying to manage? What are you trying to import? What are you trying to commit that may be causing some of these problems? Or what does your CI setup look like that could be taxing the system in an extraordinary way? So here are the things we touched on: GitLab Geo and GitLab High Availability. A couple of things we didn't talk much about: one is live upgrade assistance from GitLab Support. If you're using GitLab Premium, don't forget that we have live upgrade assistance with our support team to walk you from one version to the next. That is included with your purchase of GitLab Premium, as is priority support, so your SLA improves over anything else that you might be using. And you can also get 24/7 priority support in case you do fall into that last item I mentioned and have some kind of outage on hand. We hope you don't. Elasticsearch was mentioned on the AWS side of things; that is an integration with the Starter package that's readily available.
And then last but not least, for GitLab Premium users: you get a TAM, a technical account manager. So, provided you meet the minimum requirements for the subscription that you've purchased from GitLab, you will get a technical account manager as part of that package. That person is going to be the one who helps you from a health perspective and an adoption perspective, and just makes sure that you're getting the most out of your GitLab instance every day. That's part of the GitLab Customer Success group that I'm a part of, and I'm always excited to be able to tell people about that. So with that, I want to pause and just see what kind of Q&A we have. Awesome, Joel, thank you. So just a reminder to all of the attendees: you can submit any questions in the Q&A section at the bottom of your screen. Right now, Joel, I only have one question, so I will go ahead and start with that, and if we have any more come in, we'll go from there. The one question that we do have is: how do I get started with HA? Well, getting started with HA, I think that taking a look at our documentation and starting simple are the two things that I can call out. You want to start simple, leveraging GitLab's documentation. Like I said, there's a recommended architecture section of our docs online, publicly available; you can see what it looks like to roll, say, a 2,000-user configuration, how many nodes that might require and their associated sizing, as well as how many API calls per second that particular rollout would support. So I think that's the first thing. And the second thing is also understanding your workflows. Again, look at how work passes through your system on the average day across all of your projects, not just a few. There are going to be some that tax a system more than others, and you'll want to be ready for that as you move into a design phase. Last but not least on this front, when you move into GitLab high availability we always recommend that you consider our Professional Services offerings, because GitLab Professional Services has been helping folks with complex on-premises and cloud deployments for quite some time and has a lot of experience to offer in this area, especially when things move into high availability as well as a Geo-replicated environment. There are some real intricacies there, and that is something that we'd love to help guide you through. Awesome. I don't see any other questions in the chat or in the Q&A, so we will give everybody a little bit of time back on their calendars today. Joel, thank you for the informative session, and for everyone else, thanks for joining us. We will be sending out the recording of this webcast over the next couple of days, so be on the lookout for that. Everybody have a fantastic rest of your day. Thank you.