The GitLab platform is designed to be simple. You can consume it as SaaS or deploy it almost anywhere self-managed, and you can support just about any DevOps workflow with very little effort. It is designed to provide maximum value with as little configuration as possible. But sometimes the standard Omnibus installer might not be the right choice for you. You might have business, compliance, or technical reasons that require a little bit more configuration. Genworth Financial is one of those companies, and for various reasons they chose to run GitLab in containers orchestrated in Kubernetes. In the first of two back-to-back talks, Frank Ford will explain why Genworth made the architectural decisions they did. And following that, in the second talk, Ryan Heilman and Mohammed Malik will walk you through the technical details. They'll talk about problems they encountered, solutions they came up with, and lessons learned. Let's jump in. Hello, everyone. I hope you're enjoying Commit so far. Thanks for attending this session centered around migrating and managing GitLab on Kubernetes. I'm Frank Ford, a senior IT architecture design manager for Genworth. My GitLab and Twitter handles are on the screen right now. A little bit about me. I've been at Genworth for 15 years. I'm a graduate of Virginia Tech, Go Hokies, with a degree in computer science. And when I'm not doing something technology related, I'm typically at a Virginia Tech football game in the fall months or at a race track. I do a lot of motorsports, particularly NASCAR and IMSA, so oval track and sports cars on road courses. Or I'm doing something around the farm. I live on a rather remote active farm, and so if I'm not doing something tech related, I'm probably outside working on something. And a little side note here: because it is rather remote and rather rural, getting good internet out there is somewhat of a challenge, but I have worked through some solutions for that.
So message me after if you're interested. A little bit about Genworth. Genworth is a financial services company, currently active in the long-term care and mortgage insurance markets. It was founded as the Life Insurance Company of Virginia in 1871. We have primary office locations in Richmond, Virginia, which is our headquarters; Lynchburg, Virginia; and Raleigh, North Carolina. Lynchburg is our primary customer service facility and also has a lot of IT, and Raleigh is where our mortgage insurance business is located. Going over today's session: we're going to discuss the handful of installation methods that GitLab supports, touch on the Kubernetes requirements to run GitLab on Kubernetes, discuss some migration strategies, and also discuss some post-migration maintenance activities. So as mentioned on the previous slide, GitLab does support a handful of installation methods, native Linux packages being the most straightforward and probably the easiest, with the fewest barriers to entry. Although some would argue that depending on how mature your organization is with containerization and the supporting processes, the container route could actually be a little bit easier. GitLab does support these native Linux packages on the three major cloud providers. GitLab also publishes an official Helm chart for deployments to Kubernetes. And if you have a special need or requirement, you can still compile from source, but we're not going to cover that today. And of course, you also have the option of using GitLab.com, and that may actually be the absolute easiest way to play around with GitLab and familiarize yourself with the product. But depending on your organization's stance on putting source code and deployment processes on a software-as-a-service provider, that may not be an option for you outside of personal use.
For this presentation, though, we're going to focus on on-prem installation methods, or installation methods for cloud providers in an infrastructure-as-a-service context. So let's talk about native Linux packages. Native Linux packages are also referred to as the Omnibus package or the Omnibus install. This is the installation path officially recommended by GitLab for the majority of their customers that need an on-prem installation, or that just want to play around with GitLab on their local machine. It's a very good place to start for folks who are familiarizing themselves with GitLab, and it also helps introduce some of the back-end pieces that GitLab has without actually having to explicitly manage them. If you know people that are interested in GitLab and GitLab.com is not an option, this is a great place to get their feet wet with the product, with a full DevOps platform, and with CI/CD. It can run on existing bare metal, or you can provision a new VM, or you can leverage an already-provisioned VM. It also allows for growth by providing some more advanced configuration options. You have the ability to set up high availability. You can leverage GitLab Geo and have copies of your GitLab instance in different physical locations, which is great for distributed teams or teams that work in different remote parts of the country or around the world, and we'll touch on that a little bit more later in the session. And as for using external object stores and databases: once your comfort level increases with the product, you can get out of using the GitLab-managed components, and that may ease some things a little bit later. So single-node default Omnibus installs work just fine for small teams, and maybe large teams too, depending on how much bandwidth you have.
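As a point of reference, the Omnibus install itself really is only a couple of commands. Here's a minimal sketch assuming a Debian-based host; the hostname is a placeholder, and the commands are printed rather than executed so the sketch stays side-effect-free:

```shell
# Sketch of a native-package (Omnibus) install on a Debian-based host.
# EXTERNAL_URL is a placeholder; replace it with your own hostname.
EXTERNAL_URL="https://gitlab.example.com"

# Step 1: add GitLab's official package repository.
REPO_CMD='curl -s https://packages.gitlab.com/install/repositories/gitlab/gitlab-ee/script.deb.sh | sudo bash'

# Step 2: install the package; Omnibus reads EXTERNAL_URL during its
# initial configuration run.
INSTALL_CMD="sudo EXTERNAL_URL=${EXTERNAL_URL} apt-get install -y gitlab-ee"

echo "$REPO_CMD"
echo "$INSTALL_CMD"
```

On Red Hat-based distributions the same flow applies with the `script.rpm.sh` repository script and `yum`/`dnf` in place of `apt-get`.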
If you're using the defaults, upgrades are extremely easy to automate. They can be automated with a simple script, and you can run them at a specified time with basically any scheduler. Also, as GitLab adoption increases and GitLab grows within your organization, typically the processes that rely on GitLab also grow, and there may be some mission-critical stuff that gets embedded in the application. So for example, let's say you set a company policy that only GitLab CI/CD pipelines will deploy to production, which means you can't have GitLab down for an extended period of time, just in case there's an emergency break-fix that needs to be deployed. Once you start putting mission-critical stuff out there, you need some type of HA, where the application is somewhat resilient, and you need some type of DR scenario. Now, these native Linux packages can also be deployed on VMs provisioned in cloud providers. GitLab officially supports using these native Linux packages on AWS, GCP, and Azure. This offers a number of advantages: in the cloud, provisioning is typically faster because it aligns itself more with self-service type processes, though how much faster depends on your organization's cloud deployment processes and procedures. You can also be more in control of the resources that are being used, and by more control I mean you can pick the size of the VM you're using and the resources being provisioned in the cloud provider. For example, if you're going bare metal, your on-prem data center will typically have a standard machine spec that they buy regardless of your need, so you'd likely have to over-buy relative to the resources you actually use, and sometimes they apply the same methodology to VM provisioning, with t-shirt sizes instead of provisioning based on requirements.
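Circling back to the scripted-upgrade idea for a moment, here's a hypothetical sketch of what that automation might look like. The cron schedule, script path, and log path are illustrative assumptions, not anything Genworth-specific, and the script prints the commands a real run would execute so the sketch stays side-effect-free:

```shell
# Hypothetical auto-upgrade routine for a default Omnibus install on a
# Debian-based host; a real script would execute these commands directly.

upgrade_cmds() {
    # Print the two steps of a default-configuration upgrade:
    # refresh package metadata, then pull the latest gitlab-ee package.
    echo "apt-get update -q"
    echo "apt-get install -y gitlab-ee"
}

# A crontab entry like this (placeholder paths) would run it every
# Sunday at 02:00:
#   0 2 * * 0 /usr/local/bin/gitlab-auto-upgrade.sh >> /var/log/gitlab-upgrade.log 2>&1
upgrade_cmds
```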
Cloud lets you provision exactly what you need and then expand it later if necessary. But VMs are kind of boring. VMs in the cloud are a little bit more exciting than VMs on-prem, but in the end they're still VMs, and we're all tech people and we like to live a little bit closer to the edge, or at least I do. So I think I'd rather run this in containers. GitLab is also somewhat of a complex application with a lot of moving parts and pieces, so we would need some type of robust container environment to manage it. Enter Kubernetes. Kubernetes is a container orchestration platform; in other words, it handles the full container lifecycle and manages the resources needed to support those containers and, ultimately, the application that's deployed. It has very wide adoption and is widely considered the standard for container orchestration. Because of this, a lot of software vendors are making sure that their applications are either containerized or built using containers and can run on Kubernetes. So the first order of business is to pick a Kubernetes distribution, and there are a ton out there. The major cloud providers have implementations that can be leveraged: Google has Google Kubernetes Engine, or GKE; Amazon has the Elastic Kubernetes Service, or EKS; and Microsoft Azure has the Azure Kubernetes Service, or AKS. If your organization won't let you provision a cluster in the cloud, on-prem distributions are readily available, such as Rancher RKE or K3s. Or if you want to get a little bit more in depth and test your knowledge, or grow a little in terms of learning the underpinnings of Kubernetes, you could bootstrap your own cluster using kubeadm. The recommended minimum cluster size to run GitLab is 8 vCPU and 30 gig of memory.
There are some examples on the GitLab website that are mainly intended for when you're just playing around with GitLab, doing some R&D, or spinning up a dev instance, where you can get away with smaller amounts of resources. So there's a 3 vCPU, 12 gig of memory example on the GitLab website, as well as a 4 vCPU, 4 gig of memory example for Minikube that's intended to be run on a local machine. Now that we have a cluster, how do we actually go about installing GitLab on Kubernetes? This is where Helm comes into play. GitLab officially supports and distributes a Helm chart for installing GitLab on a Kubernetes cluster. The purpose of Helm is to simplify the deployment of the entire application and its supporting resources into a Kubernetes cluster. Think of Helm as kind of the package manager for Kubernetes: it's analogous to apt-get for Debian, YUM or DNF for Red Hat-based distributions, or, for you Gentoo guys, emerge. GitLab, for example, has the application itself, a Postgres database, MinIO object storage, and a handful of other components that all get deployed individually to make GitLab work on Kubernetes, and Helm helps you manage all of these different components together. Helm uses what are called charts. A chart is the file that describes how these components are going to be deployed. It's very similar to the native Linux packages that we discussed earlier, in that it's more or less one command and you're done. I say more or less because there's a little bit of setup required. So to install GitLab via the Helm chart, the very first thing you need to do is install Helm. Helm is installed locally. Once you have Helm installed, you can use the first command listed up here on the screen to add the chart repository to Helm; this tells Helm where to go to get the chart and download it. Then you can use the second command to instruct Helm to install GitLab on the cluster.
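Since the slide isn't visible in this text, the two commands look roughly like this. The domain and email values are placeholders you'd replace, and the commands are shown as strings rather than executed, since running them requires a live cluster:

```shell
# First command: register GitLab's chart repository with Helm.
ADD_REPO='helm repo add gitlab https://charts.gitlab.io/'

# Second command: install the chart. The --set values here (base domain
# for the generated hostnames, cert-manager issuer email) are placeholder
# examples, not a production-ready configuration.
INSTALL='helm install gitlab gitlab/gitlab --set global.hosts.domain=example.com --set certmanager-issuer.email=admin@example.com'

echo "$ADD_REPO"
echo "$INSTALL"
```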
Just make sure that you replace the variables in here with ones that are appropriate for your installation. Helm and Kubernetes will then churn for a while, and when they're done, the application should be installed, configured, and available for use. So that's it. We're done, right? We just installed GitLab on Kubernetes via the Helm chart. Presentation's over. Are there any questions? Well, since this is virtual, we can't actually interact via this video. But actually, the base Helm chart is not a recommended production setup, for a number of reasons that are listed on the GitLab website. Before running helm install, you should review the documentation at the link that's in the presentation right now. It's going to walk you through a number of considerations that need to be reviewed. The base Helm chart is really intended for R&D purposes. So before you run helm install with the defaults and potentially back yourself into a corner, please review that documentation. You'll need to make appropriate decisions for your installation, and those decisions, considerations, and adjustments are going to be unique per organization. There is no one-size-fits-all solution here. So now we've covered how to install GitLab via the Helm chart onto your Kubernetes cluster, and we've tailored it for your organization. Let's assume for a minute that you're migrating from other infrastructure and didn't use an external database or pluggable object storage, so we have to migrate. The bundled components that come with the Omnibus installation make life a lot easier when it comes to installing GitLab and maintaining it post-installation. However, they can pose a problem when migrating infrastructure. External databases and object stores make life a lot easier when it comes to this migration. Depending on your situation, you may have to manually migrate certain things, especially if the version is changing.
And when I say manually migrate, I mean manually executing some gitlab-rake commands to actually perform some migration operations. The internal Postgres database that comes with an Omnibus install can be a particular challenge. GitLab has released official documentation, at the link that's on the screen, centered around the process of migrating from the internal Postgres database to an external database and external object storage, so that your Helm-based deployment can use those. You, of course, have the option to brute-force copy data over to your new instance, and GitLab offers the ability to export and import individual projects if you're doing a targeted migration rather than a mass migration. This method will likely result in significant downtime in order to avoid drift while you copy your data, so significant coordination with your stakeholders will be required. And of course, the disclaimer about manual migration tasks and running gitlab-rake manually still applies, especially if your version is changing. Now, this next one is a little bit more advanced. I mentioned earlier that we were going to talk about GitLab Geo later in the presentation. This migration technique, like I said, is slightly more advanced and relies on the GitLab Geo functionality, specifically the ability to promote a secondary node to be a primary node. In a nutshell, you set up Geo replication and let the GitLab instance sync to the secondary node. Then you work with your stakeholders to pick a convenient time to complete the migration. The migration is completed by promoting the secondary node, which in this case could be a Kubernetes cluster, to become the primary node and making the appropriate DNS changes. This is not a zero-downtime migration, but if done properly, it can significantly lessen the amount of time that the system is down, and it also minimizes the risk of drift.
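Going back to the brute-force copy option for a moment, that path usually leans on GitLab's backup tooling. A rough sketch of the commands involved, shown as strings rather than executed; the pod name and backup timestamp are placeholders, and you should confirm the exact invocations against the documentation for your GitLab version:

```shell
# On the old Omnibus instance: create a backup tarball.
# (Older GitLab versions use: sudo gitlab-rake gitlab:backup:create)
BACKUP_CMD='sudo gitlab-backup create'

# The tarball does NOT include the secrets file; copy it separately.
SECRETS_FILE='/etc/gitlab/gitlab-secrets.json'

# On the Kubernetes side: the chart ships a toolbox pod whose
# backup-utility can restore a tarball from configured object storage.
# Pod name and timestamp below are placeholders.
RESTORE_CMD='kubectl exec <toolbox-pod> -- backup-utility --restore -t <timestamp>'

echo "$BACKUP_CMD"
echo "$RESTORE_CMD"
```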
One caveat with the Geo approach: there can be some issues if the versions you're running don't match. So if you're trying to do a migration and an upgrade at the same time, there could be some issues; keep an eye out for that. Also, this is not officially supported by GitLab, but it has been vetted by GitLab technical resources. So it's still use-at-your-own-risk, but the link on the screen points to some documentation that alludes to this method as a way to accomplish a migration. So we've installed GitLab on Kubernetes, we've migrated our data over, and folks are using it. There's a new feature that's been released and we want to take advantage of it, so we need to upgrade. With the Helm-based deployment, we can upgrade in two-ish steps. I say two-ish because, again, your situation may differ slightly: is there a proxy involved? Is there a jump box involved? But I'm just going to show the happy path here. Running the first command extracts the set arguments that were passed when GitLab was installed. Then you feed that output to the second command, and Helm will upgrade GitLab using the arguments that were passed when the installation was first performed. Once you execute the second command, Helm and Kubernetes will take over and perform the upgrade. Well, hey, that upgrade process looks fairly simple and, more importantly, repeatable. So can we automate this? Well, first, you have to think as an organization and determine: are automated updates right for you? Depending on the frequency you choose, you'll either be living on the bleeding edge or very close to it. And there may be some things that slip through the cracks from time to time that ultimately cause some problems. GitLab does a very good job at preventing that, but it does happen. So your organization needs to be okay with accepting that risk.
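Backing up a step, the two-ish upgrade commands from the slide look roughly like this. The release name gitlab is an assumption about how the chart was installed, and the commands are shown as strings rather than executed, since they need a live cluster:

```shell
# Step 1: extract the values the release was originally installed with,
# saved to a local file.
GET_VALUES='helm get values gitlab > gitlab.yaml'

# Step 2: feed those saved values back in while upgrading to the latest
# chart, so your original configuration carries forward.
UPGRADE='helm upgrade gitlab gitlab/gitlab -f gitlab.yaml'

echo "$GET_VALUES"
echo "$UPGRADE"
```

You'd typically also run helm repo update first so the newest chart version is available locally.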
Now, let's say your organization is okay with that and wants to pursue automated updates. Automation could be accomplished by a simple script: it's simple, it's easy, it can be scheduled via a number of methods, and while it's not flashy, it does work. If you have an automation framework like Ansible or something similar, that will work too. There are Kubernetes operators that will help aid in Helm deployment upgrades. And if your organization is a bit more mature with its deployment practices and is leveraging GitOps for managing Kubernetes deployments, something like Flux CD or Argo CD could also handle this. All right, let's put a bow on all of this. The business value proposition here is that, depending on your deployment target, whether it's on-prem or a cloud provider, you can help meet your organization's cloud migration objectives, or at the very least have your CI/CD platform deployed in a cloud-native type manner. This should ease the transition to the cloud when the time is right, and you can start leveraging cloud-native principles and working through some of the supporting infrastructure that goes along with that: things like automatic certificate generation and rotation, secrets management, object storage, how you're going to handle databases, various things like that that you have to think about as you move to a more cloud-native type architecture. It helps achieve greater workload density, so you make better use of the resources inside your data center. It can potentially provide a larger pool of resources for your DevOps platform to use without having to over-provision, because you're talking about shared infrastructure. And the DevOps platform starts to be managed like the rest of the applications you have that have been through an app modernization exercise or been rebuilt natively using containers.
Migrating an Omnibus install to Kubernetes is not a trivial process, but it's also not as daunting as it may seem. Don't be scared or intimidated. If there's one takeaway from this entire presentation that I want you to have, it's this: please don't be scared or intimidated; you can do this. And that applies to more than just GitLab on Kubernetes. Don't be scared in general. Just try things. Don't fall victim to the push to move things to the cloud, or hybrid cloud, as quickly as possible, and know that your organization may be pushing you to move things to the cloud just to say you're in the cloud. Be strategic in your migrations. Certain things make sense to move to the cloud; other things don't, and those situations need to be evaluated. Ideally, this is not your introduction to Kubernetes; ideally, your organization has mature infrastructure processes and a support organization able to handle an effort like this. But if you are just getting started, a single-node deployment would be a great place to POC or familiarize yourself with the product and CI/CD processes, or just start to develop processes to support this infrastructure. And this is the end, for real this time. It's not a joke this time, I promise. Thank you for watching, and I hope you enjoy the rest of Commit.