Now that we know what Genworth was trying to accomplish and why, let's look at how it all came together with Ryan and Muhammad.

Thank you for coming to our GitLab talk. We're going to be talking about our journey from Omnibus GitLab to Kubernetes, what we've learned, and how you can do it yourself. So without further ado, a little bit of introduction. My name is Ryan Heilman. I'm a software engineer at Genworth Financial, as part of a rotational program they have there. You can find me on LinkedIn, or on Twitter at Ron179.

And with that, I'd like to introduce myself as well. I'm Muhammad Malik. I'm also a software engineer at Genworth Financial, and those are my Twitter handle and LinkedIn profile. Please feel free to add me.

Let's get right into it. First off, we're going to start with a brief overview, and I'd like to preface that by noting that our manager, Frank Ford, will be giving a much more in-depth discussion of what necessitates a migration like this. The brief version is that there are a multitude of reasons, many of them inherent to Kubernetes itself: scalability, upgradability, and fault tolerance. With that, let's go straight into the technical aspects.

So, let's get started with the infrastructure of your cluster. You want to ask yourself three imperative questions: Where is the cluster located? How is the cluster's infrastructure configured? And what's deployed in the cluster already?

For the first question, the answer is that it really doesn't matter: Kubernetes is Kubernetes. Whatever cloud provider you're on, there are going to be slight differences and nuances between them, but regardless, Kubernetes is Kubernetes. Second, you want to look at how the cluster's infrastructure is configured: persistent storage, TLS, and what resources are allocated.
Lastly, you want to look at what's already deployed in the cluster: whether GitLab Runner is installed, whether you have object storage via MinIO, and whether Traefik or NGINX is already deployed. Next slide, please.

So with that, you want to take into account how the cluster's infrastructure is configured, by which we mean how the cluster is set up to handle things such as persistent storage and TLS, and what's running in the cluster. As I mentioned before, proper configuration of persistent storage is imperative. This is something that really confused us for a bit, since improper configuration can result in unexpected behavior. For example, it is vitally important to have persistent volumes dynamically provisioned, and to have the scope of an existing MinIO deployment limited to just a single persistent volume. This is standard Kubernetes convention, and as you work with and deploy to a Kubernetes cluster, it becomes quite familiar.

When we began this project, we were working with an immature cluster where the persistent volumes were manually allocated and recycled as needed. Additionally, our MinIO instance was not properly scoped. These factors ended up wiping our GitLab deployment in the cluster, which is obviously not how persistent storage is supposed to behave. It took many painstaking days of debugging and testing to determine that the lack of dynamically provisioned storage was the root cause of all of this. So you don't want to do what we accidentally did, which was delete our object storage at 4:30 on a Friday. That's all right, though; we ended up recovering from it.

Furthermore, let's go into how the TLS infrastructure is set up. This is key to getting GitLab deployed to the cluster. It's more of an issue for on-prem clusters; it can apply to cloud providers as well, but it's more specific to on-prem.
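To make the dynamic provisioning point concrete, here is a minimal sketch of a PersistentVolumeClaim that relies on a dynamically provisioning StorageClass. The claim name and the class name `standard` are placeholders of ours, not something from the GitLab chart; check what your cluster actually offers.

```shell
# Sketch: a PVC that relies on dynamic provisioning. The StorageClass
# name "standard" is an assumption -- run `kubectl get storageclass`
# to see what your cluster provides.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: gitlab-example-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: standard   # must point at a dynamic provisioner
  resources:
    requests:
      storage: 10Gi
EOF
```

With a dynamic provisioner in place, the cluster creates and binds a PersistentVolume automatically. With manually allocated and recycled volumes, as in our early cluster, a reclaimed volume can silently take your data with it.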
The GitLab Helm chart natively supports using cert-manager for certificates, which is also the Kubernetes standard for certificate management, and cert-manager easily connects with a number of signing sources, notably Let's Encrypt and HashiCorp Vault. One tidbit to be aware of: the internal cluster certificates are needed by GitLab's many processes, and TLS may be getting preemptively terminated at your load balancer, so watch out for that.

Finally, you will need to consider what is already in the cluster. The GitLab Helm chart — Helm being a package manager, which we'll cover in a minute — installs a number of dependencies by default. Many of these dependencies are very common and popular in Kubernetes cluster architectures, so there's a good chance you already have them installed if you have a Kubernetes cluster deployed. We'll cover what to do in those cases when we go over the Helm chart.

Could you actually go back a slide, please? One more note on your cluster infrastructure: GitLab recommends eight virtual CPUs and 30 gigabytes of RAM as its minimum requirements. However, we've also seen that GitLab Runner can consume a lot of resources and take over nodes, so you want to make sure you have some extra resources allocated for that. Go ahead to the next slide, please. Thank you.

So, getting started with the prerequisites and the Helm package manager. Next slide, please. Thank you. There are four prerequisites for deploying GitLab into the Kubernetes cluster. First, you want to ensure that you actually have a GitLab instance running: not in a down state, but up, running, and good to go. Second, you want to verify the integrity of all your Git repositories prior to the migration to ensure a full transfer of data.
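The repository integrity check in that second prerequisite can be run on the Omnibus host with GitLab's built-in rake tasks; a sketch:

```shell
# On the Omnibus GitLab host: verify the integrity of all Git
# repositories before migrating (runs git fsck against each repo).
sudo gitlab-rake gitlab:git:fsck

# Also worth confirming the instance is healthy overall before
# taking the backup.
sudo gitlab-rake gitlab:check SANITIZE=true
```

Fixing any corruption these report before the migration is much easier than discovering it after the restore.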
The third prerequisite, which is the most important: you want to make sure that the deployment of GitLab in your cluster is the same version as your Omnibus installation. This is imperative, as the configuration between the two, as well as the one-to-one transfer, will differ if your Helm deployment in the cluster is, say, an upgraded version; that can cause unexpected errors and loss of data. Fourth, object storage must be set up and ready to go for the Helm deployment. The Helm chart can natively deploy MinIO for you, but you can use any external object storage, such as AWS S3 or Google Cloud Storage. Next slide, please.

Now, the actual GitLab Helm chart. Helm is a package manager for Kubernetes, one of the widely known package managers. It allows you to install packages into the cluster — called charts, in keeping with Kubernetes' nautical theme — and configure them with a values.yaml file. You must use the appropriate Helm chart for the version of GitLab you wish to install: the chart's major version is nine behind the GitLab version, so GitLab's newest release, 14.0.0, is associated with Helm chart version 5.0. On the slide we have the specific chart location, as well as the commands to download the chart to your local machine and template it out. Next slide, please.

Here we've included a few screen caps of the Helm chart to briefly go over. In the top left-hand corner, you can see the host domain is example.com, which you can set to your own URL endpoint. HTTPS is enabled, which you'd want, because in practice many developers use HTTPS rather than SSH when pulling and cloning repositories.
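The chart-download commands referenced on the slide look roughly like this; the chart version 5.0.0 matches GitLab 14.0.0 per the nine-off versioning rule above:

```shell
# Add the official GitLab chart repository and refresh the index.
helm repo add gitlab https://charts.gitlab.io/
helm repo update

# Pull the chart matching your GitLab version (chart 5.x <-> GitLab 14.x).
helm pull gitlab/gitlab --version 5.0.0 --untar

# Render the templates locally to inspect what would be deployed.
helm template gitlab ./gitlab --values values.yaml > rendered.yaml
```

Templating locally is a cheap way to sanity-check a values file before anything touches the cluster.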
So you want to ensure that it's enabled, to allow full functionality there. For Ingress, you can use NGINX, which is the Kubernetes default, but Traefik also works, as it's widely used. Below that you have MinIO, where you can enable and install MinIO through the Helm chart itself if you don't have object storage previously set up; you can also specify the credentials here. There are additional services, such as Grafana, a metrics dashboard that comes with the Helm chart, which lets you watch the CPU and memory usage of your cluster. You can also granularly configure object storage and LFS buckets with their own credentials and their own services. Lastly, you can have cert-manager, which I mentioned with Let's Encrypt, deployed via the Helm chart. As you can see here, though, it's commented out, so you can instead install it yourself and connect the chart to your own instance of cert-manager. And with that, I'll pass it off to Ryan to get further into the migration.

All right. Thank you, Muhammad. So let's talk about actually getting this installed in Kubernetes. Muhammad walked you through a bit of the Helm chart just now, and how you can configure and tweak different aspects of it to get the version of GitLab that you want to be running and that works for you. But now we actually want to get this thing into the cluster and work with it. Once you have your values file configured and all set up, all you have to do is run a simple helm install command: pass it that file, give it a name, and tell it where the main chart files are. It will do the rest, using your kubeconfig file to install it into the cluster for you. If you have a preinstalled runner, you're going to need to acquire the runner registration token and give it to the runner.
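As a sketch, a minimal values file and install command might look like the following; the release name `gitlab`, the domain, and the email are placeholders for your own values, and a real deployment will set the storage, object storage, Ingress, and TLS options discussed above.

```shell
# Minimal values file (sketch only).
cat > values.yaml <<'EOF'
global:
  hosts:
    domain: example.com        # replace with your own domain
  ingress:
    configureCertmanager: true # let the chart wire up cert-manager
certmanager-issuer:
  email: admin@example.com     # Let's Encrypt registration email
EOF

# Install the chart into the cluster using your current kubeconfig.
helm install gitlab gitlab/gitlab \
  --version 5.0.0 \
  --timeout 600s \
  -f values.yaml
```

Every later `helm upgrade` or reinstall takes the same values file, which is why you end up knowing it so well.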
That way it can run those CI/CD pipelines, and you can test that out. Now, you're going to be reinstalling GitLab many times; you're not going to just install it once and be done. Likely you'll be tweaking the values file, installing it, seeing what works, taking it down, changing a couple of other things, and reinstalling. So you're going to become very familiar with this process.

Now let's get to the nitty-gritty details of doing the migration. We finally made it: we've installed GitLab and have it running and working in the cluster. Now we need to get all of our existing data onto it. Right now it's just a blank GitLab, which isn't very useful compared to our old instance with all of our repositories.

The first thing we need to do is migrate existing files to object storage. We're going to hook up our Omnibus GitLab instance to the object storage used by the Kubernetes GitLab we just installed. So you're going to go into your gitlab.rb configuration file and set up this object storage connection. This example is specifically for uploads, but you're going to want to do this for everything you have: uploads, artifacts, LFS. Set it up with some sort of object storage; in this example it's AWS S3, but it could be MinIO's S3-compatible storage or another object storage system. Then you run a gitlab-ctl reconfigure command, followed by a gitlab-rake migration task for each of the object types you're migrating — this one is specifically for uploads, but like I said, you'll do it for artifacts, LFS, and so on.

Next, we need to do a backup of the Omnibus installation. We're not really doing a migration in the usual sense of the word; we're actually doing more of a clone and recovery onto the new Kubernetes GitLab that we just installed. The first part of that is creating a backup tarball.
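The object storage hookup for uploads described above looks roughly like this on the Omnibus side; the bucket name, region, and credentials are placeholders, and equivalent `artifacts_*` and `lfs_*` settings exist for the other object types.

```shell
# On the Omnibus host: point uploads at S3-compatible object storage
# (sketch; bucket, region, and credentials are placeholders).
cat >> /etc/gitlab/gitlab.rb <<'EOF'
gitlab_rails['uploads_object_store_enabled'] = true
gitlab_rails['uploads_object_store_remote_directory'] = 'gitlab-uploads'
gitlab_rails['uploads_object_store_connection'] = {
  'provider' => 'AWS',
  'region'   => 'us-east-1',
  'aws_access_key_id'     => 'REPLACE_ME',
  'aws_secret_access_key' => 'REPLACE_ME'
}
EOF

sudo gitlab-ctl reconfigure

# Migrate each object type into object storage, one task per type.
sudo gitlab-rake "gitlab:uploads:migrate:all"
sudo gitlab-rake "gitlab:artifacts:migrate"
sudo gitlab-rake "gitlab:lfs:migrate"
```

For MinIO or another S3-compatible store, the same connection hash takes an `endpoint` pointing at that service instead.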
We just run a simple gitlab-backup command, and we leave out the artifacts, LFS, and uploads, because we already hooked those up to our object storage. The tarball needs to use the naming convention <timestamp>_<version>_gitlab_backup.tar. The timestamp and version become important in a future step, so keep that in mind. We then place that backup tarball into the gitlab-backups bucket in the object storage service connected to our Kubernetes instance of GitLab. Alternatively, you can host it at a public URL that can be accessed from the task runner, or you can take a local copy, place it into the task runner pod, and run the restore on the pod itself. But we'll go with the default method of using the gitlab-backups bucket for the rest of this explanation.

Next, we move on to restoring the secrets. We need to update them to what they were in our Omnibus installation, instead of the generic secrets generated when we installed the Helm chart. First, we create a local YAML file using the values found in the gitlab-secrets.json file of our Omnibus installation. To turn that into a Kubernetes secret, we use this format here and just fill in the values from gitlab-secrets.json. Next, we delete the existing secret using a kubectl delete secret command — very simple — and recreate a secret with the same name as the one we just deleted, passing in the local YAML file we created. Then all we need to do is restart the pods so they use the new secret. We can do this by just deleting them; the deployments are still up, so the pods come right back with the new secrets ready to go.

Moving on, we can actually restore from that tarball, which is a pretty simple process.
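A sketch of the backup and secret-swap steps. The secret name `gitlab-rails-secret` assumes a chart release named `gitlab`, the pod labels assume chart 5.x naming, and `secrets.yaml` is the local YAML file described above; adjust all of these for your deployment.

```shell
# On the Omnibus host: create the backup tarball, skipping the data
# we already moved to object storage.
sudo gitlab-backup create SKIP=uploads,artifacts,lfs

# Swap in the rails secrets built from the Omnibus gitlab-secrets.json.
kubectl delete secret gitlab-rails-secret
kubectl create secret generic gitlab-rails-secret \
  --from-file=secrets.yml=secrets.yaml

# Restart the Rails-based pods so they pick up the restored secrets;
# the deployments recreate them automatically.
kubectl delete pods -l app=webservice
kubectl delete pods -l app=sidekiq
kubectl delete pods -l app=task-runner
```

Getting the secrets right before restoring matters: the database backup is encrypted against the Omnibus secrets, so the chart's generated ones cannot decrypt it.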
We can just use a kubectl exec command on the task runner pod and give it the timestamp-and-version portion of our tarball's name — that's why it was important. We reference it here, and the task runner goes and looks in that bucket and fetches the matching tarball. If you're restoring from a URL, as we mentioned was possible earlier, then instead of passing -t with the timestamp and version, you pass -f with the URL.

The restoration process will erase all the existing database contents and replace them with the contents of the tarball. This can take varying amounts of time depending on how big your GitLab instance is. Once it's up and running, you'll need to get the new runner registration token from your recovered version of GitLab, because the old one will have been overwritten by the backup; grab it and update your runner again, like you probably did during testing. Finally, one last thing: we need to run a kubectl exec command on the task runner pod to enable some Kubernetes-specific features. Since we're coming from Omnibus to Kubernetes, GitLab has some special Kubernetes features that we need to turn on.

That's it. You've done it. Congratulations: your GitLab instance has been migrated to Kubernetes, and it's all ready to go. All that's left is to point your DNS entry at the Kubernetes instance instead of your old Omnibus instance. You can get rid of that old thing; who needs it anymore anyway? We're all ready to rock and roll with the new Kubernetes instance. Well done, guys. Thank you. Thank you for listening to our talk; we should be around for a bit of Q&A. Good luck with your migration, and have fun with the rest of GitLab Commit, guys.
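The restore step sketched as commands; the `<timestamp>_<version>` placeholder is the prefix of your tarball's name, and the pod label assumes chart 5.x naming.

```shell
# Find the task runner pod.
pod=$(kubectl get pods -l app=task-runner -o name | head -n1)

# Restore from the gitlab-backups bucket by timestamp and version...
kubectl exec -it "$pod" -- backup-utility --restore -t "<timestamp>_<version>"

# ...or restore from a publicly accessible URL instead:
# kubectl exec -it "$pod" -- backup-utility --restore -f https://example.com/backup.tar
```

Because this wipes and replaces the database, it is worth double-checking that the secrets swap from the previous step is done before running it.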