When you build on a DevOps platform instead of a DIY toolchain, the entire company benefits, even if you start with just one team. The work that team does starts to create and store value that other teams and other roles in the company can leverage as soon as they adopt the platform. When you have that end-to-end visibility, everyone makes everyone better. And as more people and more teams adopt the platform and start to collaborate and contribute, that can fundamentally change the way that you work. And that really is the point. It's not about the tools. It's about transformation, whatever transformation means to you. In our first talk, Sean Corkham will discuss what transformation looks like at Northwestern Mutual: how they've gone from a small engineering team to a much larger adoption of the GitLab platform, and what that means for them as they develop software going forward. Let's listen. Hi, everyone. I'm Sean Corkham, the assistant director of engineering responsible for the CI/CD platform team here at Northwestern Mutual. I'm here to talk to you today about Northwestern Mutual's digital transformation, and in particular, the role that GitLab's played in it for us. Now, obviously, digital transformation has been a popular topic for the last couple of years, and it's something most all of us are doing and are at some varying stage of. The real question of the day, though: did we at NM set out to have GitLab play a foundational role in our transformation? Well, as much as GitLab would probably love for me to say absolutely, that just wouldn't be true. In fact, when we started, GitLab wasn't even our only SCM, and GitLab CI was still being developed as a community project. But I think, truthfully, that's part of what makes our story, and in particular GitLab's part in it, all the more compelling. It came together organically.
Like I'm sure it is or was with all of you, at the very beginning of our transformation there were some pretty obvious goals that we knew we wanted to accomplish, some examples of which were to implement modern development practices and architectures, along with widespread adoption of CI/CD. We wanted to improve cycle time and deliver features faster while increasing the stability of our apps and platform, so that we could continue to deliver the best for our customers. Great goal, right? The real questions, though, are around how to accomplish that. Well, being able to do some of that is made easier when you're leveraging cloud products and services. Here at Northwestern Mutual, we ended up choosing AWS as our public cloud service. Along with that, we purchased a fintech startup called LearnVest, which was already heavily experienced with AWS. Here in Milwaukee, though, a small group of engineers were assembled to start working in AWS and begin integrating NM services with LearnVest products. This newly formed team had the directive, and mission, really, of creating modern apps with modern architectures via modern practices. Starting to sound a little familiar, right? So with that in mind, this group needed their own SCM and build tools for this brand new AWS environment they created, as not everything we were currently using got us to where we wanted to go. GitLab was the obvious selection. In no small part, one of the big draws we had to GitLab was that it was an open source product. If we needed a fix, we could just make it and submit the merge request, which admittedly was even easier back then; as I've been told, there were only about 10 employees at GitLab when we started using their products. It was also a cloud native product, and GitLab was already pushing a dockerized version of the app to Docker Hub. We'll touch on that a little bit more in a minute.
So at the time, deploying GitLab into the cloud had a much lower barrier to entry, which allowed us to get up and running right away. Along with everything else, we wanted to take an everything-as-code approach. The benefits of everything as code are pretty obvious. You're always able to recreate your environments, apps, etc. to the exact state they were in at that commit. Config drift starts to become a thing of the past, and you start rehydrating rather than patching, and we all know patching is one of the most nerve-wracking things you can do. Even more so when the possibility of config drift comes into play and your lower environments may not be set up the same as prod anymore. Now, this small concept of everything as code has really evolved into a mantra, particularly within my team, but also the enterprise as a whole. We try to have everything done as code. We've found that it's helped us improve the stability and reliability of our products. Having the confidence to know that everything matches between production and your lower environments, including your local development environment in most cases, really helps you root out issues before they become a problem, so that you don't have that "oh no" kind of moment when you deploy to production. And even if there is an issue that needs investigating, you can hammer away at it in your test environments or even locally, because you know they're set up exactly the same. So in support of all this, we started leveraging a mix of tools, both old and new, or at least newer at the time. We were all in on Docker and Kubernetes, and with the everything-as-code mantra, some of the engineers created a very clever way to do a sort of GitOps, long before GitOps was even a thing. At a high level, it leveraged Ansible to pull from a repo that was considered the source of truth for an environment. That's kind of the million-foot view, but you get the idea. We even ran GitLab in Kubernetes for a while.
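The pull-based pattern described here, a Git repo as the single source of truth with a tool reconciling the environment against it, can be sketched roughly as follows. This is a minimal illustration of the general technique, not Northwestern Mutual's actual Ansible implementation, and the state format is hypothetical:

```python
# Minimal sketch of a pull-based GitOps reconciler: the desired state is
# read from a source-of-truth repo at a given commit, diffed against what
# the environment is actually running, and only the differences applied.
# (Illustration only; NM used Ansible for the apply step, and this
# service-name -> version mapping is a hypothetical state format.)

def diff_states(desired: dict, actual: dict) -> list:
    """Return sorted (action, name) pairs needed to make actual match desired."""
    actions = []
    for name, version in desired.items():
        if name not in actual:
            actions.append(("deploy", name))
        elif actual[name] != version:
            actions.append(("update", name))
    for name in actual:
        if name not in desired:
            actions.append(("remove", name))
    return sorted(actions)

# Desired state, as read from the source-of-truth repo:
desired = {"api": "1.4.0", "web": "2.1.3", "worker": "0.9.1"}
# What the environment is actually running right now:
actual = {"api": "1.3.9", "web": "2.1.3", "legacy-cron": "0.1.0"}

for action, name in diff_states(desired, actual):
    print(f"{action}: {name}")
```

Because the repo is the source of truth, rerunning the reconciler is idempotent: once the environment matches the commit, the diff is empty and nothing is applied.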
In fact, my first presentation, at the inaugural GitLab Commit, was about just that. Though, word of warning to any of you who might go back to try to find that talk: I do not recommend implementing it the way I did and described back then. It was all done long before GitLab officially supported deploying to Kubernetes in an HA configuration, which was what I was trying to achieve. I would highly recommend checking out GitLab's Helm charts if you want to go the Kubernetes route. They are light years better than what I got working all those years ago. All right, so while using Ansible as our deploy tool has been consistent, other parts and tools were in a much more fluid state, particularly our build tool. That is, until GitLab CI officially got brought into the fold and was no longer just a community project. Pretty much immediately, it was requested of us to provision some GitLab runners. And once they were live, oh man, the developers absolutely loved it. Ironically, the day before that ask to stand up those runners, we had just finished migrating everyone from TeamCity to Jenkins. And no, that's not a joke. In fact, it may actually have been the same afternoon that we finished that work. But there was tons of enthusiasm for GitLab CI. The developers loved how flexible it was and that it gave them the freedom to do whatever they needed to do to get their job done, without having to be dependent on another team to make the changes and updates that they needed. Our team loved how easy the runners were to maintain. There was nothing really for us to do other than upgrade the version when we upgrade GitLab. We didn't have to worry about a suite of third-party plugins anymore and ensuring that they were compatible with the new version. We also didn't have to worry about double-checking them all for CVEs, or security coming and tapping us on the shoulder because they found one.
So naturally, we transitioned to solely using GitLab CI and shut down our Jenkins instance. And even though we had put in all that hard work and gotten everything stood up following the everything-as-code mantra, we weren't heartbroken over moving away from it either. Now, that's not to say all Jenkins instances were shut down across Northwestern Mutual, just our small group's at the time. So now, with the floodgates open for this easy-to-use-and-configure CI pipeline, some developers started looking for ways to solve other pain points in the CI process. Now, I want to stop for just a quick sec, and I feel I should mention that at this point, there were only two engineers administering GitLab and building out and maturing the other build tools in this AWS environment, all while trying to support the day-to-day activities of about 500 developers. I mention that to help give a little context as to why the developers were helping automate the other pain points of the CI/CD process. But really, it actually also helps illustrate a great point, and this is something that we still do today. Because of how easy GitLab CI is to use, we're able to inner-source updates and new features for pipelines and automation, the same way you do for internally developed software packages. We take those same fundamentals and apply them to CI/CD. I know it kind of sounds like a small and rather trivial thing, but it's one of those things that allow you to get just 1% better every day. And over time, that really adds up to enormous improvements. So, getting back to automating other pain points: we all know that Ansible is really good at some things, particularly configuration management and day-two-ops-type tasks. It's not really what we were looking for from an infra-provisioning standpoint, though. So, with the flexibility to iterate quickly with GitLab CI and Docker, someone went off on their own to find a solution.
That's when, at Northwestern Mutual, the love story of GitLab CI and Terraform began. I'm sure many of you here use Terraform or are at the very least familiar with it, so I'm not going to go through all the positives that come from it, but you can see how it obviously ties into our theme of everything as code. So here we are, things are starting to hum along in our AWS space. All the benefits of everything as code were proving themselves to be true. And the prospect of what could be, by expanding our offerings to the rest of the enterprise, was frankly tremendous. So we started to spread the goodness. But at this point, GitLab still wasn't the only game in town here at Northwestern Mutual. There was a competing GitHub installation that everyone not in the AWS space was using, which was a very significant percentage of the developer community. And to be completely truthful, it wasn't always clear which direction the enterprise would ultimately go. All we could do was continue to mature the offerings we had with GitLab, GitLab CI, and the rest of our cloud-based tools. Now, obviously, since I'm here today talking about GitLab being one of the foundational parts of our digital transformation, GitLab won out and became the enterprise SCM. After that call was made official, all the repos in our internal GitHub were moved to GitLab. But along with that, teams started migrating en masse to GitLab CI, eschewing Jenkins. Moving off of Jenkins, though, was by no means a requirement to onboard to our GitLab. Heck, we didn't even bring it up with teams. We were just happy to only be managing one SCM, and we were going to talk about Jenkins with them at a later date. But teams moved to GitLab CI on their own, especially as the includes functionality was now live and we had CI templates readily available for them to consume. As it stands now, GitLab CI represents 97% of all pipeline jobs here at Northwestern Mutual, which is about 45,000 jobs per day.
So I would like to talk about the includes for a second, though, and really stress how important these are. Something you'll really want to avoid is everyone just going their own way, creating their own include files that are used only by their team. That will put you in a very difficult spot when it comes to things like migrating to different tools, adding new functionality to the pipeline, ensuring updates are picked up, and handling any breaking changes that might come with a major version change in GitLab, like GitLab 14, which was just released. Something else it does is force developers to learn where all these files are and what's in the pipeline for that team every time they move to a new group. These are all burdens you'll want to avoid, or at the very least minimize as much as possible. And while there's always going to be some give and take, sometimes operability needs to be prioritized over extensibility. And that's okay. All right, so yay, job's done, right? Time to bust out the champagne, cue the music. Everyone's moved over to GitLab. They're loving GitLab CI and whatnot. Well, let's pause on that for a moment, because we're actually just getting to some of the good stuff that we've been able to do with GitLab here at Northwestern Mutual. Now that everyone's in one SCM, we have a wealth of information, with a robust API at hand, that we can start using to uplift the organization, not just the greenfield projects. Like I mentioned, we're a large company that's 164 years old. We have groups at all stages of the digital transformation. So, with all the info we're able to get out of GitLab, my sister team created a product to not only improve our DevOps maturity, but also gamify it. Nicole Shulton and Bobby Wenzler actually presented on this at last year's GitLab Commit, and I really recommend going back and checking it out.
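One way to keep tabs on who is consuming the shared includes, rather than rolling their own, is to read each project's `.gitlab-ci.yml` through GitLab's repository files API and check what it includes from. The sketch below assumes a hypothetical host, token, and shared-templates project path; the endpoint itself is GitLab's standard repository files API:

```python
# Sketch: audit which repos consume the shared CI templates instead of
# hand-rolled include files. GITLAB host, TOKEN, and the shared template
# project path are hypothetical placeholders; the repository-files
# endpoint is part of GitLab's v4 REST API.
import urllib.request

GITLAB = "https://gitlab.example.com/api/v4"   # hypothetical host
TOKEN = "glpat-..."                            # read-only bot token
CENTRAL_TEMPLATES = "platform/ci-templates"    # hypothetical shared project

def uses_central_templates(ci_yaml: str) -> bool:
    """True if the .gitlab-ci.yml text includes from the shared templates project."""
    return CENTRAL_TEMPLATES in ci_yaml

def fetch_ci_yaml(project_id: int) -> str:
    """Fetch a project's .gitlab-ci.yml via the repository files API."""
    url = (f"{GITLAB}/projects/{project_id}/repository/files/"
           f".gitlab-ci.yml/raw?ref=main")
    req = urllib.request.Request(url, headers={"PRIVATE-TOKEN": TOKEN})
    with urllib.request.urlopen(req) as resp:
        return resp.read().decode()

# Checked against an inline snippet rather than a live instance:
sample = """
include:
  - project: platform/ci-templates
    file: /node/build.yml
"""
print(uses_central_templates(sample))  # True
```

In practice you would loop `fetch_ci_yaml` over your project list and report the projects where the check comes back false; those are the teams carrying the migration and breaking-change burdens on their own.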
This app they created was able to leverage GitLab and help implement better DevOps practices with various tiers, giving teams kudos and perks, even including not needing to go to the change advisory board anymore. Beyond that, they even created an easy button for teams to get automatically into the first tier. So again, please go check out their talk, which goes into a lot more detail on the how, as well as talks about other reporting and dashboarding that we do to help make things better here. So, as we've been going through this process, a recurring theme that I'm sure everyone listening right now is keenly aware of is security and shifting left. Like for all of you, it's extremely important to us as well, which I think goes without saying: how to decrease attack surfaces and continually improve your security posture. It all ties back into our everything-as-code mantra, though, which has allowed us to better leverage GitLab through a mix of our own tools, like Secrets Detector, which prevents people from committing secrets into their code in the first place. My director, Ravi Devonini, and his cohort Michael Pereira have done several presentations on this in all of its iterations, both at previous GitLab Commits and at other conferences as well. I would recommend going and checking them out to see how we are using GitLab to prevent secrets from ever even getting committed into the source code. We've also created other tools to automatically check that you have all the required security scanning jobs in your pipeline. It also checks that you're not trying to cheat the system by having the job set to allow failure. But beyond that and other in-house developed tools, even something as simple as leveraging Renovate can do more than just keep your package dependencies up to date: you can use it to keep your GitLab CI YAML and include files automatically up to date, pulling new or updated security changes into the pipeline.
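A check like the one just described, confirming the required scanning jobs exist and that none of them has been quietly set to allow failure, might look something like this rough sketch. The job names and policy are hypothetical, and the CI config is shown as an already-parsed dictionary rather than raw YAML:

```python
# Sketch: verify a pipeline config contains the required security scanning
# jobs, and that nobody is cheating the gate by setting allow_failure: true
# on them. Job names are hypothetical; in practice the dict would come from
# parsing the project's .gitlab-ci.yml.
REQUIRED_JOBS = {"sast", "dependency_scanning", "secret_detection"}

def audit_pipeline(ci_config: dict) -> list:
    """Return a sorted list of policy violations found in a parsed CI config."""
    violations = []
    for job in sorted(REQUIRED_JOBS - ci_config.keys()):
        violations.append(f"missing required job: {job}")
    for job in sorted(REQUIRED_JOBS & ci_config.keys()):
        if ci_config[job].get("allow_failure"):
            violations.append(f"required job may not allow failure: {job}")
    return violations

# A parsed config where one scan is missing and another is soft-failed:
config = {
    "build": {"script": ["make"]},
    "sast": {"script": ["run-sast"], "allow_failure": True},
    "secret_detection": {"script": ["run-secret-scan"]},
}
for violation in audit_pipeline(config):
    print(violation)
```

Run across every project via the API, a report like this gives you the exact list of repos to chase, which feeds directly into the dashboards and targeted messaging mentioned later.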
Heck, you can do it for the modules in your Terraform configs as well. Now, even when changes need to be made that require a little more finesse, like, say, migrating from one artifact repository to another, we leverage GitLab's API via scripts to automatically find the repos for the specific languages that we want to target for that stage of the migration. We're able to create a branch, update the code for the development teams, and then submit the merge request back for them to review. But sending a merge request on its own is really only so helpful, especially at an enterprise level. Thankfully, GitLab's given us a pretty easy solution to that too. Again leveraging the APIs, we can easily find out the status of the merge request that we provided, you know, is it open, merged, closed, etc., as well as double-check whether that team just added the changes to a branch they were already working on and merged it in, no longer needing the merge request that we had sent them. We're able to create dashboards and reports and then have targeted messaging to the teams, so that they know exactly which repos they own that still need to be addressed. And because who doesn't like a good competition, something you can always do is, you know, maybe add a prize for being the first group to finish. Now, that's just one quick example of the reporting you can do with the huge trove of information that's available to you from GitLab. Some other things you can do as well are, you know, being able to monitor and report on your DORA metrics: deployment frequency, lead time for changes, change failure rate, time to restore service. You can even get insights into other metrics to help improve the DevOps maturity of your organization, ensuring teams are using best practices with their repos and pipelines. And those that aren't, well, you can easily see who they are and reach out to help them along.
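The branch-patch-and-MR automation described above can be sketched against GitLab's standard v4 REST API (the branches and merge-requests endpoints). The host, token, branch names, and the dashboard helper at the end are hypothetical; the commit step is deliberately omitted:

```python
# Sketch: open an automated migration merge request against a project via
# GitLab's REST API, then classify its status for a migration dashboard.
# GITLAB host, TOKEN, and branch/title strings are hypothetical; the
# repository/branches and merge_requests endpoints are GitLab's v4 API.
import json
import urllib.parse
import urllib.request

GITLAB = "https://gitlab.example.com/api/v4"  # hypothetical host
TOKEN = "glpat-..."                           # bot account token

def api_post(path: str, data: dict) -> dict:
    """POST form data to the GitLab API and decode the JSON response."""
    req = urllib.request.Request(
        f"{GITLAB}{path}",
        data=urllib.parse.urlencode(data).encode(),
        headers={"PRIVATE-TOKEN": TOKEN},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def open_migration_mr(project_id: int, default_branch: str) -> dict:
    # 1. Cut a working branch from the project's default branch.
    api_post(f"/projects/{project_id}/repository/branches",
             {"branch": "bot/artifact-repo-migration", "ref": default_branch})
    # 2. (Commit the updated config on that branch here -- omitted.)
    # 3. Open the merge request back for the team to review.
    return api_post(f"/projects/{project_id}/merge_requests", {
        "source_branch": "bot/artifact-repo-migration",
        "target_branch": default_branch,
        "title": "Migrate to the new artifact repository",
    })

def mr_resolved(mr: dict) -> bool:
    """For the dashboard: an MR counts as resolved once merged or closed."""
    return mr.get("state") in ("merged", "closed")

print(mr_resolved({"state": "opened"}))  # False
```

Note that `mr_resolved` treats a closed MR as done too, covering the case the talk mentions where a team lands the change on their own branch and closes the bot's MR unmerged; a stricter dashboard might follow up on those separately.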
They may, and likely do, have questions and really just want to get better themselves. You can pull out security-related information to, again, ensure best practices, making sure teams are using the right scanning tools and using them correctly. And if not, again, it's super easy to reach out to them and help. So let's just take a step back here for a second. In just the short time of this talk so far, we've gone from literally ground zero, greenfield, starting from scratch, iterating through some build tools until going all in on GitLab CI. The ease of use and developer-friendly nature of GitLab took us from supporting a team of 50 to supplanting multiple other SCMs and build tools, supporting thousands of engineers, and now running 45,000-plus jobs a day. We're talking about teams with such mature DevOps practices that they can skip going to change advisory boards now, deploying whenever and as frequently as they want, adding in automation to keep everything safe and secure by default, preventing accidents from happening and secrets from being committed, preventing security scanning from being bypassed or ignored, and ensuring the right jobs are being run. We're talking about tools that leverage GitLab to automate and gamify ways of increasing the DevOps maturity across an enterprise of thousands of engineers, with tech stacks ranging from what most would call legacy, as well as mainframe, to micro apps and microservices running in Kubernetes or via serverless. Automating mass migrations of tens of thousands of repos from one artifact repository to another. Instilling best practices and delivering features faster, all while improving the resiliency of our apps and platform. Creating reports and dashboards to prove we're moving in the right direction, because, let's be honest, it's all well and good to say you have the capabilities, but unless you can report on it and prove it, it might as well be vaporware. It all happened in the blink of an eye, didn't it?
You might not even notice the transition, but isn't that what you'd expect when something takes hold organically? Now, GitLab's not the only piece in the puzzle that's helped us along our way, but they did create a wonderful foundation for us to create something greater than just the sum of its parts. And with that, I'd like to thank you all for joining me today and listening to our story, as well as throw out the obligatory "we're hiring." So please come check us out at northwesternmutual.com/careers. Thank you, everyone. Have a great day and enjoy the rest of GitLab Commit.