At GitLab, we're innovating together. Partnerships are important to complement our capabilities, and I'll be highlighting several technology partners today and tomorrow. The first partner I'd like to showcase is Anchore, and I'd like to thank them for their sponsorship of this year's Commit User Conference. Be sure to check out their demo as well and learn how Anchore complements GitLab capabilities to provide solutions to US federal customers and many others. Hayden Smith and James Peterson with Anchore are going to share DevSecOps lessons learned from working with a large federal agency that can be applied to any organization. Welcome, Anchore.

Hi, everyone. Welcome to our session today on how DevSecOps lessons from the DoD can help any organization. It's me and my friend James talking to you today. I'm Hayden, a senior DevSecOps engineer with Anchore, and I handle a lot of projects alongside James in the public sector business.

Yeah, and I'm James Peterson. I'm also a senior DevSecOps engineer at Anchore. I've spent the last year doing SRE work with Platform One, specifically on the Iron Bank project, as well as some other facets of their DevSecOps platform.

Today on our agenda, we're going to briefly go over the challenges the DoD faced, and I believe a lot of these challenges will be familiar to your organization. If you haven't seen them before, the odds are you will in the near future. We'll be talking about the DoD best practices and solutions we have for these challenges, as well as some elements of a strong security foundation that can be adopted by your organization. So, talking about the DoD and DevSecOps, Anchore exists for these three main use cases, and they're being used all over the DoD right now.
So, you know, sometimes people are going out and building an image marketplace: they're taking containerized applications and building a secure marketplace, and they want to validate that these images and containers are up to a certain security and compliance standard. We also exist to secure products and applications, to continuously protect the internal applications your teams are building from security risk, whether external or internal. And also to ensure security and compliance of the software you deliver. For the DoD, this is very, very important because they have such a strong compliance baseline they have to meet, but you can quickly see how this correlates to any industry or any vertical.

Today we'll also be introducing Platform One as a case study. Again, a lot of these challenges are very broad, but a lot of them were experienced firsthand by both myself and James as we worked on Platform One, so we'll be referencing Platform One throughout the presentation. Platform One allows users to adopt DevSecOps, and it serves as a DevSecOps software factory for the entire Department of Defense.

All right, challenge number one. This is something that's been in the news quite a bit lately: software supply chain attacks, whether it's SolarWinds, Codecov, or Kaseya, which was a recent one that happened just a week or two ago. All these attacks are done by very persistent threat actors that are very committed and very creative in how they execute these attacks. We're seeing these attacks shift further and further to the left, penetrating further down the software supply chain, where they can then spread more quickly into more organizations very easily. So this is something that's going to continue to persist.
And when we talk about how this relates to the DoD, this is definitely front of mind, because the types of actors behind these attacks are the same kinds of actors we have to protect ourselves against, not only in the private sector but in the public sector as well. These are actors they're familiar with. So this is a type of attack that is very concerning and something we had to take on as a challenge across the DoD and all of our public sector business.

In a recent report Anchore published, we found that 64% of enterprises were impacted by a software supply chain attack in the last year, which lines up with some of those logos we saw on the previous slide: Codecov, SolarWinds, Kaseya. Codecov is an interesting one because it specifically deals with Dockerfiles and images and abusing images. And of those software supply chain attacks in the last 12 months, 15% of respondents said they had a significant impact, 20% said a moderate impact, and around 30% a minor impact. That's really important, and you can grab that report from our website and take a deeper dive into it.

Our second challenge at the DoD, and in the public sector in general, is that we're dealing a lot with containerized technology. Packaging software has become easier than ever with the onset of containers. That's great, because you can package up all your application code and everything that application needs to run: your licenses, your code artifacts, anything like that. But containers are also a huge security risk, because you have so many different files, dependencies, packages, and secrets being pulled in. So you need a way to look inside them and detect the security and compliance issues that may be present in that containerized component.
So when you combine challenges one and two and really layer them on top of each other, you get this super oversimplified picture. On the right-hand side of your screen, you have your cloud-native application, and each of those boxes represents a container. If we're talking about a modern application, you're typically going to have more than one container that the application needs to run. Each of those containers is sourced and built in a different way: you have your vendors, your public repos, and your private repos that are all being pulled from in order to contribute code and create that image, which will eventually go on to be a running container. But each of those public repos, private repos, or vendors may have a different standard on what security and compliance mean for building secure software. And when you peel back the oversimplification, it gets even crazier: that vendor you source from, or that public repo you pulled from, may itself be pulling downstream from somewhere else, and they may have yet another standard on security and compliance for their software, which further complicates the issue. So you need to introduce a choke point where you can inspect that software, make sure it's up to par with your security and compliance baselines, and check for threats and attacks within your software before you have that final application you see there on your right.

Then, when we talk about Platform One and the DoD, our third challenge is really automating this. We want to protect against software supply chain attacks. We want to adopt and heavily use containerized and other cloud-native technologies. But we want to do this in a fully automated process so that we can speed up delivery of software updates and new software tooling to the warfighter.
So when you take a look at the modern CI/CD process, you have all those different workloads that you're handling, and then you have your toolchain below that: all the tools that are part of handling those workloads, across your source, develop, build, test, deploy, and run stages. At each of these stages, there's always a new attack that can occur. We have four listed, but for each of those four, there are probably 10 to 15 more you could easily add to each box. So it's really important to keep in mind that a ton of attacks can introduce themselves at any point in this automation process, whether it's an insider attack by someone who is trusted, or a simple typosquatting attack from something upstream, say, pulling in a different package or dependency that is vulnerable. That's something you definitely want to monitor for.

Then, taking a look at challenge number three even further, and digging into what we were thinking about at Platform One as we started to ask how we protect ourselves from software supply chain attacks and how we automate this whole flow of contributing and using containerized technology, you have to ask yourself important questions. Who do we trust? What components of the software should we trust? How do we go about inspecting the software? Who owns what, meaning who's going to own the security model within this? That's part of DevSecOps: re-establishing the collaboration and how stakeholders interact with each other from a security standpoint. Where, when, and what do we scan is a big portion of it. Do we scan this artifact once, twice?
Do we scan it once when we see it for the first time, and then again once our developer has gone through and added more of their application-specific code on top of it? Those are questions you constantly have to answer. And then, how do we protect very sensitive things? Secrets are a great example. How do we prevent secrets from being specified in a Dockerfile? How do we detect that? How do we keep secrets from leaking in other places, like elsewhere in your CI pipeline? That's a good example of protecting sensitive information, but this could also be sensitive files: how do you prevent unauthorized disclosure of sensitive files in that build process? And how do we make this flexible? How do we make a pipeline that can handle all these challenges, that won't compromise our security baseline, that will leverage containerized technology and cloud-native tooling, and that is flexible enough to support all the automation that needs to take place?

So you have all these different steps that need to take place as part of our Platform One pipeline: about 17 or so different jobs that we configured, all in GitLab, using eight to ten different security tools. The best part is that it's a plug-and-play model. We're taking different security tools and responding to specific threats that we see out there, and it's easy because a tool like GitLab makes it simple to put tools at different parts of that pipeline. Being able to drag and drop tools where they need to be makes it very easy for us to create a secure pipeline that answers these three challenges. I'm now going to hand it off to James, and he's going to discuss some of the questions you should ask yourself when building secure pipelines and answering these three challenges. Thanks, Hayden.
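To make the Dockerfile-secrets question concrete, here is a toy first-pass check of the kind a pipeline job could run. This is only an illustrative sketch, not the tooling Platform One actually uses (a real pipeline would run a dedicated scanner); the regex patterns and the `find_dockerfile_secrets` helper are hypothetical.

```python
import re

# Naive patterns for secret-looking ENV/ARG instructions in a Dockerfile.
# Purely illustrative; a real pipeline should use a dedicated secrets scanner.
SECRET_HINTS = re.compile(
    r"^(ENV|ARG)\s+\S*(SECRET|TOKEN|PASSWORD|API_KEY)\S*\s*=?\s*\S+",
    re.IGNORECASE,
)

def find_dockerfile_secrets(dockerfile_text):
    """Return (line_number, line) pairs that look like hard-coded secrets."""
    findings = []
    for lineno, line in enumerate(dockerfile_text.splitlines(), start=1):
        if SECRET_HINTS.match(line.strip()):
            findings.append((lineno, line.strip()))
    return findings

# Hypothetical Dockerfile contents for demonstration.
dockerfile = """\
FROM registry.example.mil/base/ubi8:latest
ENV APP_PORT=8080
ENV AWS_SECRET_ACCESS_KEY=abc123
RUN make build
"""
for lineno, line in find_dockerfile_secrets(dockerfile):
    print(f"possible secret on line {lineno}: {line}")
```

A job like this would fail the pipeline as soon as a secret-looking instruction appears, rather than letting it be baked into an image.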
So I want to get into some of the core things you need when selecting your tools to build your DevSecOps pipeline, and some of the best practices that we use and implement every day and that we helped implement at DoD Platform One. First, some of the questions to ask yourself when you're actually selecting your tools.

The first one is visibility: what is actually getting built into your container, into your production software? There is an executive order that came out recently mandating an SBOM for any software that the government is going to use, and there's a reason for that. It provides deep insight into the software supply chain that makes up that particular piece of software, that image, that container. It tells you what your dependencies are, where your dependencies are coming from, and who's actually making changes to the software you are using in your production system.

The next is inspection. You've collected that information about everything that goes into this software, and now you need to actually inspect it: checking for things such as what's changed between builds. That's a very important one; what's actually changing inside your software, and is someone trying to sneak something in there? Finding vulnerabilities or malware that might be hiding in your software. And also looking for any sort of anomalous activity going on inside your software, whether or not it ties back to one of the two previous items.

And then we go into policy enforcement: should it be there? Perhaps some of the vulnerabilities that are there are okay because they don't really affect your software. Security has signed off on them, they're okay with that, and those can be allowed through because, like I said, maybe they don't affect your software.
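One way to picture that "what's changed between builds" inspection step: if each build's SBOM is reduced to a package-to-version map, the diff falls out directly. The `diff_sboms` helper and the sample builds below are an illustrative sketch, not a real SBOM format.

```python
def diff_sboms(previous, current):
    """Compare {package: version} maps derived from two builds' SBOMs.

    Returns packages that were added, removed, or changed version --
    exactly the 'what changed between builds' question an inspection
    step needs to answer.
    """
    added = {p: v for p, v in current.items() if p not in previous}
    removed = {p: v for p, v in previous.items() if p not in current}
    changed = {
        p: (previous[p], current[p])
        for p in previous.keys() & current.keys()
        if previous[p] != current[p]
    }
    return added, removed, changed

# Hypothetical package lists from two consecutive builds.
build_41 = {"openssl": "1.1.1k", "zlib": "1.2.11"}
build_42 = {"openssl": "1.1.1l", "zlib": "1.2.11", "leftpad": "0.0.1"}
added, removed, changed = diff_sboms(build_41, build_42)
print(added)    # a brand-new dependency is worth a close look
print(changed)  # so is a silent version bump
```

A new dependency or a silent version bump showing up in the diff is exactly the kind of anomaly worth flagging for review.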
You also may need to adhere to some sort of compliance standard, and encoding that policy so that it's repeatable and enforceable is very important, because humans make mistakes. If you can encode it into an automated process, it really increases your security because it's done consistently.

And then remediation: how do I actually fix the issues that are surfaced through visibility, inspection, and policy enforcement? What is your plan of action for fixing that vulnerability or making sure that malware isn't there anymore? You need to make decisions like prioritizing which vulnerabilities get fixed first. So those are some important questions to ask as you move forward with designing your pipeline and your whole security infrastructure as you ship software.

In the work we've done at the DoD, we've learned quite a few lessons and picked up some best practices. It's really essential to lay a strong foundation for software security, and there are many often-overlooked practices that play a key role in creating that security posture. Some of those include sanitizing files for sensitive information: using a tool like TruffleHog integrated with your GitLab environment to comb for secrets exposed in Git history and purging those secrets, making sure they're not actually pushed into your Git history, things of that nature. Then implementing defense in depth: verifying software before building it into the container, building containers in a secure way, enforcing security best practices for a container post-build, and then enforcing policy and controlling deployment of containers so nothing violates the security of what's deployed. And then enforcing security in the deployment environment, with something like network policies surrounding ingress and egress.
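Encoding checks into an automated process can start very small. As one sketch, GitLab ships CI templates for secret detection and container scanning that can be pulled into a project's `.gitlab-ci.yml` with a few lines; exact template names and job behavior vary by GitLab version, so treat this as an illustration rather than a drop-in config:

```yaml
# .gitlab-ci.yml -- illustrative; template names and behavior vary by GitLab version
include:
  - template: Security/Secret-Detection.gitlab-ci.yml
  - template: Security/Container-Scanning.gitlab-ci.yml

stages:
  - build
  - test
```

Because the jobs live in the pipeline definition, the checks run the same way on every commit, which is exactly the consistency point above.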
So you have security checks at each stage of the pipeline, from dev to build to publish, all the way to actually running software. Then, verify input packages for your software builds. Don't just install software from anywhere: make sure it's signed, make sure you can verify where it's coming from and what's in it. This goes back to visibility; that's why the government wants SBOMs for software they're consuming, because they want to actually see what's inside. Then embed your malware scanning: make sure you have malware scanning somewhere inside your DevSecOps pipeline, and make sure it scans every software artifact you're building and putting into your production software.

And then encode your security controls through policy. Do as much as possible to codify your security, and don't rely on manual checks for consistent security, because as humans we can make mistakes, and things can come up that maybe we don't recognize. Enforcing policy through code, through automation, really helps to enforce that security consistently, and it also allows security to scale. It's easy to do a manual review for 10 images or so, but once you scale up to thousands of images, you have to hire more people and get more people involved; one person can't handle all of that. When you can codify it and build it into an automation system, you can actually apply security at scale.

And then develop a plan of action for security findings. This goes back to remediation: architect a process for handling security findings early. That will involve mitigating the findings or just reporting them. You have to make those decisions on how you're actually going to handle security, especially once you get into production systems or deploy to something like a Kubernetes cluster.
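That input-verification point, don't just install software from anywhere, can be sketched as a digest check against pinned values. The artifact name and pin below are made up; in practice the pins would come from a lock file or from verifying the publisher's signature.

```python
import hashlib

# Hypothetical pinned digests for build inputs -- in a real pipeline these
# would come from a lock file or from a verified publisher signature.
PINNED = {
    "app.tar.gz": "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def verify_input(name, data, pinned=PINNED):
    """Refuse to build with an artifact whose digest doesn't match the pin."""
    digest = hashlib.sha256(data).hexdigest()
    if pinned.get(name) != digest:
        raise ValueError(f"digest mismatch for {name}: {digest}")
    return True

# The pin above is the SHA-256 of the bytes b"test", so this check passes.
print(verify_input("app.tar.gz", b"test"))
```

Failing the build on any mismatch means a swapped or tampered dependency never makes it into the container.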
Are you going to block pulls of an image because it has new vulnerabilities in it? Well, then you're going to take down production systems. There are trade-offs that you have to discuss, and you have to make those decisions inside your own organization. And when you continuously monitor the software, what is the plan of action? That goes back to: when you have a vulnerability found in production that wasn't there at build time, maybe it's a new vulnerability, what are you going to do? Ideally you fix it in dev and push it all the way to prod as quickly as possible, but what is that process? What does that look like?

Then, use flexible tool sets. Tools like Anchore and GitLab have full API coverage, which, as a practitioner, I can tell you is really awesome. They're very flexible to integrate into your workflow, so you can interact with them however you need: whether it's Jenkins, GitLab CI, or just running Bash scripts on a VM, interacting with GitLab and with Anchore is easy. It also gives practitioners, someone like myself, somewhere to start without having to build from scratch. And then adhere to industry standards. Industry standards are there for a reason: they're a conglomeration of best practices from very knowledgeable people, things that are proven to have worked in the industry. So doing things like security scanning and linting your Dockerfiles, your infrastructure as code, and your Kubernetes manifests are all very important.

Next, I really want to talk about our case study: Platform One, DevSecOps in real life. They're actually doing it. They have built a pipeline to secure containers, and they publish to Iron Bank on Registry One, a registry of hardened containers. So how do they do it? What does it look like?
Well, for Platform One, they start with untrusted software that is brought in. It's typically rebased on top of a Red Hat UBI image. They have Distroless, they have Scratch, if your application uses one of those base images, but typically it's built on top of a Red Hat UBI image because that is the approved base container: it has gone through the hardening process and been approved. That's the defense in depth: base containers have been secured, so now you can build things on top of them. So everything starts off rebased on top of a Red Hat UBI image.

Then you download all the dependencies in the pipeline and do the verification: you check the signatures, you scan them for malware, and you make sure you don't build anything into your container that you're not expecting to be there. Once they've all been verified, everything is passed into a build job that runs on a disconnected build node. It's a disconnected GitLab runner; we use the Kubernetes executor and have network policy around the build pod for our GitLab runner. It builds that container offline, so it doesn't reach out and download something during build time, which is a big no-no. That enforces that the software you downloaded previously, all your dependencies, is actually what's built into the container, and there's nothing in there you're not expecting.

Once it's been built, it's scanned by multiple security scanners: OpenSCAP, Twistlock, and then ourselves, Anchore. The CVE results from all of the scanners and the policy evaluation from Anchore are all put into a centralized database, where the findings go through a review process. Security engineers will actually go in, take a look at the findings, and see whether that CVE or finding actually affects the container.
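The disconnected-build setup described above can be enforced with a Kubernetes NetworkPolicy that denies all egress from the runner's build pods. The namespace and pod labels below are hypothetical stand-ins, not Platform One's actual configuration:

```yaml
# Illustrative NetworkPolicy: deny all egress for runner build pods
# (namespace and pod labels here are hypothetical).
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-build-egress
  namespace: gitlab-runner
spec:
  podSelector:
    matchLabels:
      app: gitlab-runner-build
  policyTypes:
    - Egress
  egress: []   # no egress rules allowed => the build cannot fetch anything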
And then they will go ahead and mitigate it, or what they call justify it. If it's justified, maybe, like I was saying previously, the vulnerability doesn't actually affect the software, so it's okay to be there, or maybe it's a false positive, something like that. After the security investigation is done, all of the information is collected together: the SBOM, all of the findings, all of the justifications. They're sent up to an approval authority who signs off on each justification and reviews the contents of the container. This is where justifications may be sent back: okay, this needs more justification, or this actually needs to be mitigated because you are actually vulnerable, things like that. But once all of that has been remediated, the authorizing official approves the container, and it lands in the Iron Bank public registry, where it's consumable by anyone. It's public on the internet, so it's consumable by developers, by production systems, by anyone who wants to go out and consume a hardened container.

So, our elements of a strong security foundation, which Platform One really has. It starts with the culture: having security training for everyone. I know some people get annoyed by it, but it is actually important and very useful. And having ownership: who owns the security for this? Well, everyone owns it, but who is in charge of making sure there are no holes in this piece or that piece? And then, like I said, really using API-first tools. Pick and choose tools that are easy to use, preferably something with full API coverage. That way integrating with the tool is easy for your practitioners, it's easy to modify your system, and you're not stuck in a rigid system that only works one way, that you can't change or evolve over time.
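For a flavor of what full API coverage buys you, here is a small sketch against GitLab's documented v4 REST API, listing a project's pipelines. The host, project id, and token are placeholders.

```python
import urllib.request

def list_pipelines_request(base_url, project_id, token):
    """Build a GET request for a project's pipelines via the GitLab REST API.

    The endpoint and PRIVATE-TOKEN header are from GitLab's documented v4 API;
    the base URL, project id, and token here are placeholders.
    """
    url = f"{base_url}/api/v4/projects/{project_id}/pipelines"
    return urllib.request.Request(url, headers={"PRIVATE-TOKEN": token})

req = list_pipelines_request("https://gitlab.example.com", 42, "glpat-XXXX")
print(req.full_url)
# resp = urllib.request.urlopen(req)  # real network call, not run here;
# returns a JSON list of pipelines with id, status, ref, and so on.
```

The same few lines work from a CI job, a cron script, or a laptop, which is the point: an API-first tool slots into whatever workflow you already have.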
And I like to say, nobody wants to maintain spaghetti infrastructure. We've all heard of spaghetti code; the same thing can happen with infrastructure, especially as you're connecting tools together. So having flexible tools that are easy to use is very important.

Then, stay policy-centric. We know fail fast is a thing for dev: if your build is going to fail, or some check is going to fail, you want it to fail early in the pipeline, so you're not waiting 10 or 15 minutes before you actually know your stuff failed and you need to fix it. The same thing goes for security. Start to encode security early in your pipeline. Some things, like runtime checks, you can't necessarily do early in a pipeline, but there are pieces of those checks that could happen at build time or at source-code time, so focus on moving security to the left and failing fast when you know something is in clear violation of security. You also want to lower the effort it takes to actually fix those vulnerabilities, remediate containers, things like that, so really take advantage of automation.

And then finally, you want to continuously monitor all of your production software. Security is not just one and done; it's a continuous process. You want to monitor your production software not only for breaches, someone trying to hack into your software or the system you're running, but also for things like new vulnerabilities, and then have a plan of action, like we were talking about. When you do find a new vulnerability in production, what is your plan of action for remediating it? Is it fix it, build it, push it all the way through to production, and deploy it? Is it take down the software, something of that nature? Just discuss it within your own organization: if we do see this, what is our plan of action, and come to an agreement with everyone. Thanks for coming to our talk at Commit.
It was a pleasure speaking with you. Like Hayden said, we have our supply chain security survey; please check it out, it's some pretty cool data. Also, if you want to try out Anchore, we have an integration with GitLab, so if you're a current GitLab user, which I'm sure most of you are, it's pretty easy to get up and running with an Anchore scan. You can also check us out on our website. And if you want to check out Platform One, I included the website there; it's a pretty cool organization put on by the Air Force, definitely worth giving it a go. So with that, thank you, everyone, and have a great day.