We talked about how the CI pipeline is akin to an assembly line. Securing that software factory involves more than traditional application security testing and basic access controls; a single platform with end-to-end controls and visibility can greatly simplify the task. A common compliance control that is becoming more and more important is code signing. You can sign your code with GitLab, but if you want to go a bit further, Venify will show you how. In this next session, you'll learn how to certify your code and artifacts with our technology partner, Venify. Eddie Glenn and Laurent Domenich will discuss how applying these added controls can better protect your customers from a software supply chain attack.

Hi everyone, welcome to today's session, where we're going to be talking about how code signing can be used to help stop software supply chain attacks. My name is Eddie Glenn. I'm a Senior Product Marketing Manager at Venify. I've been involved in software development for about 30 years now; I started off my career writing safety-critical software, then moved into DevOps, and most recently I've been helping customers secure their applications. I'm really excited to be joined here today by Laurent, who's a principal software architect at Venify. Laurent, can you say a few words about yourself, please?

Hi, I'm Laurent Domenich. I'm a Principal Architect at Venify, responsible for the design of the infrastructure of our SaaS, or cloud, products. I've been a software developer for about 25 years now, initially developing enterprise and integration products, and for the last five years I've been working on cybersecurity software.

Thanks a lot, Laurent. Let's get started. Software supply chain attacks are on the rise; we've seen SolarWinds in the news recently, and Codecov in the news. And if we go back just a few years to 2017, many of you may not have heard of Linkos. They're a small Ukrainian company whose compromised software was responsible for about $10 billion of worldwide damages when hackers broke into it and infected companies like Maersk, the global shipping conglomerate, as well as companies like Merck Pharmaceuticals. So these software supply chain attacks are really serious business, and as software developers, you need to keep this in mind. That's the whole point of our presentation today: to talk about steps you can take to help prevent these kinds of attacks.

Beyond what happened at Linkos, if we look just at the damage from SolarWinds, it's estimated to have impacted about 18,000 of their customers. Several of those customers were very large, notable U.S. government agencies, and it's been reported that the U.S. government has set aside about $500 billion to handle the damages from that particular attack through SolarWinds. So these are definitely high-consequence attacks, and as software developers, it's really important to take every step possible to help prevent them.

Before we get started, let's talk about software releasing. In the old days, it was pretty simple: you wrote software and you pushed it out to your customers. In the age of the Internet, things started to change, and attackers would literally try to insert malware into the code that was released, and that would infect customers.
And the industry responded by introducing something called code signing, where a security team in a company would digitally sign a piece of software, and that had a couple of consequences. First, it protected your customers: it allowed them to know that the software really did come from your company and wasn't someone else impersonating you. Second, it let your customers know that nothing had been done to tamper with the software, so it prevented malware from being inserted into it. So basically, we would code sign the released software once, as it went out to the customer base.

In fact, code signing was so effective that it caused hackers to change how they attack companies, and we talk about this as shifting left. Instead of trying to attack the final piece of software that has been produced and already signed, hackers are now starting to target the software build environments: the actual build process. This has had a lot of impact on software development. As a seasoned software developer, Laurent, what are some of your key takeaways from these attacks, and what should software developers be concerned about?

The application security landscape is constantly evolving. The focus was initially on the edge of the application: the security tester was trying to find vulnerabilities in the web interfaces and APIs. Then it shifted left, trying to find vulnerabilities in the source code and in third-party dependencies with static analysis scanners and dependency trackers. Nowadays these tools are completely integrated with the developer's day-to-day workflow. But with the recent attacks you mentioned, it's clear that attackers have shifted again, this time focusing on the continuous integration and continuous delivery pipeline. This is where we need to focus our attention.

One of the things that's different from when I first started writing software is that today's software is a lot more complex. When I started out, we were responsible for writing all the code; that's not how it is today. If we look at almost any piece of software, there are obviously the parts we're responsible for writing, but a lot of it comes from open source code and third-party libraries. Then look at our build environments: we're using lots of tools from third-party vendors, either SaaS tools or on-premises tools, and they're responsible for the software development infrastructure, the testing environments, and so forth. There are also build and infrastructure recipes; we're dealing with infrastructure as code and build environments as code, and we've got build scripts to contend with. Even our deployed environments are no longer necessarily physical environments: they could be containers, Kubernetes, Docker. This is all code, and as a result it could potentially be tampered with. So we're no longer looking at a single thing being attacked; when we look at the software build pipeline, there are lots of opportunities for hackers to attack.
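For illustration of the signing-and-verification idea described above, here is a minimal sketch using GPG detached signatures. The key identity and file names are placeholders, not the exact tooling or workflow discussed in this session, and a real release process would keep the private key in protected, centrally managed storage rather than a local keyring.

```sh
# Generate a signing key pair (in practice the private key would live in
# protected storage managed by the security team, not on a laptop).
gpg --quick-generate-key "Example Release Signing <release@example.com>" ed25519 sign 1y

# Producer side: create a detached signature for the release artifact.
gpg --armor --detach-sign --output release-1.2.3.tar.gz.asc release-1.2.3.tar.gz

# Consumer side: verify that the artifact is untampered and was signed by
# the expected key before installing or deploying it.
gpg --verify release-1.2.3.tar.gz.asc release-1.2.3.tar.gz
```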
So, Laurent, one of the questions I have for you is this: we talked just a few minutes ago about what traditional InfoSec used to do, signing the released software right before it got deployed out to customers. How is that changing, and what's the impact on software development teams?

Nowadays, development teams are responsible not only for writing code but also for testing and building it. Security needs to be included in that, and developers must work with InfoSec, with those security engineers, to protect all of these steps.

Good points, thank you. One of the things that Venify did after the SolarWinds breach became public, and it was a pretty large breach, is that we reached out to some of our partners in the security and software ecosystems and started working together on the question: what are some of the best practices software organizations should adopt to help prevent these kinds of attacks from happening again? Laurent, I know you were a big part of this project, and it's a project that's still ongoing, so I'd like us to spend a few minutes with you telling everyone what's been involved. Venify has been calling it a blueprint for preventing software build pipeline attacks.

Yeah, initially this blueprint was built for securing our own Venify cloud pipeline. We then worked with other security vendors to make it a recipe that can be applied not just to our pipeline but to most software pipelines. This blueprint is now open source, and we welcome contributions to it. It's designed around a set of controls based on lessons learned from recent industry security incidents, but also on traditional methods for protecting production systems.

For example, in this diagram you have control four. It's your traditional least-privilege control. Your build servers need to access your source code, but they should have no more privilege than what's required; most of the time, read-only access should be enough. Your build server may need to create tags in your source repository; in that case, grant just that level of privilege and avoid full write access at all times.

Another example is control 15, which actually isn't even in the diagram; it's an additional control in the blueprint. This one is about applying container technology to pipelines. Every step in your pipeline should run in a container, and once the step is finished, the container is discarded. That can prevent a malicious long-running process like the one we saw in the SolarWinds attack: even if a hacker is able to install a process in your build, that process will only run within a container, so it will be limited in what it can access in your infrastructure, and on top of that it will be very short-lived.

Another interesting control is control five. This one recommends that you retrieve dependencies only from local registries in your infrastructure, whether on-premises or in the cloud. It means your build won't be able to pull dependencies from public sources. That way you can control the dependencies you're pulling in: they only come from upstream registries that are known to have strong security policies and traceability. The last type of control in this diagram is all about fingerprinting and code signing, and we're going to talk more about that.

Great, this is really useful information. I just want to point out the URL at the bottom of the screen here: Venify and our partners really encourage you to go visit the GitHub repository that contains this blueprint and contribute to it, because we know that as hackers change their attack methods, we have to keep up, and everyone in the industry has to keep up. So we expect this to be a living blueprint that can constantly be updated to stay ahead of what's happening.
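As a rough sketch of what controls 15 and 5 can look like in a GitLab pipeline (the registry host, image name, and package manager below are assumptions made for illustration, not values taken from the blueprint):

```yaml
# Control 15: each step runs in a throwaway container that is discarded
# when the job finishes, so a planted process cannot linger on the build host.
build:
  stage: build
  image: registry.internal.example.com/build-images/node:18  # image pulled from an internal registry
  variables:
    # Control 5: resolve dependencies only from a local registry that mirrors
    # vetted upstream packages, never directly from public sources.
    NPM_CONFIG_REGISTRY: "https://registry.internal.example.com/npm/"
  script:
    - npm ci
    - npm run build
  artifacts:
    paths:
      - dist/
```

In the same spirit, control four would mean the credentials this job uses against the source repository are read-only unless tag creation is genuinely required.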
So, one of the things you mentioned is that code signing is a key control throughout the build pipeline. I think I heard you say that before you move from one part of the pipeline to the next, if you check something into the repository you should run it through a security scan and then sign it, and at the other end, the consumer of that component should check that the artifact has been properly signed. So the question I have is: what kinds of security checks should one run when checking something into a repository, right before signing that particular artifact?

Start with your static analysis tools, which scan for vulnerabilities like SQL injection or cross-site scripting. And since modern software relies more and more on third-party code, you should also be using a dependency tracker that analyzes all the dependencies in your code and checks whether any of them have known vulnerabilities. Finally, don't forget to run your quality and style linters; those are very useful as well.

Interesting. I think one of the key takeaways here is that in the old days we might sign code once, each time we released it, but now we're looking at signing different kinds of artifacts throughout the build process, and if people are following DevOps, that build process is likely happening many times a day. So that's going to mean an increase in the need for code signing. And I think that's a good segue into the next topic: what are some of the challenges developers have with code signing? This is really important to think about, especially if we're expecting our developers to do more code signing to protect the software pipeline.

I'd say the biggest problem is something I certainly didn't have when I was doing software development, and that is PKI experience; code signing is all about PKI. If we're expecting developers to understand PKI and certificate management around code signing certificates, they just don't necessarily have that level of expertise. That leads to the next bullet point here: if the developers don't have it, and the InfoSec team or security team needs to do it for the developers, are there problems associated with that? The answer is yes. The security team usually operates at a much different cadence than development teams. On the development side, we're talking about frequent releases that are happening all the time. Security teams are all about slowing things down and making sure processes and policies are followed: did you fill out this form, did you get this approval? So the two are often at odds.

So what happens as a result? Developers usually take things into their own hands. They figure, we're just going to do this ourselves; we're not going to involve the security team. And that leads to a whole different set of problems. The first one is private code signing keys. We didn't really get into a lot of detail about how code signing works, and if you don't have that background, that's probably okay, but there is one aspect you need to know: code signing requires a private key.
That key is a secret no one should have access to; if someone does get access to it, that's how they're able to sign code in your name. So these private keys need to be protected and secured. And frequently, what we find when we talk to our customers is that their developers put them in all kinds of interesting places, usually places that are convenient for them. If they need the key during an automated build, where is it located? It's located on that automated build server. Maybe it's on their laptop. One example I didn't talk about earlier is a computer company called ASUS. They had a similar kind of software supply chain attack a few years ago, and it was discovered that their code signing keys were stored on their web update server. So when hackers broke into the web update server, they found those keys, inserted malware, signed it with that private key, and that got pushed out to the customer base.

Other issues relate to the fact that software developers frequently don't have PKI experience: they may request a code signing certificate from a certificate authority that isn't authorized by the company or is out of policy, or the certificate may be configured incorrectly, and that causes different kinds of problems for the software. They might use the wrong configuration, or do things like not using a timestamp server. That's probably a topic for another conversation, but a timestamp server is a parameter for code signing that ensures the code will continue to run even after the code signing certificate expires. So there are just lots of problems developers have, and as we started to address this problem with a solution, we really focused on the question: what do developers need?
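On the timestamping point just mentioned: requesting a timestamp from a trusted timestamp authority at signing time is what lets the signature keep verifying after the certificate itself has expired. As a minimal sketch (using Java's jarsigner purely for illustration; the keystore, key alias, and timestamp authority URL are placeholders, and other signing tools have equivalent options):

```sh
# Sign a JAR and request an RFC 3161 timestamp from a timestamp authority,
# so the signature stays verifiable after the signing certificate expires.
jarsigner -keystore release-keys.jks -tsa http://timestamp.example.com \
  myapp.jar release-signing-key

# Later, even after the certificate has expired, the timestamped signature
# can still be verified.
jarsigner -verify -verbose myapp.jar
```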
Before I move on, I'd like to hear about some of your experiences, Laurent. I'm curious, have you run into these kinds of things with the development teams you work with?

In the past, I completely avoided code signing tools because I found them difficult to use. You need to create encryption key pairs, as you mentioned; you need to install the private key in a secure way, and what even counts as a secure way? Then you publish the public key to a registry, internal or sometimes public, and you need to configure your entire toolchain to use it, which is not easy either. On top of that, you need to rotate those keys every few months as a security best practice. Honestly, most developers are not encryption experts, and they should not have to be; the software should be very simple to use.

Good points, and it's not only developers that have issues with this; it's also the InfoSec team. Those of us who are developers at heart probably don't think much about what the InfoSec team goes through, but put yourself in their shoes: they're still responsible for company security, and with all of these things happening on the development side, it's really hard for them to understand what the risks are. They're concerned about things like private key sprawl: where are the private keys that are being used throughout the software development process? Do they have the ability to enforce code signing certificate and key policy? They're experts in security, so they know which hash algorithms should be used and which certificate authorities should be used, but right now they have no control, and they really need to be able to enforce those kinds of things. They also may not have insight into how different artifacts are being signed. And again, just to reiterate that last point, they're still responsible for the company's security. So this creates a problem for the organization whether you're on the development team, on the InfoSec team, or the CEO of the company; these things need to fit together, and as I'm about to talk about, this is how Venify is addressing this particular issue around code signing.

We have a solution called Venify CodeSign Protect, and basically we looked at three objectives. It has to be extremely fast for developers: we know you're doing lots of releases and signing lots of different artifacts, and you can't slow down your automated builds or automated processes. It has to be extremely easy for you: you shouldn't have to go in and modify a bunch of scripts, learn new tools, or learn anything about encryption or certificates; out of the box, it needs to be extremely simple. It also needs to address the problems I just talked about with the security team: it needs to give them visibility and a way to enforce policy and process without encumbering the development teams. And it has to be designed to protect against software supply chain attacks, which means it has to be flexible: it has to work within lots of different kinds of development environments and support signing different kinds of artifacts, be it source code, libraries, executables, you name it. If it's a piece of software that gets executed, it needs to support signing that kind of software.

So what are some of the things CodeSign Protect has? First and foremost, it's integrated with the tools developers use. As an example, take GitLab: from right within the GitLab environment, it's very easy to have a build pipeline created that does the signing for you, so you don't have to think about it.
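To picture what a scan-then-sign pipeline can look like in GitLab, here is a hedged sketch. The SAST and Dependency Scanning includes are standard GitLab CI templates; the sign job uses a deliberately generic placeholder command rather than the actual CodeSign Protect integration, whose specific commands aren't covered in this session.

```yaml
include:
  - template: Security/SAST.gitlab-ci.yml                 # static analysis (SQL injection, XSS, ...)
  - template: Security/Dependency-Scanning.gitlab-ci.yml  # known-vulnerability checks on dependencies

stages: [build, test, sign]

build:
  stage: build
  image: alpine:3.19
  script:
    - ./build.sh            # placeholder build step
  artifacts:
    paths: [dist/myapp]

sign:
  stage: sign
  image: alpine:3.19
  script:
    # Placeholder signing step: in a real setup this would call out to a
    # signing service so the private key never leaves protected storage.
    - sign-client sign --input dist/myapp --output dist/myapp.sig
  artifacts:
    paths: [dist/myapp, dist/myapp.sig]
```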
You might be using more environments besides GitLab, such as GitHub or Jenkins, or you might be developing Java applications, Windows executables, Linux executables, iOS applications, or Android applications. CodeSign Protect supports all of those environments and provides the same kinds of features across them, so your InfoSec team has visibility into all the code signing operations that happen, whether it's source code being signed or some other kind of intermediate artifact like a build script or an executable.

As a development tool, we took the approach that it needs to be API-ready, with a full API that allows complete control and automation over what CodeSign Protect does. That ensures it can be integrated into any kind of software development pipeline or process. From a developer-benefit perspective, it basically eliminates the need for you to know how to access a key, or which key or certificate should be used; that's all handled behind the scenes. Just from the fact that you're logged into a specific machine, CodeSign Protect will know which keys you have access to and which should be used for a particular purpose. And last, and this is really key, there is never a need for a developer to actually have access to the private code signing key, which really helps with the security of your code signing infrastructure. Those keys are always kept in protected, trusted storage, protected either by hardware encryption or by software encryption. The point is that whether you're doing a build by hand, typing a manual command on the command line, or it's part of your build pipeline, those keys never have to leave that encrypted storage, so they stay secure and out of reach of hackers.

So, I really enjoyed talking with you today, Laurent; I'm really happy you were able to join us. I just want to emphasize a couple of highlights you should take away from the session. First and foremost, attackers are shifting left. They know that code signing has been effective at protecting that final executable, so they're going after your build environment and the way you develop software, and you need to be prepared for that and take measures against it. Laurent talked about our blueprint for protecting the software supply chain: a lot of control measures are required, and they all need to be handled by the software development teams. And we talked about how code signing is effective and can be effective throughout the build pipeline; you just need to sign intermediate artifacts along the way, and you need a tool and a platform that can support that without putting any additional burden on your software development teams. So with that, I want to thank you for joining us. Laurent, do you have any final words of wisdom for the audience today?

Yeah, thank you. Probably that these are very serious attacks. As developers, we need to understand how they took place, and we need to learn from them. We need to be involved in securing applications, not just leave that to the security experts; work with the security experts in your organization to protect the software.

Thanks a lot, Laurent. One final thing for everyone: we have a really interesting artifact that Venify's threat analysts created. It's basically our research into what happened at SolarWinds. It's a really interesting read, and it's a really short white paper.
I suggest that you download it after the session. I hope you enjoy the rest of the summit.