All right, everyone. Thanks for coming out. I really appreciate everyone being here. Let's get started. Although this is not my first Black Hat talk, it is the first time I'm speaking in person, and I'm really, really excited to be here. One second. Wardrobe malfunction. Hold on. All right. Just kidding. Of course, I'm really honored to be here. It's my third time at DEF CON. Unfortunately, in past years I was under the age of 21, so I wasn't able to partake in the shot celebration for speakers, but I was able to this year.

So, a bit of background. My name is Bill Demirkapi, and I'm a researcher for the Microsoft Security Response Center. While working full-time, I'm also a full-time student at the Rochester Institute of Technology. I have a relatively diverse background in offensive and defensive security, but my specialization is low-level operating system internals with a focus on Windows. In this talk, we'll be exploring my work on digital signatures, specifically the implementation issues I found with them. We'll start with some background on how digital signatures are validated and approaches for attacking them, which will tie into the systemic flaw I found and its impact on the broader ecosystem. A quick disclaimer: we'll be discussing several first- and third-party vulnerabilities in this talk. Everything you see here today is patched. We will not be dropping zero-days, unfortunately.

All right. So, how are digital signatures validated in the real world? A digital signature is used to verify that a message, document, or piece of software comes from a specific sender and hasn't been altered during transmission. This is done by creating a hash of the message and then encrypting that hash with the sender's private key. A digital certificate, on the other hand, is a digital document issued by a trusted third party known as a certificate authority, or CA.
It contains the public key corresponding to the sender's private key, as well as information about the identity of the sender. In order to validate a digital signature, the recipient needs the sender's public key. The problem is, if the sender simply sent their public key along with their message, there would be no way for the recipient to know for sure that the public key really belongs to the sender. By containing the sender's public key and being digitally signed by a trusted third-party CA, a digital certificate confirms that the public key really does belong to that sender.

Root certificate authorities are responsible for the identity verification of individuals and organizations requesting a certificate. For digital signatures to work in practice, we need organizations we can rely on to establish a chain of trust for digital certificates. Who verifies the root CAs? They come pre-installed. For example, in the screenshot below, you can see the Windows trusted root store. These are the authorities that ship with the operating system and provide a reference point for digital signature validation.

So how is a digital certificate verified? Through the chain of trust. Remember the trusted root CAs that come with your machine? We can verify whether a digital certificate is legitimate by checking if it chains up to one of these trusted CAs. In practice, the root CAs verify the intermediate CAs, which in turn verify your end-entity or server certificate, allowing us to determine the legitimacy of a signed digital message.

So let's go through a simplified example of what verifying an executable looks like on Windows under the Authenticode specification. First, we need to generate an Authenticode hash, or digest. We start by hashing the PE headers of the executable. Note that some fields, like the security directory entry and the checksum, are intentionally excluded.
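Putting these hashing steps, including the section sorting described next, into a short sketch. Everything here is a simplified stand-in rather than a real PE parser: `header_bytes` is assumed to already exclude the checksum and security directory entry, and sections are plain `(file_offset, contents)` tuples.

```python
import hashlib

def authenticode_style_digest(header_bytes, sections, extra_data=b""):
    """Hypothetical sketch of Authenticode-style digest construction."""
    h = hashlib.sha256()
    h.update(header_bytes)  # PE headers, checksum/security dir excluded
    # Sections are hashed in ascending file-offset order, regardless of
    # their order in the section table.
    for _, contents in sorted(sections, key=lambda s: s[0]):
        h.update(contents)
    # Any trailing data before the security directory is hashed last.
    h.update(extra_data)
    return h.hexdigest()

# Reordering the section table must not change the digest:
d1 = authenticode_style_digest(b"HDR", [(0x400, b".text"), (0x600, b".data")])
d2 = authenticode_style_digest(b"HDR", [(0x600, b".data"), (0x400, b".text")])
assert d1 == d2
```

The point of the sort is determinism: the digest depends on where section contents live in the file, not on how the section table happens to be ordered.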
Next, we sort every PE section by file offset in ascending order and hash their contents. If there's any extra data after the PE sections and before the security directory, it's added to the hash as well. Once we calculate the hash, we can extract the encrypted digest from the security directory, which contains the Authenticode signature. We decrypt this digest and compare it with the hash we calculated. At this point, we've validated that the Authenticode signature is valid, and the last thing we need to do is verify the certificate and its chain of trust: does the certificate chain up to a certificate authority we know about? And that's about it, at least from a high level.

Let's talk a little bit about how one can go about attacking digital signatures. A quick history recap. Since 1996, the use of the MD5 algorithm has been discouraged due to its inherent weaknesses. In 2004, we saw the first publication of an MD5 collision: researchers were able to generate two distinct files that resulted in the same MD5 hash. In 2008, researchers abused known MD5 weaknesses to generate a malicious certificate authority with a valid chain of trust. Using a chosen-prefix collision, they were able to create a rogue intermediate CA with the same MD5 hash as an end-entity certificate, in other words, something like an SSL certificate. In 2011, we saw the breach of the DigiNotar CA, and in 2012, the Flame malware abused chosen-prefix collisions like the 2008 rogue CA attack to generate a fake Microsoft CA.

So what types of attacks are relevant to digital signatures? We have three broad categories that are generally applicable to a lot more than just signatures. Let's break these down. First, we have memory corruption issues. These are your classic out-of-bounds read, out-of-bounds write, or stack overflow vulnerabilities that often arise from the mishandling of untrusted data. How do we find them?
Typically, manual or guided analysis as well as fuzzing are good strategies. How do we fix them? One of the best pieces of advice I can give is to minimize your attack surface: limit the code that processes untrusted data. You can also use various mitigations or memory-safe programming languages like Rust to substantially reduce your risk of these issues.

Next, we have logic flaws. These are highly context-specific issues that vary by application. How do we find them? Manual or guided analysis; these issues require a decent understanding of the application's design and intended usage. For example, look for differences between a design document and the actual implementation. How do we prevent them? Again, minimizing the code that processes untrusted data is a good place to start. Having a test suite of expected outcomes is another good way to prevent regressions and validate design assumptions.

We also have cryptographic flaws. These are different from the implementation issues we just discussed in that they exist in a cryptographic algorithm itself. How do we find them? Typically this involves mathematical analysis of algorithms to ensure properties like pre-image resistance, second pre-image resistance, and strong collision resistance. How do we prevent them? For most of us, the golden rule is: don't roll your own crypto, and don't deviate from the specification you're implementing.

Writing cryptographic systems is challenging due to their inherent complexity. Cryptographic implementations aren't isolated; they interact with a multitude of other systems, protocols, and software. Patching implementation issues can get complicated as well. For example, one reason an insecure feature like MD5 is difficult to get rid of is that if you simply prevented its use entirely, you might break legacy systems and applications. Memory corruption is better understood; logic flaws are much harder to prevent, as they are context-specific.
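Before moving on to certificate types, here's a toy recap of the sign-and-verify primitive from the start of the talk: hash the message, transform the hash with the private key, and invert it with the public key. The key below is the classic textbook RSA example (p = 61, q = 53) with no padding scheme, hopelessly insecure and for illustration only.

```python
import hashlib

# Textbook RSA toy key (p=61, q=53): n = p*q, e public, d private.
# Illustration only -- never use parameters like this in practice.
n, e, d = 3233, 17, 2753

def sign(message: bytes) -> int:
    digest = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(digest, d, n)  # "encrypt" the hash with the private key

def verify(message: bytes, signature: int) -> bool:
    digest = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(signature, e, n) == digest  # recover the hash with the public key

sig = sign(b"hello")
assert verify(b"hello", sig)                # genuine signature checks out
assert not verify(b"hello", (sig + 1) % n)  # a tampered signature does not
```

Note that nothing in this primitive says who the public key belongs to; that binding is exactly what certificates and the chain of trust provide.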
Let's review the different types of signing certificates and the minimum requirements for obtaining them. First, we have regular SSL certificates, used to secure your connection with a web server. At a minimum, you need to prove that you own the domain you're requesting the certificate for. This can involve adding a DNS record or serving a provided file over HTTP. S/MIME certificates are used for securing email communications. Like SSL certificates, you need to prove that you own the email address you're requesting a certificate for, which can include verifying the identity of yourself or your organization. We also have code signing certificates. These are used to maintain the integrity of your software. Unlike S/MIME certificates, the bar for obtaining one is substantially higher; as we'll soon discuss, you will almost always need to verify your identity or the legitimacy of your organization. Finally, we also have document signing certificates. These are in a bit of a gray area, as they're frequently interchangeable with S/MIME or SSL certificates and have similar requirements.

The requirements for different types of certificates can vary greatly. Let's break down the different types of validation that certificate authorities can perform. With domain validation, or DV, you need to prove that you control a given domain. This is a pretty low bar for verification, so there's a higher risk of abuse; all you need to do is put up a file on your web server. Organization validation is where we get into a moderate level of verification. During this process, you need to prove the legal and physical existence of your organization. This is the bare minimum for code signing certificates and other sensitive digital use cases. Extended validation is one of the highest levels of validation a CA can perform. It's everything you need for organization validation and more. For example, you often need to show that your business is legitimate and not a shell company.
This can include face-to-face verification. When I started my research into digital signature implementations, the differences between certificate requirements caught my eye. In our context, having a digital signature alone is not sufficient: we need to verify not only that a signature is cryptographically valid but also that it originates from a trusted source. These two diagrams give an overview of the extended validation versus domain validation process. The question I had was: what prevents an attacker from abusing a certificate that has only proven domain ownership for purposes that require a higher level of verification, like code signing?

Now, let's get into the fun stuff. What defines a certificate's purpose? What distinguishes an SSL certificate from a code signing certificate? Often, it's the Extended Key Usage field present in most end-entity certificates. EKUs specify what a certificate is allowed to be used for. As an example, the image on the right shows this field from an SSL certificate. The server and client authentication usages mean that the certificate is allowed to be used to authenticate a server or a client.

But what actually verifies these EKUs in practice? When you sign digital data, the utility you use can impose restrictions. For example, you may receive an error if you try to sign an executable without the code signing EKU present on the certificate you're using. But the restrictions these tools impose are not what matters, because they run before the attack is actually performed. As we'll soon see, if I, as an attacker, got an error related to the intended purpose of my certificate, there's nothing stopping me from modifying the tool to bypass this check. So, how do we identify vulnerable implementations that fail to validate a certificate's intended purpose? First, we started by identifying some basic criteria.
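Concretely, the receiving-side check whose absence we went hunting for is small. A hypothetical sketch follows: the list-of-OIDs certificate model is a stand-in for a parsed X.509 EKU extension, and 1.3.6.1.5.5.7.3.3 is the id-kp-codeSigning OID from RFC 5280.

```python
# id-kp OIDs from RFC 5280; the list-of-strings model below is a
# hypothetical stand-in for a parsed X.509 EKU extension.
CODE_SIGNING = "1.3.6.1.5.5.7.3.3"
SERVER_AUTH = "1.3.6.1.5.5.7.3.1"
CLIENT_AUTH = "1.3.6.1.5.5.7.3.2"

def allowed_for_code_signing(ekus):
    # Strict policy: the certificate must explicitly carry the code signing
    # EKU. (RFC 5280 treats an absent EKU extension as unrestricted; being
    # lenient about purpose is exactly what enables this attack.)
    return ekus is not None and CODE_SIGNING in ekus

ssl_leaf = [SERVER_AUTH, CLIENT_AUTH]  # typical SSL/TLS leaf certificate
signing_leaf = [CODE_SIGNING]          # proper code signing certificate
assert not allowed_for_code_signing(ssl_leaf)
assert allowed_for_code_signing(signing_leaf)
```

The check is a few lines; the vulnerabilities in the rest of this talk come from verifiers that never perform it at all.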
The Microsoft Security Response Center is interested in protecting the entire ecosystem, not just first-party implementations. We ended up looking at a variety of file formats that leverage digital signatures. Most frequently, this meant code signing, because of its bare-minimum organization validation requirement. For testing, we also generated an SSL certificate, which only required proof of domain control. In the next few slides, we'll review the signing tools relevant to the file formats we're interested in and modify them to remove any client-side checks. Remember, modifying our signing tools comes before any attack in our environment; what matters is the receiving end and how it handles EKUs.

To start, I looked at Microsoft's SignTool, which lets you sign over 25 unique file extensions. This utility is most often used for Authenticode and is included with the Windows SDK. First, I performed a sanity check and tried to sign an executable with an SSL certificate. As you can see, SignTool by default does validate the EKUs of your code signing certificate, so we need to get around this. Using IDA Pro, I was able to quickly find the function responsible for the check by looking for the EKU filter error string. The function was conveniently named FilterCertificates. I patched the function to return immediately and avoid filtering entirely. The next time I tried signing my executable, it worked without a problem.

SignTool isn't the only utility relevant to this project. The Manifest Generation and Editing Tool, or Mage, is used to create and modify application and deployment manifests. Part of this tool is the ability to sign manifests; we'll discuss how these manifests are used in a later slide. Like SignTool, Mage performs EKU verification when signing a manifest. Unlike SignTool, Mage is written in C#. How can we patch its EKU check?
I used dnSpy, an older .NET assembly editor that allows you to both decompile and modify C# applications. I found the responsible method, CanSignWith, by looking for the relevant error message. Using dnSpy, I modified this method to always return true, bypassing the EKU check. As expected, this modification allows you to sign manifests with an unrelated certificate.

Now that we've prepared our test data signed with our SSL certificate, let's try it against real-world Authenticode implementations. All right, some background. On Windows, the primary API used to verify the trust of supported objects is WinVerifyTrust. This function abstracts the job of signature validation to subject interface packages and trust providers. SIPs are responsible for the format-specific verification of digital signatures. For example, a portable executable stores digital signatures in a different format than a PowerShell script; the key is that there's a SIP to handle both formats. In this talk, we'll only briefly review this architecture, but if you're interested in understanding the design in detail, I would strongly recommend reading Subverting Trust in Windows by Matt Graeber.

Trust providers don't care about the SIP you use; they are designed to perform format-agnostic trust verification actions. Common providers include GenericCertVerify, which verifies a certificate; GenericChainVerify, which verifies a certificate's chain of trust; and GenericVerify, which verifies a file or object according to the Authenticode specification. The GenericVerify provider is most commonly used for verifying Authenticode formats. As a simple test, I wrote a small application to verify an executable's signature using WinVerifyTrust. Unfortunately, or fortunately, depending on how you look at it, it didn't work. I received an error stating that the certificate is not valid for the requested usage. Remember the GenericVerify provider?
It turns out that one of the basic requirements of this provider, which is used for almost all Authenticode formats, is that the code signing EKU be present. This meant that most of the formats verified through this architecture are protected by default. Although Windows may have gotten it right, I was curious about other libraries that validate the authenticity of Authenticode applications. What about workloads that need to verify these signatures on other operating systems? Unfortunately, as we'll soon see, several Authenticode implementations outside of Windows itself were vulnerable to this attack.

The Mono project is an open source equivalent to Microsoft's .NET Framework. Microsoft sponsors the project, and Mono is frequently used in cross-platform applications that want to use .NET. Mono supports Authenticode signing and verification with its AuthenticodeDeformatter and AuthenticodeFormatter classes. How can we test Mono's Authenticode implementation? Fortunately, it comes with a chktrust tool that allows us to verify the signature of an executable. Unfortunately, due to a lack of EKU validation, Mono's AuthenticodeDeformatter class is vulnerable to the attack. This issue isn't specific to Mono's simple chktrust testing utility; any Mono application that uses this class to verify untrusted executables is potentially vulnerable.

Trail of Bits is a security consulting firm that helps organizations engineer secure applications. They maintain quite a few open source tools, and one of those projects is uthenticode, a cross-platform library that allows you to verify the digital signature of a portable executable. The intent is to provide functionality similar to APIs like WinVerifyTrust on non-Windows platforms. Unlike WinVerifyTrust, however, uthenticode only verifies an Authenticode signature; it doesn't verify a certificate's chain of trust.
The chain of trust isn't relevant to EKUs, though: you can verify that a leaf certificate is allowed to be used for code signing without verifying that it was issued by a trusted root certificate authority. Unfortunately, uthenticode was vulnerable to the EKU attack, allowing attackers to sign code with an unrelated certificate. Fair disclaimer: the real-world implications of the EKU attack on uthenticode are limited. According to the authors I spoke with, uthenticode is frequently used in CI/CD pipelines for basic Authenticode validation. Of course, given its open source nature, it's hard to quantify its use with certainty.

I also wanted to review a bonus example. When I was reviewing uthenticode's implementation of signature verification, I noticed that it deviated from the Authenticode specification. On the left, we have the same diagram from the earlier slide showing a high-level overview of how portable executables are verified under Authenticode. On the right, I've modified the diagram to reflect uthenticode's implementation of the specification. According to the Authenticode specification, you're supposed to hash the contents of an executable by concatenating the hashes of the PE header, the sorted sections, and any extra data. uthenticode ignored the PE sections. Instead, it hashed the PE header and the rest of the file, excluding the security directory. From a practical stance, this meant it would take everything before the security directory and everything after the security directory, combine the two, and calculate a hash on the result. On the earlier slide about attacking digital signatures, I mentioned that you shouldn't ever roll your own crypto or deviate from the specification. I was curious: what are the implications of this deviation? Unfortunately, it led to a pretty big flaw.
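The deviation is easy to express in code. In this hedged sketch (the byte layouts are made up), the non-compliant digest hashes the raw file minus whatever range the security directory entry claims, so bytes inside that claimed range can change freely without affecting the result:

```python
import hashlib

def skip_secdir_digest(file_bytes, secdir_offset, secdir_size):
    # Non-spec-compliant: hash everything before and after the claimed
    # security directory, ignoring the PE section structure entirely.
    remainder = file_bytes[:secdir_offset] + file_bytes[secdir_offset + secdir_size:]
    return hashlib.sha256(remainder).hexdigest()

# Hypothetical layout: 3 header bytes, 16 bytes of "section" content, a tail.
benign    = b"HDR" + b"\x90" * 16 + b"TAIL"
malicious = b"HDR" + b"\xcc" * 16 + b"TAIL"  # same layout, patched code bytes

# If an attacker points the security directory entry at the patched region,
# this digest never sees the change...
assert skip_secdir_digest(benign, 3, 16) == skip_secdir_digest(malicious, 3, 16)
# ...while any digest that actually covers the section contents does:
assert hashlib.sha256(benign).hexdigest() != hashlib.sha256(malicious).hexdigest()
```

The security directory's location and size come from attacker-controlled header fields, which is why trusting them to define the skipped range is dangerous.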
I found that by embedding the security directory within a PE section, I could modify the code of an executable without changing its Authenticode hash as computed by uthenticode. The problem was that since uthenticode skips over the security directory, it would hash everything up to the modified PE section, ignore the security directory containing malicious code, and hash everything after. You could move the security directory around, and the Authenticode hash as calculated by uthenticode would remain unmodified. I was able to leverage this attack to replace the entry point of a legitimate Microsoft executable with malicious shellcode. This issue is not related to EKUs; it was simply an extra implementation flaw that I stumbled upon. But it highlights the importance of sticking to the specification, and how one small mistake when implementing digital signatures can have a devastating impact.

Let's discuss ClickOnce. ClickOnce is a deployment technology that allows developers to create self-updating applications that can be installed or run with minimal user interaction. Under the hood, ClickOnce deployments are made up of application files, a deployment manifest, and an optional application manifest. The cool part of ClickOnce is that you can install or run applications from a website. The picture shows what a ClickOnce prompt can look like, which can be triggered by a link in a browser that supports ClickOnce. ClickOnce applications can be signed automatically with Visual Studio or manually with Mage; the diagram below demonstrates this. First, we take an unsigned ClickOnce deployment. We sign the application manifest in the application files directory, then the deployment manifest in the application files directory, and finally, optionally, the deployment manifest in the root directory. This gives us a signed ClickOnce deployment. Once the deployment was signed, it was uploaded to a web server for testing.
The picture below shows what this ClickOnce prompt can look like in Microsoft Edge. It asks: do you want to open the Test ClickOnce application from the domain you're visiting? The page itself can have any content you want. Unfortunately, ClickOnce was impacted by this attack. As you can see in the picture below, the publisher shows as the domain I had registered an SSL certificate for. If you tried to use a certificate that was not issued by a trusted authority, it would show an error saying that the certificate is not trusted. But because ClickOnce didn't actually validate EKUs, it considered the certificate I signed my deployment with as legitimate.

What about other uses of digital signatures that we didn't cover? I want to talk a little bit about some related work. In 2014, researchers from the University of Texas at Austin and the University of California released Using Frankencerts for Automated Adversarial Testing of Certificate Validation in SSL/TLS Implementations. As the name suggests, the researchers generated mutated certificates with different combinations of extensions and constraints. The idea was to look for inconsistencies between implementations. The paper focuses exclusively on SSL/TLS implementations: OpenSSL, NSS, et cetera. Of relevance to this talk, the researchers found several libraries that failed to validate the appropriate EKU. Instead of abusing malformed certificates against digital signatures on executables, they looked at abusing them in the browser. If you're interested in reading more, I'd recommend you check out the link below. We additionally verified that common libraries like OpenSSL validate these EKUs appropriately. That said, if you use these libraries solely for verifying the authenticity of a certificate, without specifying a purpose and where there's no default, you can still be exposed to an EKU attack.
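As a standard-library illustration of purpose-aware defaults, Python's ssl module (a wrapper over OpenSSL) configures certificate verification differently depending on the purpose you declare. This sketch only inspects the resulting contexts; it doesn't open a connection.

```python
import ssl

# Purpose.SERVER_AUTH: "I'm a client; verify the server's certificate."
# The default context requires a valid peer certificate and checks the
# hostname against it.
client_ctx = ssl.create_default_context(ssl.Purpose.SERVER_AUTH)
assert client_ctx.verify_mode == ssl.CERT_REQUIRED
assert client_ctx.check_hostname is True

# Purpose.CLIENT_AUTH: "I'm a server; this context is for authenticating
# clients." Hostname checking doesn't apply when validating clients.
server_ctx = ssl.create_default_context(ssl.Purpose.CLIENT_AUTH)
assert server_ctx.check_hostname is False
```

Declaring the purpose up front is what lets the underlying library apply the right trust anchors and checks; a verifier that never states a purpose is where EKU validation tends to fall through the cracks.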
For example, in OpenSSL, you can specify a context for X.509 verification, as seen in the picture below, which includes EKU defaults. If I set the context saying I'm an SSL client, OpenSSL has the correct defaults to validate that the server authentication EKU is present on the server certificate, and vice versa if you say that you're an SSL server.

Let's review some of the takeaways and techniques. In this project, we discovered numerous implementations of digital signatures that failed to validate the Extended Key Usage field. This would allow an attacker to abuse certificates with a substantially lower bar for identity verification in important contexts, like code signing. The diagram below demonstrates the attack. First, an attacker buys or generates a low-cost certificate; whether it's SSL, S/MIME, etc. is up to them. Next, they sign unrelated data with the mismatched certificate. This could include modifying the signing tools to allow it in the first place. Then, vulnerable applications fail to detect the invalid certificate. As a result, users can receive a false sense of trust, important access controls can be bypassed, and more, depending on the context.

So how do you protect your implementation? Number one, always validate EKUs in your application to ensure that certificates are used for their intended purposes. Use libraries that properly implement EKU checks. Trust, but verify, the key principles in your application's design: every single implementation vulnerable to the attack we covered today was supposed to check EKUs, but no one verified that they did. Implement regular security testing, especially in components using cryptography for crucial features. Your implementation is much more likely to be vulnerable than the cryptographic algorithm itself.

What is Microsoft doing to protect customers? Number one, we've released patches for all first-party issues.
The third-party vulnerabilities we've discussed were also fixed, and we're continuing to work with impacted third-party vendors to address their implementations. We also continue to explore issues in digital signature implementations. Below is a list of relevant CVEs that were issued as a result of this work. Thanks for coming to my talk. Now is the time for questions, as time permits.

So the question was: would certificate pinning help in this context? It can help in the sense that if there's only a certain subset of trusted authorities that can issue certificates your application accepts, it becomes much harder for an attacker to generate a certificate, like an SSL certificate, under that authority. So let's say you're using your own CA: yes, that can definitely protect you, as long as an attacker can't obtain a certificate under that CA. But if it's something like DigiCert, some specific trusted authority that an attacker may be able to obtain certificates from, then it doesn't help much in that context. If there are any other questions, feel free to come up to the stage, and I'd love to discuss these issues with you.