Hello, DEF CON. Welcome to "Your House is My House": the use of offensive enclaves in adversarial operations. My name is Dimitriy Snezhkov and I'm part of the Protiviti Attack and Penetration Testing Team, where I have a chance to do tooling, offensive research, and automation. Shout out to my team at Protiviti for making that happen. Today we're going to talk about SGX technology as it applies to offensive operations. Being part of an offensive team tasked with testing, I sometimes find myself, and we as a team find ourselves, on unknown boxes. Sometimes we need to leverage the technology that exists there to withstand the onslaught of EDR and other defensive inspection. SGX was a curious case here: a developer was using SGX to protect trusted credentials, so the box was instrumented with SGX enclaves, and we thought, why not use them? How can we use them to further our goals of bringing in payloads and handling secure communication for us? But first things first: SGX is a technology developed by Intel Corporation essentially to protect specific code or data from disclosure or modification by adversarial parties. An adversarial party, as defined by the SGX threat model, is anything running outside the enclave, including code more privileged than ring 3: the operating system, hypervisors and virtual machine managers, the BIOS, all the things that work close to the hardware. And so SGX enclaves were born: a technology that tries to solve the problem of protected areas of execution and increased security on platforms that are considered compromised in all the context that runs around them. So, as we've defined it, an SGX enclave holds trusted code, and it's linked into an application. The application then runs in two modes, split-personality style: one is the untrusted part of the code, the other is the trusted part.
The trusted, or safe, part of the code runs in the SGX enclave, which we construct. We interact with the underlying bootstrapping and orchestration platform to reach into the trusted area and execute very specific operations from the untrusted memory, which is our application. That's made possible by SGX introducing new CPU instructions for switching in and out of the trusted area, where the enclave memory is encrypted with a CPU-held key. This technology is quite prevalent in high-security environments. It lives wherever Intel Core processors of the sixth generation and later live: on laptops, business workstations, servers in data centers, but also in cloud virtual machines; namely, we found it on Azure DC-series confidential computing machines. So if we find ourselves as operators on those machines, we might be able to use some of that protection for our own purposes. Our offensive goals here are twofold. The first is to understand the technology and how to construct an application so we can actually invoke SGX and use enclaves to store our data, payloads or otherwise. The second is to use the SGX SDK to secure communications with our C2 without revealing the keys we use for payload encryption, and in the process have the EDR divert attention from us by splitting the deployment model into several components that are never fully assembled or introspectable at once. So in this case we're going to do Windows as the example. We're going to create a system called Xclave: a method of communication between our cradle and C2 to securely load payloads, store them in the enclave on the box, and hide the encryption algorithm and the keys that would otherwise travel back and forth in clear text.
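The gated-entry model described above can be sketched at the protocol level. This is not SGX SDK code; it's a plain-Python illustration (all names hypothetical) of the core idea: untrusted code can only reach enclave-held state through a fixed table of entry points, the way ECALLs gate entry into the trusted region.

```python
# Toy model of the trusted/untrusted split (illustration only, not SGX code).
# The "enclave" keeps its secret in a closure; the host can only go through
# the registered entry points, mimicking how ECALLs gate entry into the enclave.

def make_enclave():
    secret_key = b"\x13\x37" * 8          # lives only inside the "enclave"

    def ecall_seal(data: bytes) -> bytes:  # an allowed entry point
        ks = secret_key * (len(data) // len(secret_key) + 1)
        return bytes(b ^ k for b, k in zip(data, ks))

    # The dispatch table is the only surface exposed to untrusted code.
    return {"ecall_seal": ecall_seal}

enclave = make_enclave()

def host_call(name, *args):
    # Untrusted host: may invoke only whitelisted entry points, by name.
    if name not in enclave:
        raise PermissionError(f"no such ECALL: {name}")
    return enclave[name](*args)

sealed = host_call("ecall_seal", b"payload")
assert host_call("ecall_seal", sealed) == b"payload"   # XOR is its own inverse
```

The point of the sketch is the shape, not the crypto: the host never sees `secret_key`, only the results of the narrow calls it is allowed to make.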
Windows is the example in this case, but the Linux side is much the same conceptually, although the implementation differs a bit. Hopefully we'll have fun going through these exercises. One thing to mention: this talk is not about SGX vulnerabilities or an SGX deep dive. We're going to touch on some of the relevant parts, but refer to other great talks on that matter. The SGX components that will be interesting for us: first, the platform software (PSW) that gets installed to interact with enclaves. If we drop onto a box that has the appropriate type of CPU and we're on a Microsoft Windows machine, the operating system will already have the driver, because it arrives through the standard update process. But you could have other platform software if you're operating in an environment where developers use SGX enclaves to help make their applications more secure. So there are the drivers, and there is orchestration software such as the attestation service, which takes care of signing and verifying the enclaves to the owner and to the system, i.e. the CPU. The second component is the SDK, which we will use to develop the application that utilizes enclaves. There are two SDKs we're going to look at: the Intel SGX SDK for the most part, but Open Enclave is also available for our purposes. The outcome of our efforts will be an application, or a set of applications, with trusted and untrusted parts: the trusted part being the SGX enclave, and the untrusted part being all the bootstrapping code that lets us share information with our C2 and process payloads. Then we'll go into how the high-level mapping of calls from the untrusted code into the trusted area happens, and how we can leverage some of the primitives in the SGX SDK, such as configuration, signing, and loading, for our purposes.
Specifically, the problem of payload transfer can be distilled to a few things. First of all, we do not want to load payloads in the clear; we always want to protect them. Commonly we protect them with some XOR or AES key, but the problem is that the key itself may be recoverable from memory, because a lot of the time it's a shared key. It is inspectable, if not in real time in a sandbox, then later in the forensic lab. So if we're running a long-term campaign, we want to make sure we protect our keys in memory. The other thing is that the algorithm itself can be reversed, so our algorithm can become known, not because we mind sharing the algorithm, but because it may point to weaknesses in our communication, which can then be introspected and intercepted. So the other goal, beyond storing payloads, is to use SGX to secure communication out to the C2. To do that, there are a few alternatives. There are crypto libraries that come with the PSW (the platform code) and the SDK: the sgx_tcrypto library. It is fairly limited in what it does, because its purpose is to facilitate attestation and session management; it's not general-purpose crypto, but we can use parts of it to construct what we want. We could also bring third-party encryption to work with it, for example OpenSSL or the WolfSSL library. But the problem is target availability: we do not know if those library runtimes will be present on the target. Plus, we want to stay away from loading things from disk as much as we can and operate in memory, and a lot of the time it's too heavy or impossible to load those libraries in memory. The third possibility, given the limited API that we have inside the SGX enclave, the trusted area, is to roll our own, which is probably discouraged in this exercise anyway. And I've mentioned the limited API access inside an SGX enclave.
The reason is that, because its very purpose is to protect the code inside it, the enclave is devoid of support for syscalls, and it has very limited I/O in and out, mostly for state preservation, but nothing else. So let's see what we can do. The first approach is to use tcrypto and see what we can build with it. Upon research, we came up with three things we can do with the crypto in the SDK. We can generate an RSA key pair, a public and a private key. We can encrypt with the public key and sign with the private key. And we can use a routine that works on AES symmetric keys to encrypt something of value inside the enclave and potentially transfer that piece of code or data out of the enclave into the untrusted area. So the idea is to create an application that does just that. The first step is to generate the RSA key set inside the trusted PRM (Processor Reserved Memory), inside the enclave, hand the public key out to our C2, and have it stored there. The C2 then generates a symmetric key, encrypts it, and sends it back to us; inside the trusted PRM we store the symmetric key, and now we have a shared symmetric key without ever leaking it. We can then generate payloads on the C2 side and keep transferring them into our trusted PRM without worrying about inspection, algorithm disclosure, payload disclosure, or key disclosure. So it's a three-step process: first we generate the public/private key pair and send the public key to the C2; the C2 then encrypts a symmetric key with it and sends that to us; and we store it in the trusted component, which decrypts the key using the RSA private key we kept from the first step.
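The three-step exchange can be simulated end to end outside of SGX. Below is a minimal sketch, assuming nothing about the SDK: it uses textbook RSA with tiny hardcoded primes and an XOR "cipher" purely to show the message flow (enclave keypair → public key to C2 → wrapped symmetric key back). The real system would use sgx_tcrypto's RSA and AES routines, not this toy math.

```python
# Protocol-flow simulation of the key exchange (toy crypto, illustration only).
# Step 1: "enclave" generates an RSA pair and exports only the public key.
# Step 2: "C2" generates a symmetric key and wraps it under that public key.
# Step 3: "enclave" unwraps it; both sides now share the symmetric key.
import secrets

# Textbook RSA with small fixed primes -- NOT secure, just shows the flow.
p, q = 61, 53
n, e = p * q, 17
d = pow(e, -1, (p - 1) * (q - 1))     # private exponent stays "in the enclave"

public_key = (n, e)                    # step 1: shipped to the C2

# Step 2 (C2 side): wrap each byte of a fresh symmetric key under the public key.
sym_key = secrets.token_bytes(16)
wrapped = [pow(b, e, n) for b in sym_key]

# Step 3 (enclave side): unwrap with the private exponent held inside.
unwrapped = bytes(pow(c, d, n) for c in wrapped)
assert unwrapped == sym_key            # shared secret established, never in clear

# Payload transfer: C2 encrypts with the shared key, enclave decrypts.
def xor(data, key):
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

payload = b"stage two payload"
assert xor(xor(payload, sym_key), sym_key) == payload
```

Only the public key and the wrapped blob ever cross the untrusted boundary; the private exponent and the recovered symmetric key never leave the simulated trusted side.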
Then we just share the symmetric key between the two ends, and that's how we achieve secure communication. The components we want in this construction: we thought of splitting it three ways. First, the application, which is inspectable by defense and loaded from disk; it's your implant, cradle, or loader. Then there is the bridge between the enclave and that loader, which facilitates and brokers interaction: it takes data from one and passes it to the other. It's also a middleman that can be taken out of the equation after first load, so the EDR never sees the full picture; the bridge can come in as a memory-loaded module that brokers communication between the enclave and the app at runtime. The bridge is also assumed inspectable. Finally, there is the enclave, which is assumed obfuscated; we'll have some notes on that later in the limitations section. The enclave is where we store our keys and our algorithm. It is loaded from disk, but it's also a secure library that may not be introspectable. So we start building, and we come up with the Xclave system, which we're going to demo; then we'll come back to talk more about its construction, limitations, and everything else. Let's take a look at Xclave, its components, and its operation. Here we have a victim machine with an application, which is an agent or implant. There's a bridge DLL which facilitates interaction with the enclave. It may or may not be on disk; it may come directly from the network and be loaded into memory that way. And, as we mentioned, there is a trusted piece of code that runs in PRM. It makes sense to put the code in perspective in order to explain those components. The application finds the bridge, and the bridge function, which is exported, is located and invoked.
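The bridge's role as a disposable, memory-loaded broker can be sketched in miniature. This is a hypothetical Python analog, not the actual Xclave bridge: the "bridge" arrives as source over the wire, is loaded without touching disk, and exposes a single exported function the application resolves by name, roughly what LoadLibrary/GetProcAddress against a memory-mapped DLL would do on Windows.

```python
# Toy analog of the memory-loaded bridge (illustration only).
# The bridge code never touches disk; it is loaded straight from a buffer
# and exposes one exported broker function for the app to resolve by name.
BRIDGE_SOURCE = """
def bridge_forward(enclave_call, *args):
    # Broker: take data from the app, pass it to the enclave, return the result.
    return enclave_call(*args)
"""

bridge = {}
exec(compile(BRIDGE_SOURCE, "<memory>", "exec"), bridge)   # load without disk I/O

# Application side: resolve the export and broker a call into a stand-in enclave.
forward = bridge["bridge_forward"]
result = forward(lambda x: x.upper(), b"payload")          # stand-in "ECALL"
assert result == b"PAYLOAD"
```

Because the broker exists only in memory and can be discarded after first use, a defender inspecting the application or the enclave alone never sees the full assembled picture.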
The bridge itself maps, through the EDL, to the enclave calls. The EDL (Enclave Definition Language) file is essentially a mapping, a matrix of trusted calls that we can invoke inside the enclave and untrusted calls that the enclave can make back out. The enclave itself is the trusted code that does the processing, the encryption, and everything else we need to keep secure. And on the other side there is the C2, which has to match the crypto parameters one-to-one so it can successfully decrypt and communicate with the Xclave that resides on the victim machine. Let's take a look at how that works. We have two screens: one is the victim machine, where all these components are deployed, and the other is the C2. We start the C2; it listens on a port and responds to communication. We start the application, and the first thing it does is try to create an enclave. That's the standard procedure of creating the memory mapping and launching things into existence. All the checks happen at this point: are we running on a machine that supports SGX? Are we able to create the enclave? What parameters and permissions allow us, or don't allow us, to do this? Then it tries to generate an RSA key pair for communication. Once this is done, the public and private keys are available: the private key gets stored in the enclave, and the public key gets shared through the bridge to the application, which connects to the C2 and asks it to store this public key on its side. Once this happens, the C2 carries out the task: it does its processing, generates the symmetric key for communication, and sends the response back to the application, which proxies it through the bridge into the enclave, where it is stored. And this is what we're doing here. The shared key is now available, and we are ready for one-to-one encrypted transfer of any payload that comes from the C2 into the enclave, again through the app and the bridge into the secure area.
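The trusted/untrusted mapping the bridge walks through is declared in that EDL file. A hypothetical sketch of what such an interface could look like, with function names and signatures invented for illustration (the real interface ships with the proof of concept):

```
// Hypothetical enclave.edl -- Enclave Definition Language (illustration only).
enclave {
    trusted {
        // ECALLs: entry points the untrusted side may invoke.
        public int ecall_generate_rsa_keypair([out, size=pub_len] uint8_t* pubkey,
                                              size_t pub_len);
        public int ecall_store_symmetric_key([in, size=key_len] uint8_t* wrapped_key,
                                             size_t key_len);
        public int ecall_store_payload([in, size=len] uint8_t* encrypted,
                                       size_t len);
    };
    untrusted {
        // OCALLs: the few calls the enclave may make back out.
        void ocall_log([in, string] const char* msg);
    };
};
```

The SDK's edger8r tool generates the proxy and bridge stubs from this file, which is exactly the surface the bridge DLL drives at runtime.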
And this is what's happening here: we request the payload, and the payload gets generated. In this case it's a very contrived example. It gets encrypted to match the mode and capabilities of the enclave. All that processing happens, and the payload travels back to the agent and ultimately to the enclave. The enclave, having the symmetric key, is now able to decrypt the payload. After this is done, the payload sits in clear text in the enclave, but it's protected from any kind of reachability by the defense, and the attacker can then work with it. Last but not least, once we've created the enclave, we can destroy it when we no longer need it, for whatever reason and whatever duration we wanted it. Once this is done, everything is cleaned up and we are ready to move on. Okay, so we saw a demonstration of how Xclave works. There are some assumptions and limitations to this. First of all, it's bad coding practice: we are weakening the enclaves; we are misusing them. But our observation is that while the technology can be used as-is, a lot of the time EDRs do not inspect enclaves. In our testing, we compiled in pre-release and debug mode and used the whitelisted test signing keys. In theory the signing requirements should prevent us from doing this in release mode, which is true, but the EDRs themselves do not actually make the leap of inspecting enclaves anyway. The other side of the story is that to do this properly, you need an attestation key provisioned by Intel and signed with its root key; then you can sign your enclave, and it becomes undebuggable. But in that case you're running into attestation, meaning an attribution issue. So we went the other route and asked: what can we do with the pre-release and debug versions?
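The demo step where the C2 encrypts "to match the mode of the capabilities of the enclave" matters because the SDK's symmetric primitive is AES-128-GCM (sgx_rijndael128GCM_encrypt: 16-byte key, 12-byte IV, 16-byte MAC), so the C2 must mirror those parameters exactly. The sketch below imitates only the interface shape of that call with an HMAC-SHA256 keystream; it is NOT real GCM and the helper names are invented for illustration.

```python
# Toy authenticated encryption mirroring the (key, IV, tag) interface shape of
# AES-128-GCM as exposed by sgx_rijndael128GCM_encrypt: 16-byte key, 12-byte
# IV, 16-byte tag. The cipher here is an HMAC-SHA256 keystream -- not GCM; it
# only illustrates that the C2 must match the enclave's parameters one-to-one.
import hmac, hashlib, secrets

def keystream(key, iv, length):
    out, counter = b"", 0
    while len(out) < length:
        out += hmac.new(key, iv + counter.to_bytes(4, "big"), hashlib.sha256).digest()
        counter += 1
    return out[:length]

def seal(key, iv, plaintext):
    ct = bytes(p ^ k for p, k in zip(plaintext, keystream(key, iv, len(plaintext))))
    tag = hmac.new(key, iv + ct, hashlib.sha256).digest()[:16]   # 16-byte MAC
    return ct, tag

def unseal(key, iv, ct, tag):
    good = hmac.new(key, iv + ct, hashlib.sha256).digest()[:16]
    if not hmac.compare_digest(tag, good):
        raise ValueError("MAC check failed")                     # tamper detected
    return bytes(c ^ k for c, k in zip(ct, keystream(key, iv, len(ct))))

key, iv = secrets.token_bytes(16), secrets.token_bytes(12)       # 128-bit key, 96-bit IV
ct, tag = seal(key, iv, b"payload from C2")
assert unseal(key, iv, ct, tag) == b"payload from C2"
```

If the C2 and enclave disagree on any of the key size, IV size, or tag handling, decryption inside the trusted area simply fails, which is why both ends are built against the same parameter set.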
So attested enclaves are supposed to be inspected, but in practice they often are not. And, as we mentioned before, once SGX is provisioned, the PSW services are installed, the platform software is there, and the tcrypto library of cryptographic primitives is present; that should let us live off the land once we arrive on an SGX-enabled machine. One note to help defenders understand what enclaves are present and how to find rogue ones: you need to work from signatures and identify non-approved SGX enclaves. The way you can do this is with the really nice sgxfun tool from Kudelski Security, which can parse an enclave DLL, show you the details of that enclave, and let you latch onto signing keys that you have not provisioned. So I'd like to thank everybody who has come to my talk. Here's the link to the proof of concept: the bridge library, the enclave, and the application which we've used in this presentation. Thank you very much.
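For the defender side, the hunting approach above can start from the enclave's SIGSTRUCT, the signed metadata structure every enclave carries. A minimal sketch follows; the magic bytes and offsets are my reading of the Intel SDM (verify them against sgxfun before relying on this): the SIGSTRUCT header begins with the fixed bytes 06 00 00 00 E1 00 00 00, and the signer's RSA-3072 modulus sits 128 bytes from the structure's start, so you can scan a binary for the magic and pull out the modulus to compare against your approved signing keys.

```python
# Sketch: locate embedded SIGSTRUCTs in a binary and extract each signer's
# modulus so it can be checked against an allowlist of approved enclave keys.
# Offsets per my reading of the Intel SDM (hedged -- cross-check with sgxfun):
#   HEADER magic starts with 06 00 00 00 E1 00 00 00
#   MODULUS: 384 bytes (RSA-3072) at offset 128 from the SIGSTRUCT start.
SIGSTRUCT_MAGIC = bytes.fromhex("06000000e1000000")
MODULUS_OFFSET, MODULUS_LEN = 128, 384

def find_signer_moduli(blob: bytes):
    moduli, pos = [], 0
    while (pos := blob.find(SIGSTRUCT_MAGIC, pos)) != -1:
        mod = blob[pos + MODULUS_OFFSET : pos + MODULUS_OFFSET + MODULUS_LEN]
        if len(mod) == MODULUS_LEN:
            moduli.append(mod)
        pos += 1
    return moduli

# Synthetic check: plant a fake SIGSTRUCT in a blob and recover its modulus.
fake_modulus = bytes(range(256)) + bytes(128)                  # 384 test bytes
blob = (b"\x00" * 64 + SIGSTRUCT_MAGIC
        + b"\x00" * (MODULUS_OFFSET - len(SIGSTRUCT_MAGIC))
        + fake_modulus)
assert find_signer_moduli(blob) == [fake_modulus]
```

Any modulus that comes back and is not on your list of provisioned signing keys is a candidate rogue enclave worth a closer look.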