Up on stage, we have a returning goon who wants to demystify some vulnerabilities and why they're mystifying. I don't know. I don't understand the words in front of me, but: reverse engineering the ETERNAL exploits. Who's excited about this? I see the room is getting pretty packed. Life! There's people alive in here. That's wonderful. Alright, with no further delay, I'm going to hand it over to zerosum0x0. Thanks, everyone, for coming. So, from a show of hands, how many people popped a shell with MS17-010 over the past year and a half? Yeah. How many people are finding it everywhere, right? How many people helped with the WannaCry and NotPetya cleanups, right? Yeah, yeah. Like I said, it's been a year and a half since these got introduced, and we're still finding them everywhere on our red team. It's a path to glory on our pen tests. Before I get started: this is top-secret, classified information. We don't know exactly what it is, but it's allegedly from the NSA, stolen from the NSA. So if you have a clearance... Francisco Donoso was at DerbyCon last year, he gave a talk on DanderSpritz, and he gave a disclaimer, so I thought I would give one too: everyone leaving the room right now is a fed, and you should keep track of their names. And you're all stuck for 45 minutes. I did have to cut a lot from this presentation that I wanted to go over, just because of time. The goon probably will hook me off stage in a little bit here, but hopefully we can get through it. So, if anyone's not aware, the Equation Group is allegedly the former Tailored Access Operations department at the NSA that wrote a bunch of exploits. They've never really gotten a lot of public credit, but just hacker to hacker, with all the politics and everything aside, and just looking at the technicals: it's a very talented team, and they deserve a lot of credit. And then the Shadow Brokers, we don't really know who they are. There's some evidence that they might be Russia.
They claim it's an inside job if you read their messages, but they came around and started dumping these exploits about a year and a half ago, and it went on over the course of a couple of years. They've been pretty quiet for the past year. So I'm going to try to get through SMB internals real quick, and then we can get into the ETERNAL exploits, which are all SMBv1 vulnerabilities. SMB was invented in 1983 by a guy named Barry Feigenbaum at IBM. He also worked on the NetBIOS RFCs and stuff like that. We originally saw it in a product from Microsoft called LAN Manager. It was later built into Windows, and pretty much all modern versions of Windows have SMB built in. And it's a very extensible protocol, so you can build things on top of it. That's where we get things like PsExec, running on top of the Distributed Computing Environment Remote Procedure Call, using SMB as a transport. So this is the packet layout you would expect to see from a normal, well-formed SMBv1. Normally you'll send a server message block request, and then if the server processes that request, it'll send you an SMB reply. Both requests and replies look very similar. They have a command, which is like the opcode; there are hundreds of commands in SMBv1. There's a flags field, which tells you: is this a request or a reply, are we talking Unicode, that kind of thing. If you're getting a response from the server and it wasn't able to process that SMB, there was an error, it will set an error number for you. There's a security features field, which is also where SMB signing is located. And then there's the user ID, tree ID, process ID, and multiplex ID, which we will talk a little bit more about later. SMB also has a parameter block: depending on what that opcode is, there's going to be a structure associated with it, and that structure generally ends up in the parameter block, so it's a fixed size depending on the opcode, usually.
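To make that header layout concrete, here's a sketch of packing the 32-byte SMBv1 header described above. This isn't the talk's code; field names follow the MS-CIFS spec, and the default flag values are just illustrative.

```python
import struct

def smb1_header(command, status=0, flags=0x18, flags2=0xC853,
                tid=0, pid=0xFEFF, uid=0, mid=0):
    """Pack a 32-byte SMBv1 header (little-endian, no padding)."""
    return struct.pack(
        "<4sBIBHH8sHHHHH",
        b"\xffSMB",        # protocol magic
        command,           # the opcode, e.g. 0x25 for SMB_COM_TRANSACTION
        status,            # error number set by the server on failed replies
        flags,             # request vs. reply, case sensitivity, etc.
        flags2,            # unicode, extended security, and so on
        0,                 # PIDHigh
        b"\x00" * 8,       # SecurityFeatures -- where SMB signing lives
        0,                 # reserved
        tid,               # tree ID
        pid & 0xFFFF,      # process ID (low word)
        uid,               # user ID
        mid)               # multiplex ID

hdr = smb1_header(0x25)    # an SMB_COM_TRANSACTION request header
```

The parameter block and data block the talk describes next would follow immediately after these 32 bytes.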
Then the data block is just arbitrary data associated with the command. So if you think about it this way: if you're creating a file, your command is create file or whatever, the parameter block will be the file name and attributes, and then the data block would be the file data. It's an oversimplification, but it works. So SMBv1 has dialects, which are basically DLC: the later the version you have, the more features you unlock. Pretty much all of these features have been available since the earliest versions of NT, so you don't have to worry about it. All of this dates back to the early 90s, before NT was officially released, or even the late 80s. So the main driver that all of these vulnerabilities are going to be in is the srv.sys driver, which is the SMBv1 driver. Those who have done low-level, highly concurrent networking will be familiar with this load-balancer-like pattern: you have producer threads taking the network traffic and consumer threads that are eating that traffic. So we're working with queues, from computer science: the first-in, first-out container. And this is because SMB is designed for speed. What you're actually producing and consuming are these things called work contexts. There are hundreds of SMBs, and all of them can be kind of pigeonholed into this mega C structure called a WORK_CONTEXT, and that's what's being processed. But the really important thing to note on this slide is that when the server receives an SMB request and goes to process a response for you, that SMB can be sent to the back of a queue multiple times while it's being processed. And there's two different types of queues: there's the normal queue coming off the network, and then, if something's going to take a really long time, it'll get sent to a blocking work queue.
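A toy model of that producer/consumer pattern, purely illustrative (the real srv.sys work queues are kernel structures, not Python dicts): producers append work contexts to the back of a FIFO, consumers pop from the front, and long-running work gets re-queued onto a separate blocking queue, which is the "an SMB can be sent to the back of a queue multiple times" behavior the exploits rely on.

```python
from collections import deque

# Two FIFO queues, mirroring the normal work queue and the blocking work queue.
normal_queue = deque()
blocking_queue = deque()

def produce(work_context):
    """Producer side: new work goes to the back of the normal queue."""
    normal_queue.append(work_context)

def consume():
    """Consumer side: pop the oldest work context; re-queue long-running work
    to the blocking queue instead of finishing it in one pass."""
    wc = normal_queue.popleft()
    if wc.get("long_running"):
        blocking_queue.append(wc)   # will be picked up again later
        return None
    return wc
```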
In Vista, they introduced SMB2, and they stripped the networking portion out of the srv.sys driver and put it in a driver called srvnet.sys. So that's what's actually binding the SMB ports, and then srv.sys and srv2.sys come along and register callbacks with that driver. Whenever traffic comes in, they inspect the SMB and they're like, that looks like SMB1, my driver can process that, and then that happens. There's only a few SMBs that are of interest here. In the negotiate stage, we pick our SMB dialect, usually NT LM 0.12. The server will make this connection struct. Later, when we go to log in, it's called a session setup, the server will create this structure called a session. It'll have a pointer to that connection; it'll also have our user name and our user domain. At this point, the server will assign us a user ID. And when we log in, when we do the session setup, we tell the server a max buffer size that our SMB client is able to process. So we say: if an SMB response is going to be bigger than this max buffer size, send it to us in a multi-part SMB. Mainly what you're doing with SMB is connecting to trees, which are basically shares. There's one that we're going to be connecting to in all these exploits. I mean, you can connect to any tree, but the one that's usually open for anonymous logins is the inter-process communication share, IPC$. And when you connect to a tree, like that IPC$ share, the server will assign you a tree ID. So now I'm going to talk about transactions, which are a special subset of SMBs, and they're what all these exploits take advantage of. A transaction, you can think of it like an ioctl. They perform a variety of functions, most of them file-system based. But what's interesting about transactions is they can be split apart across multiple SMBs. You'll send the primary transaction, which says: I have this much data to send you.
The server will send an interim response saying, okay, I accept that, send me the rest of it. Then you'll send a bunch of secondary transactions filling out whatever data you said you were going to send. And when it finally gets all of them, that's when it's going to process and send a response to you, which can be broken up as well. It's kind of like a database transaction, you know, it's an atomic thing: as soon as that last secondary transaction comes in, that's when it gets processed. So a transaction is kind of like a message block inside of the server message block. In addition to the SMB parameter block and the SMB data block, you'll also have the transaction parameter block and the transaction data block. There's different types of transactions over the years. Trans, or Trans1 as I might refer to it, is old stuff like mailslots and the remote access protocol. Trans2 introduced support for names longer than the old DOS-style 8.3 short names; you'll see a lot of OS/2-to-NT conversion stuff in there. And then in NT Trans, the parameter and data block sizes were changed from shorts to longs, so that's the major difference there. And they all have different dispatch tables that perform different functions. So when you send a primary transaction, you'll have the parameter offset and the data offset as part of that SMB; that just tells the server how far into that SMB the transaction data and parameter blocks actually begin. There'll be a count, which is how much data and parameter you're sending in this particular SMB. Total count is how much we're expecting across all primary and secondary transaction SMBs. And then the max count is us telling the server: if you're going to send us a reply for this transaction, this is the buffer size we're going to reserve for that response, so don't send us more than that.
When we send our secondary transactions, a lot of the fields are the same, except now there's a displacement field. As we're sending piecemeal transaction data, we need to tell the server where in the buffer it needs to write each packet, because it's not keeping track of that; that's our job to do as a client. Part of the problem, really. So when you send a primary transaction, the server will create a transaction structure. You can see we have pointers to our connection, our session, and our tree connect. Then you'll see we have buffers for the incoming parameters, the outgoing parameters, the incoming data, and the outgoing data. A lot of times, the server will reuse the same buffers as the request buffer; it's smart about it, it doesn't make two allocations. Not always, but sometimes. And then you'll see a transaction also has a tree ID, a process ID, and a user ID. I talked about user ID and tree ID, but the process ID is just our client sending any random number, really, that represents our process. So when we send a primary transaction, the server will call a function called SrvAllocateTransaction. And yeah, the nice thing about this driver is that a lot of the symbol names are available through PDB files on the Microsoft symbol servers. So it allocates a transaction. Generally, the minimum size is going to be 0x5000, and the reason for that is it's going to grab it from a memory lookaside list. Not always. And then the maximum allocation size is generally going to be 0x10400; otherwise, you'll get an error saying you tried to make a transaction that's too big. When we send secondary transactions, that transaction is already allocated, so it's going to call a function called SrvFindTransaction, and it's going to look it up by our user ID, our tree ID, and our process ID.
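The displacement bookkeeping above can be sketched in a few lines. This is a client-side illustration, not driver code: the client splits transaction data into fragments, and each secondary carries a displacement saying where in the server's buffer the fragment lands, which is exactly the client-controlled placement the exploits abuse.

```python
def split_transaction(data: bytes, chunk: int):
    """Yield (displacement, fragment) pairs for transaction secondaries.

    The server does not track how much it has received in order; the
    client's displacement field tells it where to write each fragment.
    """
    for off in range(0, len(data), chunk):
        yield off, data[off:off + chunk]
```

A server that trusts these displacements can be made to write fragments wherever in the transaction buffer the client chooses, in any order.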
And then there's this other-info field, which is generally going to be a multiplex ID, just another random number that we can send as the client. That lets us have multiple transactions going, and the server knows which one we're actually talking about when we send transaction packets. Another thing to note is that all of these structures I talked about are reference counted. You can think of it like C++ smart pointers, only done in C: when you reference it, the number goes up; when you dereference it, it goes down; and eventually, when it hits zero, it gets freed automatically. And that should be enough background. So there's a concept with files called extended attributes. This is just name/value pairs of metadata attached to files. The concept was introduced in OS/2 1.2, which is an old Microsoft/IBM operating system; they had the High Performance File System. In Windows NT, we don't really see extended attributes that much anymore; there's a thing called alternate data streams, which most malware analysts are probably aware of. But one thing I was reading: a modern use of extended attributes is in the Windows Subsystem for Linux; they use them to store file permissions and case-sensitivity data. And then in SMB parlance, there's FEA and GEA. FEA means the structure has both name and value: a full extended attribute. And GEA is a get extended attribute, which is just the name. So you might send a GEA to get a FEA. Here's what the OS/2 FEA structure looks like. It starts with the flags field, which is either 0 or 0x80. Zero means this FEA, this extended attribute, is not really that important. 0x80 means: if you're going to copy this to a file system that doesn't know extended attributes, think twice, because it's an important extended attribute. And then it has a count of bytes for the name field, and then a count of bytes for the value field.
Then, immediately following that, it'll store the name field, which is a C string, so it's null-terminated. The value is not null-terminated, because it can be arbitrary binary data. But one extended attribute by itself isn't very useful, so you usually find them in a FEA list. This structure has the count of bytes of the entire list, and then a bunch of those FEAs. You can get the size of a FEA as its name plus its value plus the size of the structure, and you can loop over this FEA list structure and read each individual FEA. With Windows NT, they added extended attributes, but they changed the structure a little bit. You see we still have flags, we still have a name length, and we still have a value length, and then we have the name buffer and the value buffer afterwards. And then there's an alignment: they align it to a DWORD. I guess certain CPU architectures they wanted to support needed that alignment or something. But there is no separate NT FEA list structure; there's just this next-entry offset. So you parse a list of FEAs until that next-entry offset is 0. Parsing is a little different. Here's the site of the bug of EternalBlue, the main bug anyways. What this function is doing is, when we send an OS/2 FEA list over the network, the server needs to convert that into an NT FEA list, so this is just calculating how much size it needs to reserve in memory. And on that vulnerable line of code, what it's doing is: if the size of the OS/2 FEA list that we sent is bad, it's going to try to fix it for us. I don't know why it doesn't just reject the packet there; it might be supporting older clients or something. But I mentioned that the count of bytes of the FEA list is a ULONG. What you saw there was it putting the corrected value into a USHORT, so it's casting it wrong. So if I as an attacker say, here's 0x10000 bytes of FEA list, then my high word is set.
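The two layouts can be captured as size calculations. This is a sketch based on the structures described above (OS/2 FEA: flags, name length, value length, null-terminated name, raw value; NT FILE_FULL_EA_INFORMATION: a 4-byte next-entry offset up front, then the same fields, padded to DWORD alignment), and it reproduces the talk's "five bytes of OS/2 FEA become 12 bytes of NT FEA" observation for a null FEA:

```python
def os2_fea_size(name: bytes, value: bytes) -> int:
    # flags (1) + cbName (1) + cbValue (2) + name + NUL terminator + value
    return 1 + 1 + 2 + len(name) + 1 + len(value)

def nt_fea_size(name: bytes, value: bytes) -> int:
    # NextEntryOffset (4) + Flags (1) + EaNameLength (1) + EaValueLength (2)
    # + name + NUL terminator + value, padded up to DWORD alignment
    raw = 4 + 1 + 1 + 2 + len(name) + 1 + len(value)
    return (raw + 3) & ~3
```

A null FEA (empty name, empty value) is 5 bytes in OS/2 form but 12 bytes after conversion, which is why null FEAs are the densest way to inflate the NT buffer.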
When that function comes along, it says: oh, I only see 0xFF5D bytes of valid FEA list there. When it casts incorrectly, you'll see that high word is still set, and it thinks the size of the buffer is bigger than it really is. But when it calculates the size it needs to reserve for the NT buffer, it's only going from the correctly truncated value. Here's what it looks like in code. Most people are probably familiar with x86 and x64: you can see that we're working with DWORD registers, and then at the site of the vulnerability, it's moving through a WORD pointer. So I'm going to explain the same thing just a little more clearly here. As an attacker, I'm supplying this FEA list in an SMB, and I say: here's my 0x10000-sized FEA list. Then you'll see there's a bunch of what I'm referring to as null FEAs; that's just where the name and the value are both zero. It's an exercise for you to figure out why that would be the most efficient way, but basically five bytes of OS/2 FEA here become 12 bytes of NT FEA, because there's more data in an NT FEA, and this really is the most efficient way to pack it. Then, as it's parsing through all the FEAs, it gets to the end of this buffer-overflow FEA, and it sees that the start of that FEA plus the length of that FEA exceeds the list size, the cbList of 0x10000. So it says: I'm going to do a great job and correct that for you. And then it reserves an NT buffer. So then in another function, after these buffer sizes have been calculated, it's going to go through and copy each FEA one by one, doing the conversion to NT FEA. When it gets to that buffer-overflow FEA, it's going to exceed the buffer, and you can see that if we keep copying, we're going to hit unallocated space and crash the target. So we can send an invalid FEA; that's just where the flags are not 0 or 0x80. And when we send that FEA, we'll get an SMB error from the server: invalid parameter.
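The wrong cast boils down to a couple of lines of arithmetic. This isn't the driver's code, just a sketch of the truncation: the list size field is a ULONG, but the "corrected" size gets written back through a USHORT pointer, so the attacker's high word survives. (The 0xFF5D figure is the example value from the talk; the function name here is made up.)

```python
def buggy_fix_cblist(cblist: int, valid_bytes: int) -> int:
    """Model of the vulnerable size 'correction'.

    Intended:  cblist = valid_bytes              (a full ULONG assignment)
    Actual:    *(USHORT *)&cblist = (USHORT)valid_bytes
    so only the low word is overwritten and the high word is kept.
    """
    return (cblist & 0xFFFF0000) | (valid_bytes & 0xFFFF)

cblist = 0x10000                      # attacker claims a 0x10000-byte FEA list
corrected = buggy_fix_cblist(cblist, 0xFF5D)   # only 0xFF5D bytes parse cleanly
# The NT buffer gets sized from the 0xFF5D of valid FEAs, but the copy loop
# still believes the list is 0x1FF5D bytes long -- hence the pool overflow.
```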
And that's a really good sign for us. That means the overflow happened and we didn't crash immediately. We may not be Gucci, but it might still crash later. So we're looking for the path of least resistance to trigger this bug. Some of the functions that call these vulnerable functions require a little bit more access, or access to named pipes. So this Trans2 Open2 is the best way to do it. You're opening a file, but you're also creating one, and you can see that it takes an extended attribute list for that. So you can set most of this SMB to just default values and then put your exploit FEA list there. Another thing, another bug: like I said, we're sending greater than 0x10000 bytes of data, but with a Trans2 request, the data and parameter counts are only word-sized. So what's going on here? If you look at the Wireshark capture, it's first opening with an NT Trans primary transaction, and then it's sending Trans2 secondary transactions. The bug is: it doesn't matter what your primary transaction is, and it doesn't matter what your secondary transactions are, except for that last one; when the transaction gets executed, that's when it chooses the dispatch table. So, since NT Trans allows us DWORD-sized parameter and data blocks, we can use this to help us trigger the bug. There's another problem, with session setup allocations. There's many different ways to log in to SMB; at least two ways are NT security and extended security. And depending on the flags of the SMB, you can actually confuse the server, and it'll read from the wrong offset when it's calculating the allocation size for that SMB's data block. This bug doesn't really let you do much in terms of exploitation; it does help you groom the pool, which we're going to get into. But basically it lets us reserve a large amount of memory, and then, if we close that connection, it frees that memory immediately.
And this is still in the master branch of Windows; pretty sure they still haven't fixed it. But like I said, it's not really a vulnerability, it's just a weird quirk. So now we have all the ingredients we need for EternalBlue. We have the exploit connection; we're going to be opening many connections to the server during the exploitation process. On one connection, we're going to be sending the exploit. On different connections, we're going to be using that session setup bug that lets us reserve large amounts of memory: we're going to make an allocation connection and a hole connection. And what we're actually going to try to overflow into is an srvnet.sys network buffer. When srvnet.sys sees network traffic, it's not just a buffer; it's a structure and then a buffer that follows it. We're trying to overflow into that structure, at least for Windows 7; Windows 10 gets a little weird. We're going to send primary grooms and secondary grooms; they look like SMB2 packets. There was a little confusion in the early reporting that some of these bugs were SMB2 and 3; they're all SMB1. But it's before srvnet does its callbacks and either srv2 or srv handles it. The only credible claim that I've seen is that it might be an IDS bypass, because eventually, after we overflow these srvnet structures, we're going to send the shellcode and all that over them. So maybe, if it looks like SMB2, that was an attempt at an IDS bypass. So before we start the exploitation process, srvnet's network buffers have lookaside memory, and then there's just some random stuff in the pool. The first step is we're going to send our primary exploit transaction and all of the exploit secondary transactions with the FEA list in them, except for the last one. So nothing's really going on in memory yet. As soon as we send that last transaction, it'll trigger the bug and do the overflow; we don't want to do that yet, because we haven't groomed the pool.
Then we're going to send the initial grooms. These are basically naked SMBs, before either SMB1 or 2 takes over, and what we're trying to do here is force new pool allocations. Then we open a new connection with that allocation bug: we're going to reserve a large memory block, but it's not going to be the same size as our incorrectly calculated NT buffer. Then we send a hole buffer. This one is the exact same size that the NT FEA buffer will eventually be, with the incorrect size, because we want that buffer to fit in this hole. Next, we're going to close the allocation connection. This lets the random stuff in the pool come along and allocate memory without messing up our exploitation process. Then we send the secondary grooms, which look exactly like the primary grooms; we're just hoping that one ends up right after the hole connection's buffer. And then we free the hole connection and send the last exploit transaction. The server thinks the NT FEA buffer can fit in that hole, and then, during the memory copy, when it's parsing all the FEAs, it's going to overflow into the headers of the next allocation. So what are we actually overflowing here? Like I said, it's not just a buffer; there's a structure, a couple of structures. You'll see there's an MDL, a memory descriptor list, which is a common NT structure that lets you map virtual memory to physical memory. We can overwrite one of those MDLs, and depending on what address we give it, that's a write-what-where primitive: once we overwrite that MDL, any network traffic we send over that connection, instead of going to the buffer, goes to wherever we pointed it. So, the HAL heap: until the very latest versions of Windows 10, it was not subject to ASLR, and on Windows 7 it's not DEP-protected either. Then you'll see we're also overflowing a pointer to this WSK Winsock structure. We point that pointer at the HAL heap as well, and then we send a fake structure, which I'll show on the next slide.
At this time, we also send our shellcode. That fake structure that we overwrote has a function table as the most important member we care about; everything else is sane defaults. We send the shellcode to all of those primary grooms and secondary grooms, which are all separate connections; we don't know which one actually got overflowed for the write-what-where primitive. So we close all of the groom connections, and they're going to go through and call these handlers for when the connection closes, and eventually it's going to hit the function table, where we have conveniently pointed the cleanup function at the address of the shellcode, which is on the HAL heap. But it's still not that simple, because in EternalBlue, at that point, we're operating at dispatch level in the kernel, which means a lot of common functionality, libraries, exported functions, are off limits, because we don't have access to things like paged memory. One of the quickest and dirtiest ways you can get from dispatch level to passive level is to hook the syscall table. Then, the next time a syscall happens, instead of going to the normal syscall handler, it comes to our function. We transition gracefully from user mode, you know, we set up the kernel stack and all that, and then we run the main stage, the DoublePulsar backdoor, which is going to backdoor the SrvTransaction2DispatchTable. After we're done running DoublePulsar, we restore the syscall handler. I'm going to go into DoublePulsar a little bit later. But basically, here's the patch: they just fixed that cast, from a short to a long. It's pretty straightforward, and yeah, all these patches are one-liners. So, EternalChampion. With transactions, if I try to send secondary transactions after a transaction is already executing, there's this Executing boolean locking mechanism.
So before the server executes a transaction, it sets that locked variable to true, and then if I send a secondary, it just rejects it. Except, if I have a primary transaction where I send all of my data and parameters in that one primary transaction, I don't really need a secondary transaction, and the bug is: they forgot to set that lock. So while that transaction is executing, we can come on by and send secondary transactions and actually modify the data of that primary transaction. This gives us an info leak on a single-core processor. And then there's a stack overwrite, which seems to only be triggered on multi-core. I believe it's called EternalChampion because champions win races, and this is basically a race condition. So, in order to perform the exploitation, we need to leak a transaction; we need kernel addresses, that kind of thing. The first thing we can do with this race condition is look for an SMB which echoes data back. On older versions of Windows, the remote access protocol has WNetAccountSync and NetServerEnum2; those will echo data back to us. On every version of NT, you have NT Rename. The only difference is that that requires a valid file ID, so you have to open a named pipe, and there's a little more permissions associated with that. But basically, all we're going to do is send a primary transaction where the data is greater than... Jesus. Sorry, I spilt some water. Just in case you needed a quick review. So, we send a primary transaction where the amount of data in it is greater than the max buffer size we told the server, at session setup, that we can accept for a reply. So the amount of data it needs to echo back to us can't fit in one reply, and it's going to get sent to the back of a work queue. And while it's at the back of the work queue, we can have another secondary transaction come in and modify the data count on it.
And then, just because there's bad validation, when it goes to read the data back to us, it'll read past the buffer into another transaction. So here's the code execution path. There's a Trans2 subcommand called Query Path Information, and part of its parameter block is a sub-command. The first step is, with that sub-command, we say: I want to query the extended attribute size, Query EA Size. That's going to send us to the back of a blocking work queue; when we send secondary transactions, we're on the normal work queue, while the transaction is being processed on the blocking work queue. The second step is, after we've triggered that, we have another transaction secondary come by that modifies the transaction parameter block, and we change the sub-command to Is Name Valid. This changes the InData pointer to point at a stack variable. And with that InData pointer pointing at a stack variable, using data displacement we can get past things like stack canaries, and we can overwrite the return address of the worker thread with a secondary transaction. Sorry, running low on time already. So basically, the exploitation sequence is going to be eight SMBs in one TCP packet. The first one is that Query EA Size primary, with all of the data and all of the parameters, and that's going to cause the blocking work queue to be triggered. Then we send a transaction secondary that changes it to the Is Name Valid sub-command, which makes it point at a stack variable. And then we send six Trans2 secondaries with a data displacement that's going to overwrite the return address. But it's a race condition, so we send those eight SMBs per exploitation attempt: we attempt, we see if DoublePulsar has been installed, and if it hasn't, we try again, 42 times by default.
When we get code execution, if the target has DEP, we'll search the connection's transaction list, looking for a special identifier at the start of one transaction; this is basically an egg hunter, because that's where we stored the shellcode. At this point, we have access to the pool allocation functions, so we copy the payload from that egg and then run it. Then we increment the number of available threads on one of the structures that gets passed into our shellcode, and then we can resume execution with a little NT magic: there's the processor control region, which is a global in the kernel, and just going from there, we can get to the current thread's start address, jump to it, and resume execution in the worker thread loop. So here's the patch for EternalChampion. Before, for a primary transaction, if all data was received, it began executing the transaction. After the patch, it sets that Executing variable to true first. That's it. So, I talked about how, when secondary transactions come by, instead of allocating a transaction, the server is looking up a transaction. Generally that lookup is by a randomly generated multiplex ID, but there's a special SMB called WriteAndX, and if you open a file in raw mode, WriteAndX makes a transaction, instead of whatever it does for everything else. And with this weird pseudo-transaction, which is not really a transaction, as the server copies the data you're sending to write to that file, it increments the InData pointer of the transaction. So we can cause a type confusion here. We do an NT Create AndX to open a named pipe, and the server assigns us a file ID. Then we create just a normal, everyday transaction, nothing special, but we set our multiplex ID to the same value as that file ID that just got assigned by the server. So the server allocates a transaction. Then we do that WriteAndX request with the FID.
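A toy model of that lookup collision, purely illustrative (the real SrvFindTransaction works on kernel lists, not a dict): transactions are found by the (UID, TID, PID, other-info) tuple, and the raw-mode WriteAndX path keys on the file ID in that last slot, so a transaction whose MID was deliberately set to a pipe's FID gets found and has its InData pointer incremented.

```python
transactions = {}

def allocate_transaction(uid, tid, pid, mid):
    """Model of SrvAllocateTransaction: store under the full lookup key."""
    transactions[(uid, tid, pid, mid)] = {"in_data_offset": 0}

def find_transaction(uid, tid, pid, other):
    """Model of SrvFindTransaction: 'other' is normally the MID, but the
    WriteAndX path passes the FID here instead."""
    return transactions.get((uid, tid, pid, other))

fid = 0x4000                                   # FID the server hands back
allocate_transaction(uid=1, tid=2, pid=3, mid=fid)  # MID chosen to collide
tx = find_transaction(1, 2, 3, fid)            # WriteAndX finds OUR transaction
tx["in_data_offset"] += 0x200                  # the pointer shift the exploit wants
```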
The server's going to see: oh yeah, there's a transaction there. And it's going to increment that InData buffer pointer. So this allows us to shift the pointer. What we're going to do first is groom the pool, so there's an exploit transaction and then a victim transaction right after it. Normally, with our transaction's InData pointer, using displacement and all that, we can only access our own data buffer. But after we do the shift, that pointer got incremented, so if we send a secondary transaction now, we can write past our buffer. Then there's another bug that gets us an info leak, because again, we need kernel addresses, that kind of thing. Normally, with Trans Peek Named Pipe, you're just peeking a named pipe. It expects the max parameter count to be 16, but it takes the client's value. So, if we are allocating from a lookaside list, we can set that max parameter count to most of that 0x5000, and then we set the max data count to one, you'll notice, a really tiny value. Then, because of bad checking in the way that it writes the data when you're peeking that named pipe: basically, if we can get more than one byte of data queued into that named pipe, it'll just read past the buffer when it replies to us. So there's different ways that we can groom the pool. Fish-in-a-barrel affects older versions of Windows, I think up through Vista. Basically, when the srv.sys driver started up, it would create a preallocated heap, and with a preallocated chunk of memory, we're not fighting other drivers and stuff; we're not going to the pool. So it's really convenient. It's also great because this private heap is only for very specific MS-RAP transactions, which are very rarely used these days. So it's a very straightforward heap feng shui. That's what it looks like: we're sending victim transactions, they're called fish.
Then we have a dynamite, which is just a transaction with the MID set to the FID, so it's eligible for that pointer shift. Then we send more victims, and we send another dynamite in case the first one failed for whatever reason. So it grooms the pool that way and then attempts exploitation.

Matched Pairs works on all versions of Windows, including 7 and up. When they removed Fish in a Barrel's private heap, you still have this groom available. The only difference is that instead of having that private heap that no one else is using, now we have to go to the normal paged pool, which everybody, every process, every driver wants, so it's very contentious. This is a little oversimplified, just for time and slide space, but we send these groom transactions and they take up pretty much as much of several pages as they can, and on that last page they leave a little bit of extra space at the end. That creates a special kind of pool chunk called a frag. At this point we're just filling up memory. Then we send the exploit transaction, something eligible for that pointer shift. There's a little extra going on there, but basically, yeah, we just send the exploit pointer-shift thing. Then we come along with the brides, which are specifically designed to fill that gap. We're only sending ten or so grooms, but we send 48 brides, and we're hoping one of those brides ends up right after one of our exploit transactions inside that frag.

So now that we have the pointer shift and we can write into one of these victim transactions, we can create a write-what-where primitive out of it. Basically, using our exploit transaction that's been shifted, we modify our victim transaction: we point its InData pointer to where we want to write, we set that Executing variable to false, and some other clerical things.
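The write-what-where primitive can be sketched like this. The flat `MEM` bytearray stands in for kernel pool memory, and the field offset is made up; the point is just the two-step shape: the shifted exploit transaction overwrites the victim's InData pointer (the "where"), then a secondary for the victim copies attacker bytes to that address (the "what").

```python
import struct

MEM = bytearray(0x200)        # pretend kernel pool, addresses are offsets
VICTIM_INDATA_OFF = 0x80      # pretend location of the victim's InData ptr

def overflow_into_victim(new_indata):
    # The shifted exploit transaction writes past its own buffer,
    # landing on the victim transaction's InData pointer field.
    struct.pack_into("<Q", MEM, VICTIM_INDATA_OFF, new_indata)

def victim_secondary(payload):
    # A secondary for the victim copies its data block to wherever
    # the (now attacker-controlled) InData pointer points.
    dst = struct.unpack_from("<Q", MEM, VICTIM_INDATA_OFF)[0]
    MEM[dst:dst + len(payload)] = payload

overflow_into_victim(0x1F0)   # "where": an arbitrary address
victim_secondary(b"WHAT")     # "what": attacker bytes land there
```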
We also increase the reference count on the smart-pointer type thing. Then, when we send a secondary for the victim transaction, whatever's in our data block is what actually gets written.

For the read, we modify the victim transaction to point at a leak transaction. We can get the address of the leak transaction, we can infer its address from its contents. This time we set the OutData pointer to where we want to read, we change its setup to a peek named pipe, and we set MaxDataCount to how much data we want to read. Then we send a secondary for the leak transaction, and it will echo back the OutData, which is pointing at where we want to read.

So we have read/write primitives. Now we're on a quest to find somewhere to store the shellcode. If we set the victim transaction's OutParameters to NULL and then send a secondary transaction, it will change that OutParameters to point at the work context's response buffer, which is read-write-execute memory. Then we can use the read primitive to read the address that just got set, and the write primitive to write the shellcode to that location.

And now we're on a quest to execute the shellcode. This is similar methodology to what DoublePulsar is doing, only we're doing it remotely. We read our leak transaction, which has that connection pointer on it. We read from that connection pointer; it has a variable called EndpointSpinLock, and that points to a global variable in the srv.sys driver, inside that PE's data section. Then we just read backwards in memory, looking for a special table called SrvSmbWordCount. The word count is associated with the size of a transaction's, or an SMB's, parameters. This table has about 256 entries but there are only about 100 commands, so anything that's not a legal command is negative two in this table, which is 0xFE. So when you see a bunch of FE FE FE while you're reading, you know you're getting close.
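That backwards scan for the run of 0xFE bytes can be sketched in a few lines. The layout below is a fabricated stand-in for the srv.sys data section (NOP filler, then a 256-entry word-count table where illegal commands are 0xFE, then a marker where the dispatch table would begin); the scan logic is the part that mirrors the talk.

```python
# Fake srv.sys .data: filler, then the 256-entry word-count table
# (commands 0x00..0x63 "legal", everything else -2 == 0xFE), then the
# transaction dispatch table immediately after it.
junk = b"\x90" * 0x300
table = bytes(0x05 if c < 0x64 else 0xFE for c in range(256))
data_section = junk + table + b"DISPATCH_TABLE_HERE"
leaked_global = len(data_section) - 1   # pretend leaked EndpointSpinLock

def find_dispatch_table(mem, start):
    # Read backwards until we see "a bunch of FE FE FE" -- 16 in a row
    # is a safe signature for the illegal-command tail of the table.
    i = start
    while i >= 16 and mem[i - 15:i + 1] != b"\xfe" * 16:
        i -= 1
    # Walk forward out of the run; the dispatch table starts right
    # after the word-count table ends.
    while mem[i] == 0xFE:
        i += 1
    return i

idx = find_dispatch_table(data_section, leaked_global)
```

In the real exploit each `mem[...]` access is a remote read primitive rather than a local slice, but the search is the same.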
Immediately following that is the transaction dispatch table, and entries 14 and 15 of that table are not implemented, so we can overwrite one of those with the address of the shellcode. Then we send a transaction that triggers that dispatch table entry to be called.

So here's how they patched the info leak: before, the MaxParameterCount was either the user-supplied MaxParameterCount or 16; after the patch, they made it always 16. And this is one way you can write an MS17-010 scanner. I mentioned before that when you allocate a transaction, if it's greater than 0x10400 you'll get a STATUS_INSUFF_SERVER_RESOURCES. So we send a transaction where the sum of MaxParameterCount and MaxDataCount is greater than that 0x10400. Before the patch, the server will reject that packet and send us STATUS_INSUFF_SERVER_RESOURCES. After the patch, it fixes MaxParameterCount to 16, so now it does a proper allocation, we get a little bit further, and we get a different error message. And so that's how you can tell if the target's been patched.

Here's another thing they fixed: if the data count in the named pipe is greater than MaxDataCount, the size of the client buffer, they just clamp it.

Here's the remote code execution before and after the patch. Before the patch, remember, the server was shifting that pointer during a Write AndX. After the patch, instead of shifting the pointer, it just uses an offset during the copy. Another thing they did to fix remote code execution, and this does help with EternalBlue as well: now when you allocate a transaction, it records what the expected secondary command should be, and later, when it goes to find a transaction, it checks whether the new incoming secondary transaction's command matches that expected secondary command. If not, it won't return the transaction.
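The scanner check described above reduces to a small decision function. The 0x10400 threshold and STATUS_INSUFF_SERVER_RESOURCES come from the talk; the second status code is just a stand-in for whatever "different error message" a patched server returns, and `server_response` is a local simulation, not a network call.

```python
STATUS_INSUFF_SERVER_RESOURCES = 0xC0000205
STATUS_SOME_OTHER_ERROR        = 0xC000000D   # stand-in "different error"

def server_response(max_param, max_data, patched):
    # Simulated server-side transaction allocation check.
    if patched:
        max_param = 16            # patch: MaxParameterCount forced to 16
    if max_param + max_data > 0x10400:
        return STATUS_INSUFF_SERVER_RESOURCES
    return STATUS_SOME_OTHER_ERROR  # allocation succeeds, fails later

def looks_vulnerable(patched):
    # Scanner probe: make the sum of the counts exceed 0x10400 on
    # purpose, then see which error comes back.
    status = server_response(max_param=0xFFFF, max_data=0xFFFF,
                             patched=patched)
    return status == STATUS_INSUFF_SERVER_RESOURCES
```

An unpatched server trips the size check and returns STATUS_INSUFF_SERVER_RESOURCES; a patched one clamps the parameter count, allocates, and fails differently further along.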
So now we can get into EternalSynergy. This has the same buffer overflow and read/write primitives as EternalRomance. You also get the Matched Pairs and the classic grooming, and I didn't get to go through the classic grooming. But with Windows 8 they inadvertently patched the info leak that was in EternalRomance, so we can't do the normal EternalRomance methodology. Instead we do our info leak using the EternalChampion methodology.

Another thing is that the address where we stored our shellcode last time has become NX pool, which means it's not executable, so we needed a new way to find an executable portion of memory. Using our read primitives, the same ones as EternalRomance, we can read the connection's preferred work queue, which has a member called IrpThread that gives us a KTHREAD structure. KTHREADs have a KPROCESS, and KPROCESSes have a process list entry, a doubly linked list. Normally with a doubly linked list you can traverse back and forth, but with these process list entries, it appears to me that as you go forward you go to the next process, while at each step if you try to go back you just land on the list head instead. And that list head is actually a global variable inside the NT OS kernel, so you just start reading backwards in memory from that global variable until you get to the MZ header, and then, using the remote read, you can parse the ntoskrnl PE headers. On Windows 8 and 8.1 there's a section inside ntoskrnl.exe that's read-write-execute, it's even named RWEXEC. The only thing that ever really legitimately touches this portion of memory is a function called KxUnexpectedInterrupt, but this is where EternalSynergy decides to store the shellcode, right there.

So here's a good list of resources for this stuff. I think sleepya's GitHub repo is probably the
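The last two steps, scanning backwards for the MZ header and then walking the PE section table for the RWEXEC section, can be sketched over a flat byte buffer. The fake image builder, the padding size, and the section virtual addresses are all invented for illustration; the header offsets (e_lfanew at 0x3C, NumberOfSections, SizeOfOptionalHeader, 40-byte section headers) follow the standard PE layout.

```python
import struct

def build_fake_ntoskrnl():
    # Minimal fake PE image with a .text section and an RWEXEC section.
    sections = [(b".text\x00\x00\x00", 0x1000), (b"RWEXEC\x00\x00", 0x9000)]
    img = bytearray(0x400)
    img[0:2] = b"MZ"
    e_lfanew = 0x80
    struct.pack_into("<I", img, 0x3C, e_lfanew)        # offset to PE sig
    img[e_lfanew:e_lfanew + 4] = b"PE\x00\x00"
    struct.pack_into("<HH", img, e_lfanew + 4, 0x8664, len(sections))
    struct.pack_into("<H", img, e_lfanew + 20, 0xF0)   # SizeOfOptionalHeader
    sect_off = e_lfanew + 24 + 0xF0
    for i, (name, va) in enumerate(sections):
        off = sect_off + i * 40
        img[off:off + 8] = name
        struct.pack_into("<I", img, off + 12, va)      # VirtualAddress
    return bytes(img)

def find_image_base(mem, leaked_off):
    # Read backwards from a leaked in-image pointer to the MZ header.
    i = leaked_off
    while mem[i:i + 2] != b"MZ":
        i -= 1
    return i

def find_section_va(mem, base, wanted):
    # Walk the section table for a section with the wanted name.
    e_lfanew = struct.unpack_from("<I", mem, base + 0x3C)[0]
    pe = base + e_lfanew
    nsect = struct.unpack_from("<H", mem, pe + 6)[0]
    opt_size = struct.unpack_from("<H", mem, pe + 20)[0]
    sect = pe + 24 + opt_size
    for i in range(nsect):
        off = sect + i * 40
        if mem[off:off + 8].rstrip(b"\x00") == wanted:
            return base + struct.unpack_from("<I", mem, off + 12)[0]
    return None

MEM = b"\x00" * 0x2000 + build_fake_ntoskrnl()  # padding stands in for
base = find_image_base(MEM, 0x2000 + 0x200)     # memory below the image
rwx = find_section_va(MEM, base, b"RWEXEC")
```

As with the earlier sketches, the real exploit performs each of these reads through its remote read primitive rather than against a local buffer.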
best if you want to look at this at the code level. Nicolas Joly of MSRC did a sort of similar talk at HITCON, and there are some more resources: JennaMagius's and my white paper from last year, and if you're interested in the Shadow Brokers, there's more stuff at the bottom there, just some archives. I'll also be at DerbyCon doing kind of a part two to this. So that's it, thank you.