Samuel is working at Google Project Zero, focusing especially on vulnerabilities in web browsers and mobile devices. He was part of the team that discovered some of the vulnerabilities he will be presenting in this talk, in particular a zero-user-interaction vulnerability that can be used to remotely exploit and compromise iPhones through iMessage. Please give Samuel a warm round of applause. Okay, thanks everyone, welcome to my talk. One note before I start: unfortunately I only have one hour, so I had to omit quite a lot of details, but there will be a blog post coming out hopefully very soon that has everything in there — for the talk I had to leave some details out. Okay, so this is about iMessage. In theory some of it — actually quite a lot — applies to other messengers as well, but we'll focus on iMessage. So what is iMessage? It's a messaging service by Apple; we heard about it a bit in the previous talk. As far as I know it is enabled by default as soon as someone signs into an iPhone with their Apple account, which I guess most people do, because otherwise you can't download apps. Importantly, anyone can send messages to anyone else, just like SMS or phone calls. And if you do, a notification pops up — you can see that in the screenshot on the right — which means there must be some kind of processing happening. So this is default-enabled, zero-click attack surface: stuff happens without the user doing anything. In the rightmost screenshot you can see that you can also receive messages from unknown senders. It just says "this sender is not in your contact list", but all the processing still happens. In terms of architecture, this is roughly how iMessage is structured — nothing too interesting: you have Apple's cloud servers, and sender and receiver are connected to these servers. That's pretty much it.
Content is end-to-end encrypted, which is very good — we heard this before as well. Interestingly, this also means that Apple can hardly detect or block these exploits, because, well, they are encrypted. That's an interesting thing to note. So what does an iMessage exploit look like? In terms of prerequisites, the attacker really only needs to know the phone number or the email address of the Apple account. The iPhone has to be in the default configuration — you can disable iMessage, but that's not done by default — and it has to be connected to the internet. That's pretty much all you need for this exploit to work, so that's quite a lot of iPhones. The outcome: the attacker has full control over the iPhone after a few minutes — I think it takes maybe five to seven minutes. And it is possible without any visual indicators; you can make it so there are no notifications during the entire exploit. Okay. But before we get to exploiting, of course we need a vulnerability, and for that we need to do some reverse engineering. I want to highlight a bit how we approached this. The first question you might be interested in is: which daemon or service is handling iMessages? One easy way to figure this out is to just make a guess. You look at the process list on your Mac — a Mac can also receive iMessages — you stop one of these processes, and then you see if iMessages are still delivered. If not, you've probably found a process that's somewhat related to iMessage. If you do this, you find imagent, which already sounds related. If you look at it, it also loads an iMessage library, so this seems very relevant. You can then load this library into IDA — you see a screenshot top right — and you find a lot of handlers, for example this MessageServiceSession handler for incoming messages.
You can then set a breakpoint there, and at that point you can see these messages as they come in — dump them, display them, look at them, change them. So this is a good way to get started. From there, you want to figure out what these messages look like. You can dump them in the handler as they come in; on the right side you see what iMessages look like more or less on the wire. They are encoded as a plist, which is an Apple proprietary format — think of it like JSON or XML. Some fields are self-explanatory: "p" is the participants, in this case me sending a message to another account I own. "t" is the text content of the message, so "Hello, 36C3". There's a version field. For some reason there's also an XML- or HTML-ish field, which is probably some legacy thing — it's being parsed as XML. The whole thing looks kind of complex already: you might expect a simple text message to just be a string, but in reality it's sending this whole dictionary over the wire. So let's do some more attack surface enumeration. If you do more reverse engineering and read the code of the handler, you find two interesting keys that can be present, "ati" and "bp", and they can contain NSKeyedArchiver data, which is another Apple proprietary serialization format. It's quite complex and has had quite a few bugs in the past. On the left side you see an example of such an archive. It's encoded in a plist, and it's pretty much one big array where every object has an index. Here you can see, for example, that object number seven has the class NSSharedKeyDictionary, and I think key one is an instance of that class, and so on. So it's quite powerful. But what this really means is that this serializer is now zero-click attack surface, because it's being parsed on this path without any user interaction. So, as I said, it's quite complex.
It even supports things like cyclic references: you can send an object graph where A points to B and B points back to A, for whatever reason you might want that. Natalie wrote a great blog post where she describes this in more detail. What I have here is just an example of the API. This is Objective-C at the bottom — if you're not familiar with Objective-C, you can think of these brackets as method calls. The last line is calling unarchivedObjectOfClasses:fromData: on NSKeyedUnarchiver. You can see you can pass a whitelist of classes, so in this case it will only decode dictionaries, strings, data, et cetera. That looks quite okay. Interestingly, if you dig deeper, this is not quite true, because it also allows all subclasses to be decoded. So if you have some NSSomethingDictionary that inherits from NSDictionary, it can also be decoded here, which is quite unintuitive, I think. And this really blows up the attack surface, because now you don't just have these seven or so classes, you have something like 50. Okay, so this is what we focused on when Natalie and I were looking for vulnerabilities — it seemed like the most complex thing we found. We reported quite a few vulnerabilities here; you can see them on the left. The one I decided to write an exploit for is issue 1917, reported on July 29th, with the exploit sent on August 9th. I mostly picked this one because it seemed the most convenient. I do think many of the others could be exploited in a similar way, just not quite as nicely — they would maybe take some more heap manipulation, et cetera. Apple first pushed a mitigation quite quickly, which basically blocks this code from being reached over iMessage: they no longer allow subclasses to be decoded in iMessage. That's quite a good mitigation — it blocks maybe 90% of the attack surface here.
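To make the subclass pitfall concrete, here is a tiny stand-in in Python. The class names mirror the talk; the real check lives inside NSKeyedUnarchiver, so this is only a model of its semantics:

```python
# Toy model of the allowed-classes check: like isinstance, it accepts every
# subclass of a whitelisted class, not just the exact classes listed.
class NSDictionary:
    pass

class NSSharedKeyDictionary(NSDictionary):   # one of the ~50 subclasses
    pass

ALLOWED = (NSDictionary, str, bytes)

def may_decode(obj):
    # Subclass instances pass this check, which blows up the attack surface.
    return isinstance(obj, ALLOWED)

print(may_decode(NSDictionary()))            # True: explicitly whitelisted
print(may_decode(NSSharedKeyDictionary()))   # True: admitted via inheritance
```

This is why the whitelist looks safe but is not: every subclass inherits its way past the check.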
They then fully fixed it in iOS 13.2, but after August 26th this was only local attack surface anyway. So what is the bug? It's an initialization problem during decoding. The vulnerable class is NSSharedKeyDictionary — again, a subclass of NSDictionary, so it's allowed to be decoded. Let's take a look. Here's some pseudocode in Python. It's a dictionary, so its purpose is to map keys to values. The lookup method is really simple: it looks up an index in a key set — every shared key dictionary has a shared key set — and that index is then used to index into an array. Quite simple; most of the magic happens in the shared key set. What that does is roughly: compute the hash of the key, use that hash to index into something called a rank table, which is an array of indices. If that index is valid — it's bounds-checked against the number of keys — then it has found the correct index. If not, it can recurse into another shared key set: every shared key set can have a sub shared key set, and then it repeats the same procedure. It already looks kind of complex. Why does it need this recursion? I'm not quite sure, but it's there. So now let's look at how this goes wrong. This is initWithCoder:, the shared key set constructor used during decoding with the keyed unarchiver. It looks pretty solid at first — it's really just taking the values out of the archive and storing them as the fields of this shared key set. I'm going to go through the code step by step to highlight what goes wrong. We start with shared key set 1, which implies there's going to be another one. At the start it's all zero-initialized — it's basically allocated through calloc, so everything is zero. Then we execute the first line, okay?
For numKeys, you see some interesting values coming up. So far this is all fine. Note that at this point numKeys can be anything, because it's only validated three lines further down, where the code makes sure that numKeys matches the real length of the keys array. So this is fine. But here it's now recursing and decoding another shared key set. So we start again: we have another shared key set, all filled with zeros, and we start from the top. This time numKeys is one, so this is a legitimate shared key set. It decodes a rank table. And here we make a cycle: for shared key set 2, we pretend that its sub key set is shared key set 1. And this actually works — the NSKeyedUnarchiver has special handling to deal with this correctly, so it does not create a third object; it creates the cycle and we're good to go. Next it decodes the keys array. This is fine; shared key set 2 seems legitimate so far. Now it does some sanity checking, making sure that this shared key set can look up every key. It does this for the only key it has, key 1. At this point — remember the logic — it hashes the key, goes into the rank table, and takes out 42, which is bigger than numKeys, so the lookup here has failed. Now it recurses into shared key set 1, right? That was the logic. And there it takes out 0x41414141 as the index, compares it against 0xFFFFFFFF, and that's fine. And now it accesses a null pointer — the keys array is still a null pointer — plus 0x41414141 times eight. At this point it crashes; it's accessing invalid memory, precisely because in this situation shared key set 1 hasn't been validated yet. Okay. That's the bug we're going to exploit. I have these checkpoints just to track where we are: we now have a vulnerability in this NSKeyedUnarchiver API.
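The walkthrough above can be condensed into a small Python model. The field names and numbers follow the slides; this is a simulation of the logic, not Apple's code:

```python
# Python simulation of the NSSharedKeyDictionary lookup walked through above.
# The numbers (42, 0x41414141, 0xffffffff) mirror the slides.

class SharedKeySet:
    def __init__(self, num_keys, rank_table, sub_key_set=None):
        self.num_keys = num_keys          # attacker-controlled until validated
        self.rank_table = rank_table
        self.sub_key_set = sub_key_set

def find_index(key_set, key_hash):
    index = key_set.rank_table[key_hash % len(key_set.rank_table)]
    if index < key_set.num_keys:          # bounds check against numKeys
        return index
    if key_set.sub_key_set is not None:   # lookup failed: recurse
        return find_index(key_set.sub_key_set, key_hash)
    return None

# Shared key set 1 is decoded first and NOT yet validated: numKeys is huge
# while its keys array is still a NULL pointer.
ks1 = SharedKeySet(num_keys=0xFFFFFFFF, rank_table=[0x41414141])
# Shared key set 2 looks legitimate, and its sub key set cycles back to ks1.
ks2 = SharedKeySet(num_keys=1, rank_table=[42], sub_key_set=ks1)

index = find_index(ks2, key_hash=0)       # 42 >= 1, so it recurses into ks1
print(hex(index))                         # 0x41414141

# The real code then reads keys[index], with keys == NULL and 8-byte entries:
fault_address = 0 + index * 8
print(hex(fault_address))                 # invalid memory -> crash
```

The validation happens per key set only after its own fields are decoded, so the cycle lets lookup reach the half-initialized key set first.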
We can trigger it through iMessage. So what exploit primitive do we have? Let's look again at the lookup function we saw before. Here, in bold, is where we crash: keys is a null pointer, index is fully controlled, so we can access null pointer plus offset. And then the result of this memory access is going to be used as an Objective-C object — this is all Objective-C in reality. It does some comparison, meaning it sends a message to it, something like isNSString, and eventually it releases it, which calls the destructor. So whatever it reads from that address, it will treat as an Objective-C object and call some method on it — that's our exploitation primitive. Okay. So how do we exploit this? The rough idea for exploiting such vulnerabilities looks like this: you want to have some fake Objective-C object somewhere in memory that you're referencing. Again, we can access an arbitrary absolute address, and we want a fake Objective-C object there. Every Objective-C object has a pointer to its class, and the class has something called a method table, which basically holds the function pointers of its methods. So if we fake this entire data structure — the fake object and the fake class — then as soon as the process calls some method on our fake object, we get code execution: we get control over the instruction pointer, and then it's game over. That's the goal for this exploit. Note that we need two different types of addresses here. On the left side we have heap addresses — data, really. And on the right side, for the method table, we need library addresses, meaning code addresses, simply because on iOS you can't have writable code regions, so we necessarily have to reuse existing code, with something like ROP.
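As a sketch, the fake object could be laid out like this. All offsets and addresses are made-up illustration values, not the real Objective-C runtime layout — only the principle (isa pointer → class → method-table entry → code address) is from the talk:

```python
# Illustrative byte layout for a fake Objective-C object plus fake class.
import struct

SPRAY_BASE = 0x110000000            # where the sprayed data will end up
FAKE_CLASS = SPRAY_BASE + 0x40      # fake class placed right after the object
CODE_ADDR  = 0x180000000 + 0x1234   # must point into real library code

# Fake object: first quadword is the isa (class) pointer, rest is padding.
fake_object = struct.pack("<Q", FAKE_CLASS).ljust(0x40, b"\x00")
# Fake class: a pretend method-table entry holding our code pointer.
fake_class = struct.pack("<QQ", 0, CODE_ADDR)

payload = fake_object + fake_class
assert struct.unpack_from("<Q", payload, 0)[0] == FAKE_CLASS
assert struct.unpack_from("<Q", payload, 0x48)[0] == CODE_ADDR
```

The catch is CODE_ADDR: it only helps if it points at actual code in an actual library, and finding out where the libraries are is the whole problem that follows.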
So we need to know where libraries are mapped. And this is exactly the problem we're going to face now, because there's something called ASLR, address space layout randomization, which randomizes this entire address space. On the left you can see what the virtual memory of a process looks like before ASLR: everything is always mapped at the same address. If you start the same process twice, even on different phones, without ASLR the same library is at the same address, the heap is always at the same address, the stack too — everything is the same. That would be really simple to exploit, because, well, everything is the same. With ASLR everything is shifted: all the addresses are randomized and we don't really know where anything is, and that makes this harder to exploit. So we need an ASLR bypass. We'll divide it into two parts: we get the heap addresses in a different way than the library addresses. Let's see how we get heap addresses. It's really simple, honestly: heap spraying, which is an old technique — I think 15 years old, maybe — and it does still work today. The idea is that you simply allocate lots of memory. Look at the code snippet on the right, which you can use to test this: it allocates 256 megabytes of memory on the heap with malloc, and afterwards there is an address — many addresses, really, but in this case I'm using 0x110000000 — where you will reliably find your data. So just spraying 256 megabytes lets you put controlled data at a known address, which is enough for the first part of the exploit. The remaining question is how you can heap spray over iMessage. That's a bit more complicated, but it is possible, because NSKeyedUnarchiver is great.
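The arithmetic behind the spray can be sketched like this. The target address is the one from the talk; the spray base and payload size are made-up values for illustration:

```python
# Why a 256 MiB spray is enough: with weak heap ASLR the spray starts at a
# roughly predictable base, so a fixed target address inside that 256 MiB
# window ends up covered by attacker data.

SPRAY_SIZE   = 256 * 1024 * 1024
PAYLOAD_SIZE = 0x1000            # hypothetical size of one repeated payload copy
TARGET       = 0x110000000       # address the exploit will later dereference
spray_base   = 0x108000000       # hypothetical (weakly randomized) spray start

assert spray_base <= TARGET < spray_base + SPRAY_SIZE   # target is covered

# Where inside each repeated payload copy the fake object must sit so that
# one copy of it lands exactly at TARGET:
offset_in_spray   = TARGET - spray_base
offset_in_payload = offset_in_spray % PAYLOAD_SIZE
print(hex(offset_in_spray), hex(offset_in_payload))
```

The spray itself is delivered through NSKeyedUnarchiver tricks, as the talk notes.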
It lets you do all sorts of weird stuff, which you can abuse for heap spraying — the blog post will have more details. Okay, so we have the heap addresses; now we need the library addresses. Let's go back to the virtual memory space. On iOS, and also on macOS, the libraries — in this picture three libraries, but in reality hundreds of system libraries — are all pre-linked into one gigantic binary blob called the dyld shared cache. The idea is that this speeds up loading times, because all the interdependencies between libraries are resolved pretty much at build time. So we have this gigantic binary blob, and it has everything we need: all the code, all the ROP gadgets, and all the Objective-C classes. We just have to know where this dyld shared cache is mapped. If you dig into it a bit, or look at the documentation or the binaries, you find out that it is always mapped between two fixed addresses, 0x180000000 and 0x280000000, which leaves only a 4-gigabyte region — it's only ever mapped inside these 4 gigabytes. The randomization granularity is 0x4000, because iOS uses larger, 16 KB pages; it can only randomize with page granularity, and that page size is 0x4000. But what's most interesting is that on the same device, the dyld shared cache is only randomized once per boot. If you have two different processes on the same device, the shared cache is at the same virtual address in both. And if one process crashes and another one starts, the shared cache is still at the same address. That makes it really interesting. Also, it's about 1 gigabyte in size — gigantic — so it's not too hard to find in this 4-gigabyte region. Right, so this is what our task has boiled down to at this point: we have this address range, and we have the shared cache.
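Plugging in the numbers just mentioned shows how little entropy is actually left:

```python
# Entropy arithmetic for the shared-cache slide, using the numbers above.
import math

REGION_LO   = 0x180000000
REGION_HI   = 0x280000000        # 4 GiB possible placement region
GRANULARITY = 0x4000             # 16 KiB page granularity
CACHE_SIZE  = 1 << 30            # cache is roughly 1 GiB

slots = (REGION_HI - REGION_LO) // GRANULARITY
print(slots, math.log2(slots))   # 262144 possible bases -> only 18 bits

# Since the ~1 GiB cache has to fit in the 4 GiB region, a single blind probe
# at a fixed address is already mapped with probability of roughly:
print(CACHE_SIZE / (REGION_HI - REGION_LO))   # 0.25
```

So a brute-force guesser already has decent odds per probe, and a search strategy can do far better.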
All we need to know now is this offset. So let's make a thought experiment. Say we had an oracle which we could give an address, and it would tell us whether that address is mapped in the remote process. If we have this, the problem suddenly becomes really easy to solve, because then all you have to do is step through the region in 1-gigabyte steps — the size of the shared cache — between these two addresses, and at some point you find a valid address; maybe here, after three steps. From there you just do a binary search, because you know the shared cache starts somewhere between the green arrow and the second red arrow. The binary search finds the start address in logarithmic time — a few seconds, minutes, whatever. So obviously the question is: where would we get this oracle from? This seems kind of weird. Let's look at message receipts. iMessage, like many other messengers — pretty much all of them that I know — sends receipts for different things. iMessage in particular has delivery receipts and read receipts. A delivery receipt says the device received the message; a read receipt means the user actually opened the app and looked at the message. You can turn off read receipts, but as far as I know you cannot turn off delivery receipts. Here on the left you see a screenshot: three different messages were sent and they are in three different states. The first message is marked as read, which means it got a delivery receipt and a read receipt. The second message is marked as delivered, so it only got a delivery receipt. And the third message doesn't show anything, so it hasn't received any receipts. Okay, so why is this useful? Here on the left is some pseudocode of imagent's message handling, showing when it sends these receipts.
You can see that it first parses the incoming plist and does the NSKeyedUnarchiver decoding at some later point — which is exactly where our bug would trigger — and only then does it send the delivery receipt. What that means is: if we can trigger the bug during unarchiving and cause a crash, we have a one-bit side channel. If we cause a crash, we won't see a delivery receipt; if we don't cause a crash, we do see a delivery receipt. One bit of information, and this is going to be our oracle. Ideally, you have a vulnerability that gives you this perfect oracle of "is an address mapped or not": crash if it is not mapped, don't crash if it is. In reality, you will probably not get this perfect oracle from your bug. On the left you see the real oracle function for this vulnerability: the address has to be mapped, but the code also uses the value it reads, so it will only avoid crashing if the value is either zero, or has the most significant bit set — that's some pointer-tagging thing — or is a legitimate pointer to an object. So this oracle function is a bit more complex, but the same idea still works: you can still do something like a binary search and find the shared cache start address in logarithmic time. It only takes maybe five minutes or so. For this part, again, I have to refer to the blog post, which will cover how it works. Okay, so this is the summary of the remote ASLR bypass. Two phases. First a linear scan, where it's just sending these payloads and checking whether it gets a receipt back. The first time it gets a receipt back, it knows: okay, this address is valid — I've found an address that is within the shared cache.
At that point it starts the searching phase, which in logarithmic time figures out the exact starting address. Okay, there are a few common questions about this that I want to briefly go into. The first, maybe obvious, question is: can you really just crash this daemon 20-plus times? And the answer is yes. There's no indicator or anything the user would see when this daemon crashes. The only thing you can do is go into Settings, Privacy, something-something crash logs, and there you can see the crash logs. Second question: I think by default the iPhone is configured to send crash logs to the vendor, to Apple — isn't that a problem? I looked at this briefly, and what I stumbled across is that iOS seems to collect at most 25 crash logs per service. This is not designed to be a security feature, so that behavior makes sense. But what it means is that an attacker can use some kind of resource exhaustion bug to crash the daemon 25 times first, and only then start the exploit — and then no trace of the exploit will be sent to Apple. Third question: can this be fixed by simply sending the delivery receipt very early? That was my first suggestion to Apple — just send the delivery receipt right at the start. Eventually I figured out it doesn't really work, because you can still build a timing side channel: when a daemon crashes multiple times, it's subject to a penalty and will only restart a few seconds or even minutes later. From the timing of the delivery receipt, you can then still basically reconstruct the oracle. So it doesn't really work to just send it earlier. I'll go into some other ideas that might work later. Okay. At this point I'm starting the demo. The demo has two parts. Let's see where it is. Right.
So I have this iPhone here, and with QuickTime the screen is mirrored to the projector. This iPhone is an XS, so it's from last year, and it's on 12.4, which is the last vulnerable version — that's about half a year old at this point. And there are no existing chats open. Okay, let's see — I hope the Wi-Fi works. What you can see here is how the exploit works: it's hooking the Messages app on macOS with Frida — do we get deliveries? Yeah, okay, cool, it works, it's popping up these messages — and it's sending these specific marker messages, like "inject ati", and the Frida hook then replaces such a message with the current payload. And now it's testing these addresses. It's not too slow, I guess, and it's popping up some nice messages. Okay — oh, it already found one. So this is already the end of the first stage; that was quite fast. It found a valid address in this first probing step, and now it has 21,000 candidates for the shared cache base. Now it's doing the binary-search thing to halve that in every step. Okay, now it only has 10,000 left. So it's quite fast and quite efficient. While this runs, let's continue. So this is where we are: we can now create fake objects. We have all the addresses we need — it's this 0x110000000 region where we can place our stuff — and then we will gain control over the program counter, and from there it's standard stuff, what you would do in all of these exploits: you pivot, maybe to the stack, you do return-oriented programming, and then you can run your code and you've succeeded.
Now at this point, there's another thing coming in: pointer authentication (PAC), a security feature that Apple designed and first shipped with this device's generation, from 2018. It needs CPU support, and the idea is that you can now store a cryptographic signature in the top bits of a pointer. On the very left you have a raw pointer; its top bits are zero because of the way the address space works. There is a set of instructions that sign a pointer — they can take a context value or not — using a key that is not in memory but in a register; they compute a signature of the pointer and store it in the top bits. That's what you see on the right, the green part: that's the signature. Before using such a pointer, the code now authenticates it by running another instruction. If the verification fails, this instruction basically clobbers the pointer, makes it invalid, and the following instruction will just crash. So here you see a function call — a branch-and-link instruction calling through a function pointer — that first authenticates the pointer; if the authentication step fails, the process crashes right there. What this means for an attacker is that ROP is more or less dead, because ROP involves faking a bunch of code pointers that point into the middle of existing code. That's no longer possible, because an attacker cannot generate these signatures. And this is where our exploit breaks: the red part, where we have a fake Objective-C class with our own function pointer, no longer works, because we cannot compute these signatures. So what do we do?
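To make the sign/authenticate dance concrete, here is a toy model — pure Python with HMAC standing in for the real hardware PAC algorithm, and made-up bit widths:

```python
# Toy model of pointer authentication: a MAC over the pointer is stored in
# the unused top bits; authentication either strips it or poisons the pointer.
import hmac, hashlib

KEY       = b"per-boot key, held in a CPU register"   # unreadable for attackers
ADDR_BITS = 39                                        # bits of real address

def sign(ptr):
    mac = hmac.new(KEY, ptr.to_bytes(8, "little"), hashlib.sha256).digest()
    sig = int.from_bytes(mac[:8], "little") >> ADDR_BITS   # fits in top bits
    return (sig << ADDR_BITS) | ptr

def auth(signed_ptr):
    ptr = signed_ptr & ((1 << ADDR_BITS) - 1)
    if sign(ptr) != signed_ptr:
        return 1 << 62                 # poisoned: the next dereference faults
    return ptr

p = 0x1888F4000                        # some code pointer
assert auth(sign(p)) == p              # legitimately signed pointer survives
forged = sign(p) ^ (1 << ADDR_BITS)    # attacker flips one signature bit
assert auth(forged) == 1 << 62         # authentication poisons the pointer
```

Without the key, an attacker cannot produce a signature that survives authentication for an arbitrary code pointer — which is exactly why the fake method table stops working.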
One thing that's still possible — it's even documented — is that this class pointer in the object, also called the isa pointer, is not protected by PAC in any way. This means we can fake instances of legitimate, existing classes: we can have a fake object that points to a real class that has real, legitimately signed method pointers. That still works. With this, we can get existing methods called out of place and under somewhat manipulated control — these existing methods are basically our gadgets now, if you want to think about it that way. Okay, so what can we do with this? One very interesting method we can get called is dealloc, the destructor. I think in most Objective-C exploitation scenarios you can probably get a dealloc method called. So what you do is enumerate all the destructors in the shared cache — there are tons of them, I think 50,000 — and you can get any of them called. And one of them, or a few of them, are really interesting because they call this invoke method, which is part of the NSInvocation class. An NSInvocation is basically a bound function call: it has a target object, the method to be called, and all the arguments. As soon as you call invoke on an NSInvocation, it performs this method call with fully controlled arguments. So with this destructor, we can now make a fake object with a fake NSInvocation that carries any method call we would like to perform, and it's going to do that, because it runs this invoke here. Again, you see this shield, which I put in place for things that Apple has hardened since we sent them the exploit. What they did so far is harden NSInvocation, and it's now no longer easily possible to abuse it in this way.
But for us, we can now run arbitrary Objective-C methods with controlled arguments. What about sandboxing? If you do some more reverse engineering and figure out which services play a part in iMessage, this is what you end up with on the right: a number of services, most of them sandboxed — a red border means there's a sandbox. Interestingly, SpringBoard also does some NSKeyedUnarchiver work: it decodes the "bp" key, so it could also trigger our vulnerability — and SpringBoard is not sandboxed. It's the main UI process; it's basically what handles showing the home screen and so on. So what that means is: we can just target SpringBoard, and then we get code execution outside of the sandbox — we don't actually need to worry too much about the sandbox. As of iOS 13 this is fixed, and this key is now decoded inside a sandbox. Cool. So we can execute Objective-C methods outside of the sandbox. With that we can access user data, activate camera and microphone, et cetera — this is all possible through Objective-C quite easily. But of course we don't care about that; what we want is a calculator. And this is also quite easy, with one Objective-C call — UIApplication launchApplication, blah blah blah. So let's see if this works; back to the demo. Where are we at? The ASLR bypass ran through. You can see nicely that it roughly halved the candidates in every round, with every message it had to send. It ended up with just one candidate at the end, and that is the shared cache base — in this case 0x188608000. Now it's preparing the heap spray. This is all kind of hacked together — I think if you wanted to do this properly, for one, you could send the whole heap spray in one message; I'm just lazy.
It's also probably way too big. Another thing is, I think in reality you would probably not target SpringBoard, just because SpringBoard is very sensitive: if you crash it, you get this respring and the UI restarts. So in reality you would probably target imagent and then chain a sandbox escape — this bug would also get you out of the sandbox, so that should be doable. Okay, I think the last message arrived; it's freezing here for a couple of seconds — I don't actually know why, I never debugged it, but it does work. Yeah, so that was the demo. This exploit is naturally quite reliable, because there's not much heap manipulation involved, except for this one heap spray, which is controllable. Okay. So what's left? One more thing you can do is attack the kernel. If you want that, you have to deal with two problems. One is code signing: you cannot execute unsigned code on iOS. The standard workaround is to abuse JIT pages in Safari, but we're not in Safari — we're not in web content — so we don't have JIT pages. What I did here is basically pivot into JavaScriptCore, the JS library, which you can use from any app, and then bridge calls into JavaScript and implement the kernel exploit in JavaScript. This does not require any more vulnerabilities — you do not need a JavaScriptCore bug to do this. The idea is very similar to pwn.js; maybe some of you know it. It's a library, I think initially developed for Edge, because Edge did something similar with JIT page hardening. What I decided to do is take SockPuppet from Ned Williamson — CVE-2019-8605, which works on this version, 12.4. This is the trigger for it, and I only ported the trigger; I didn't bother re-implementing the entire exploit.
It will cause a kernel panic. It's quite short, which is nice. So if you want to run this from JavaScript, there are really only three things you care about. The first one is the syscalls: highlighted here, there are four or so different syscalls — not a lot — and you just have to be able to call them from JavaScript. The second thing is you need constants: AF_INET6, SOCK_STREAM — these are all integer constants, so this is really easy, you just need to look up what these values end up being. And the last thing is you need some data structures — in this case this so_np_extensions struct, which needs some integer values, pointers to pass in, and so on. And then this is kind of the magic that happens: you take sockpuppet.c and extract these syscalls, et cetera. There's one Objective-C method you can call, which is very convenient, which gives you dlsym. What this lets you do is get native C function pointers that are signed. So far we can only call Objective-C methods, but we need to be able to call syscalls, or at least their C wrapper functions — and with this dlsym method we can get signed pointers to C functions. Then we need to be able to pivot into JavaScript, which is also really easy with one method call: JSContext's evaluateScript. We need to mess around with memory a bit — corrupt some objects from the outside, corrupt some ArrayBuffers in JavaScript, get read/write — kind of standard browser exploitation tricks, I guess. And if you do all this, what you end up with is sockpuppet.js, which looks very similar. You can see a bit of my JavaScript API that lets you allocate memory buffers, read and write memory, and has some integer constants. Apart from that, it doesn't really look much different from the initial trigger.
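Those three ingredients — C functions resolved by name, integer constants, and flattened C structs — can be illustrated with Python's ctypes standing in for the dlsym bridge. This is an analogy, not the actual JavaScriptCore setup; in particular, the two-field so_np_extensions layout is an assumption about the Darwin struct:

```python
import ctypes
import socket
import struct

# 1. Function pointers by name: the ctypes analogue of dlsym(NULL, "socket").
libc = ctypes.CDLL(None, use_errno=True)
socket_fn = libc.socket

# 2. Integer constants: resolve the symbolic names spelled out in
#    sockpuppet.c to this platform's numeric values.
AF_INET6 = int(socket.AF_INET6)
SOCK_STREAM = int(socket.SOCK_STREAM)

fd = socket_fn(AF_INET6, SOCK_STREAM, 0)
if fd >= 0:
    libc.close(fd)  # don't leak the descriptor

# 3. Data structures: serialize a C struct into raw bytes to pass through
#    a pointer argument. Assumed layout:
#    struct so_np_extensions { uint32_t npx_flags; uint32_t npx_mask; };
npx = struct.pack("<II", 0x1, 0xFFFFFFFF)
```

The same pattern — name lookup, constant lookup, byte-level struct packing — is all the JavaScript port of the trigger needs on top of a raw memory read/write primitive.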
And so this can now be served — well, staged — over the iMessage exploit, building on top of this Objective-C method call primitive. At least in theory — I didn't fully implement it — this should be able to run a kernel exploit and fully compromise the device without any interaction, in probably less than ten minutes. Okay, so this was the first part: how does this exploit work? What I have now is a number of suggestions for how to make this harder, how to improve things. One of the first things that is really critical for this exploit is the ASLR bypass, which relies on a couple of things. And I think a lot of this ASLR bypass also works on other platforms: Android has a very similar problem with mappings being at the same address across processes, and other messengers have these receipts and so on. So a lot of this applies not just to Apple, but to Android and to other messengers. But okay, what is the first point? Weak ASLR — this is basically the heap spraying, which is just too easy; it shouldn't be so easy. You can see it sketched on the right: in theory, ASLR could be much stronger, much more randomized; in reality, it's just the small red bar. So it should simply have much more entropy, to make heap spraying no longer viable. The next problem with ASLR is the per-boot randomization. Again, at the bottom you can see it: you have three different processes, and the shared cache is always at the same address. There are similar problems on other platforms, as I mentioned. This is probably hard to fix, because by this point quite a lot relies on it, and it would be a big performance hit to change it — but maybe some clever engineers can figure out how to do it better. The third part here is the delivery receipts, which, interestingly, can give you a side channel.
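The "weak ASLR" point can be made concrete with a little arithmetic. Under a simple model where a spray of S contiguous bytes must cover one fixed guessed address that is randomized over 2^E slide slots of granularity G, the per-attempt success probability is S / (2^E · G) — so every added entropy bit halves what a fixed-size spray can cover. The numbers below are illustrative, not iOS's actual parameters:

```python
def spray_success_probability(entropy_bits, granularity, spray_bytes):
    """Chance that a fixed guessed address lands inside the sprayed data,
    assuming a uniform slide over 2**entropy_bits slots of `granularity`
    bytes each (toy model)."""
    region = (1 << entropy_bits) * granularity
    return min(1.0, spray_bytes / region)

MiB = 1 << 20

# Low entropy: a 128 MiB spray covers half of a 256 MiB region.
weak = spray_success_probability(8, 1 * MiB, 128 * MiB)

# More entropy bits make the same spray essentially useless.
strong = spray_success_probability(24, 1 * MiB, 128 * MiB)
```

With 8 bits of entropy the spray succeeds half the time per attempt; with 24 bits the same spray succeeds less than once in a hundred thousand tries, which is what "make heap spraying not viable" means in practice.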
It's a one-bit information side channel, and that can be enough to break ASLR. As I've mentioned before, I think a lot of other messengers have this same problem. What might work is to either remove these receipts, sure, or maybe send them from a different process — so you can't do this timing thing — or even from the server. If the server already sends the delivery receipt, it's a bit of cheating, but at least this attack doesn't work. Sandboxing is another thing. It's probably obvious: everything on the zero-click attack surface should be sandboxed as much as possible — of course, to, you know, require the attacker to do another full exploit after getting code execution. But sandboxing can also complicate information leaks. Natalie had this other iMessage bug, CVE-2019-8646 — there's a blog post about that one. It basically let her cause SpringBoard to send HTTP requests to some server, and those could contain pictures, data, whatever. If SpringBoard had been sandboxed to not allow network activity, this would have been much harder. So sandboxing is not necessarily just about this second breakout. What I do want to say about sandboxing is that it shouldn't be relied on. I think this remote attack surface is pretty hard, and it's not unlikely that it's actually harder than the sandbox attack surface. And on top of that, this bug — the NSKeyedUnarchiver bug — would also get you out of the sandbox, because the same API is used locally for IPC. So there's that. Also, it would be nice if the zero-click attack surface code were open source — well, it would have been nice for us, it would have been easier to audit. Maybe some day. Another theme is: reduce the zero-click attack surface — make it at least a one-click attack surface, right?
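The "SpringBoard sandboxed to not allow network activity" idea corresponds roughly to deny rules in Apple's sandbox profile language. This fragment is purely illustrative — real profiles are far more involved, and the rule names shown are only the commonly documented ones:

```
(version 1)
(allow default)
; cut off the exfiltration path used in the CVE-2019-8646 leak:
(deny network-outbound)
(deny network-inbound)
```

Even with such a profile the memory corruption itself still triggers, but the leaked data has no easy way off the device, which is the point made above about sandboxing complicating information leaks.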
Before, you could see that an unknown sender can send any messages. It would be nice if there were some popup asking, well, do you actually want to accept messages from this sender? Threema lets you block unknown senders — I think that's a cool feature. So yeah, there's more work to be done here. Also this restarting-service problem — I think it could get even bigger. Here we had pretty much unlimited tries for the ASLR bypass. It's probably going to become even more relevant with memory tagging, which can also be defeated if you have many tries. So I guess if some critical daemon crashes ten times, maybe don't restart it — I don't know, it's going to need some more thinking. You don't want to denial-of-service the user by just not restarting a daemon that crashed for some unrelated reason. But it would be a very good idea to have some kind of limit here. Okay, conclusion. Zero-click exploits are a thing; they do exist. It is possible to exploit single memory corruption bugs on this surface, without separate info leaks, despite all the mitigations we have. However, I do think that by turning the right knobs this could be made much harder — I gave some suggestions here. And we need more attack surface reduction, especially on this zero-click surface, but I think progress is being made. And with that, thanks for your attention — I think we have time for questions. Thank you. We do have time for questions. If you're in the room, you should line up at the microphones, and we might also have questions from the internet. One quick reminder: all fun things work with explicit consent, and that includes photos. The photo policy of the CCC is that if you take a photo, you need explicit consent from the people in the frame.
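The "maybe don't restart it after ten crashes" suggestion amounts to a crash-rate limit in the service manager. A minimal sketch of the idea — the threshold and window are made-up values, and this is not anything launchd actually implements:

```python
from collections import deque

class RestartPolicy:
    """Allow automatic daemon restarts only while the crash rate stays
    below a threshold, denying an attacker unlimited bypass attempts."""

    def __init__(self, max_crashes=10, window_seconds=3600):
        self.max_crashes = max_crashes
        self.window_seconds = window_seconds
        self._crash_times = deque()

    def should_restart(self, now):
        """Record a crash at time `now`; return whether to respawn."""
        # Forget crashes that have fallen out of the sliding window.
        while self._crash_times and now - self._crash_times[0] > self.window_seconds:
            self._crash_times.popleft()
        self._crash_times.append(now)
        return len(self._crash_times) <= self.max_crashes
```

The tradeoff mentioned in the talk is visible right in the sketch: a daemon that crashes repeatedly for some unrelated reason within the window also stops being restarted, which is a denial of service against the user — so the threshold and window need careful tuning.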
So remember, don't do any long shots into the crowd, because you want to have the consent of everybody there. Good, we have the first question from the internet. The internet wants to know: did Apple give you some kind of reward, and was it a new iPhone? No, we did not get any kind of reward, but we also didn't ask for one, so there's that. And no, I didn't get a new iPhone, but I'm still using mine — which is, yeah, I mean, this is an XS, right? So current hardware models can be defeated with this, if that is the question. Good, we have a question from microphone number three. Hello, just a question: I did not really understand how the fix of having the server or another process send the delivery receipt solves the problem. Because if your guess is right — if you hit the right addresses — the thing will just work and make the server or the process send the delivery receipt, and if it crashes, it doesn't do anything. So the idea would be: in this case I'm sending this one message that may crash the receiver, and then either I get a delivery receipt or I don't. If the server already sends the delivery receipt before it actually hands the message to the client — to the receiver — then I would always see a delivery receipt, and I wouldn't be able to figure out whether my message caused a crash or not. That's the idea behind maybe sending it on the server side, if that makes sense. Yeah, but in that case, if legitimate people send a message and it doesn't reach the recipient — yeah, it's a hack, right? It's not perfect. I mean, the server could only send the delivery receipt once it has sent the message out over TCP and maybe got a TCP ACK or whatever, which happens in the kernel. But it's a hack in any case — it's a tradeoff. We have a question from microphone number two. Hello. Okay, thanks for the talk. Two questions.
First: is macOS also a potential candidate for this bug? And second: can you distinguish multiple devices with your address space randomization detection? So yes, macOS is affected just the same. I think this specific exploit wouldn't directly work, because the address space looks a bit different, but I think you could make it work, and it's affected. In terms of multiple devices, I haven't played around with that. I could imagine that it is possible to somehow figure out that there are multiple devices, or which device just crashed, but I haven't investigated that. Thanks. We still have time for more questions — there was a question from microphone number one. Hi, thanks for the talk. Quick question: you said that exploitation could be done without any notification — how would that work? Yeah, so I briefly looked into how that could work. For one, you can take out parts of the message so that it fails parsing later on in the processing pipeline, and then it will just be thrown away as garbage. The other thing is, of course, with the very last message — the one where you get code execution — you can prevent it from showing a notification, because that happens afterwards. But until you get code execution you can't remove notifications that way, so for the earlier messages you do the other thing: make them look broken enough that later parsing stages throw them away. Thanks. Good, we have a couple more questions. Remember, if you don't feel comfortable lining up behind the microphones, you can ask through the Signal Angel, through the internet. Microphone number four, please. Yes, hi Samuel. I was curious: you have some suggestions about reducing the attack surface.
Are there any suggestions that you'd make to, say, Apple or Google in terms of what they can see — you mentioned logging a little earlier. Yeah, so I sent pretty much this list, along with the exploit, to Apple, and I think the blog post will have a bit more. So I told them the same things — if that's your question, did I get it right? Yes — I mean, maybe I misunderstood a little bit, but I suppose some of these attack surface reductions are about what's happening on the device, whereas I'm wondering about monitoring: being able to catch something like this in progress. Right, so that's going to be really hard because of end-to-end encryption. The server just sees encrypted garbage and has no way of knowing: is this an image, is this text, is this an exploit? So on the server I don't think you can do much; it's going to have to be on the device. We have a question from the internet: how do you approach attack surface mapping? Well, reverse engineering, playing around, looking at this message format. In this case it was somewhat obvious what the attack surface was: figure out which keys of the message are being processed in some way, make a note, decide which one looks most complex, and go for that first. That's what we did. We have a question from microphone number two, please. Hi, how long did you and your colleague research to get the exploit running? So the vulnerability finding alone — I think we spent maybe three months on that. For the exploit, I had a rough idea of how I would approach it, so at the end it took me maybe a week to finish, but I had been thinking about it while still looking for vulnerabilities during those two to three months. We have another question from microphone number three.
Is there the threat that an attacked iPhone would itself turn into an attacker through the exploit? Sure, yeah, you can do that — I mean, you have full control, right? So you have access to the contact list and you can send out iMessages. The question is whether that's necessary: you don't really need the iPhone to send the messages, you can also send them yourself. But in theory, yes, that's possible. Do we have more questions from the internet? Does the phone stay compromised after a restart? So there's no persistence exploit here; you would need another exploit. There was a talk, I think just an hour ago, about persistence, so you would need to chain this with, for example, the exploit shown there. And if you have questions in the room, please line up behind the microphones. Do we have more questions from the internet? Yes: you've achieved one of the most novel bugs ever found in iOS — what's the next big thing you'll be looking at? Good question. I don't really know myself, but I'm probably going to stay around zero-click attack surface reduction for a bit longer. Looks like we don't have any brave people asking questions in the room — does the internet have more courage? How long do discovery, exploitation, and development take, and how much does teamwork improve the process and development time? Okay, so how long does this exploitation process take — is that the first question? Yes. I mean, this is generally a hard thing to answer, right? There are years of hacking around and learning how to do this stuff that you have to take into account. But as I said, I had a rough idea of what this exploit would look like, so actually implementing it took maybe one or two weeks. The initial part — reverse engineering iMessage, reverse engineering this NSKeyedUnarchiver stuff — took forever, many months.
And it was also very necessary for exploit writing: a lot of the exploit primitives I use also abuse this NSKeyedUnarchiver machinery. We have time for perhaps two quick questions — microphone number four, please. Super. I'm not super familiar with iOS's virtual memory address space, but you showed two heap regions in the picture, and I'm wondering: why are there two heap regions? Okay, yeah, this is only a minor detail, but I think there's one heap region initially, below the shared cache, and once that is full, it just makes another one above it. So it's really just: if the first region gets used up, a second one is created, and that ends up above the shared cache. I think that's the picture you're referring to. Yeah, thank you. And unfortunately we are out of time, so the person at microphone number one, please come up to the stage afterwards and perhaps you can talk there. So please give a warm round of — I can't say this — exactly. Thanks.