All right, let's start with our next talk in the security track of the Chaos Communications Congress. The talk is called "Jailbreaking iOS: From past to present", done by tihmstar. He spoke at the 32nd C3 already and worked on several jailbreaks, like the Phoenix jailbreak or jelbrekTime for the Apple Watch. And he's going to talk about the history of jailbreaks. He's going to familiarize you with the terminology of jailbreaking, and with exploit mitigations and how you can circumvent these mitigations. Please welcome him with a huge round of applause. Thank you very much. So hello, everyone. I am tihmstar. And as I said, I'm going to talk about jailbreaking iOS from past to present. The topics I'm going to cover are: what is jailbreaking — I'm going to give a general overview. I'm going to introduce you to how jailbreaks started, how they got onto the phone at first, and how all of this progressed. I'm introducing you to the terminology, which is tethered, untethered, semi-tethered, and semi-untethered jailbreaks — stuff you've probably heard, but some of you don't know what it means. I'm going to talk a bit about hardware mitigations which were introduced by Apple, which are KPP, KTRR, and a little bit about PAC. I'm going to talk about the general technical goals of jailbreaking and the kernel patches and what you want to do with those, and give a brief overview of how jailbreaking could look in the future. So who am I? I'm tihmstar. I got my first iPod touch with iOS 5.1. And since then I pretty much played with jailbreaks, and then I got really interested in that and started doing my own research, eventually started doing my own jailbreaks. I kind of started with downgrading. So I've been here two years ago with my presentation "Downgrading iOS: From past to present". I kept hacking since then.
So back then I talked about the projects I made related to downgrading, which were tsschecker, futurerestore, and img4tool — you've probably heard of those. And since then I was working on several jailbreaking tools ranging from iOS 8.4.1 to 10.3.3, among those 32-bit jailbreaks, untethered jailbreaks, remote jailbreaks like JailbreakMe, and the jailbreak for the Apple Watch. So what is this jailbreaking I'm talking about? Basically, the goal is to get control over the device you own. You want to escape the sandbox where the apps are put in. You want to elevate the privileges to root and eventually to kernel. You want to disable code signing, because all applications on iOS are code signed and you cannot run unsigned binaries — you pretty much want to disable that to run unsigned binaries. And the most popular reason why people jailbreak is to install tweaks. Also, a lot of people jailbreak their devices for doing security analysis. For example, if you want to pentest your application and see how an attacker would go for it, you want to debug that stuff, and you want to have a jailbreak on the phone for that. So what are these tweaks? Tweaks are usually modifications of built-in user space programs. For example, one of those programs is SpringBoard. SpringBoard is what you see when you turn on your phone; this is where all the icons are at. And usually you can install tweaks to, I don't know, modify the look, the behavior, add functionality. Just this customization — this is how it started with jailbreaking. What is usually bundled when you install a jailbreak is Cydia. So you install dpkg and APT, which are the Debian package managers. And you also get Cydia, which is the user-friendly graphical user interface for the decentralized — or centralized — package system. So I'm saying centralized because it's pretty much all in one spot: you just open the app and you can get all your tweaks.
Or it's also decentralized because you can just add your own repo. You can make your own repo, you can add other repos. So you're not tied to one spot where you get the tweaks from — like with the App Store, where you can only download from the App Store. With Cydia, you can pretty much download from everywhere. You're probably familiar with Debian, and it's pretty much the same. So this talk is pretty much structured around this tweet, the ages of jailbreaking. As the user said, we get the golden age (the bootrom), the industrial age, and the post-apocalyptic age. And I kind of agree with that. So this is why I decided to structure my talk around that and walk you through the different ages of jailbreaking. So starting with the first iPhone OS jailbreak — back then it was actually called iPhone OS, not iOS — it was not the bootrom yet. The first one was a buffer overflow in the iPhone's libtiff library, which is an image parsing library. It was exploited through Safari and used as an entry point to get code execution. It was the first time that non-Apple software was run on an iPhone, and people installed applications like Installer (AppTapp), which were stores similar to Cydia back then. And those were used to install apps or games, because for the first iPhone OS there was no way to install applications at all — apps were only introduced with iPhone OS 2. So then, going to the golden age, the attention kind of shifted to the bootrom. People started looking at the boot process. And they found the Device Firmware Upgrade (DFU) mode, which is part of the ROM. So the most famous bootrom exploit was limera1n by geohot. It was a bug in hardware and was unpatchable in software. This bug was used to jailbreak devices up to the iPhone 4. And there were also several other jailbreaks which didn't rely on that one. But this one — once you discover it, you can use it over and over again, and there's no way to patch that.
So this was later patched in a new hardware revision, which is the iPhone 4S. With that bootrom bug, this is how tethered jailbreaks became a thing. So limera1n exploits a bug in the DFU mode, which allows you to load unsigned software through USB. However, when you reboot the device, a computer is required to re-exploit and again load your unsigned code, then load the bootloaders and load the patched kernel. And thus the jailbreak was kind of tethered to the computer, because whenever you shut down, you need to be back at the computer to boot your phone up. So historically, a tethered jailbroken phone does not boot without a computer at all. And the reason for that is that the jailbreak would modify the kernel and the bootloaders on the file system, for performance reasons. So when you do the actual tethered boot, you would only need to upload a very tiny payload via USB, which in turn would load everything else from the file system itself. But this results in a broken chain of trust: when the normal boot process runs and the bootrom checks the signature of the first-stage bootloader, that would be invalid, so it would refuse to boot it, and you would end up in DFU mode — so basically the phone won't boot. Sometime around then, the idea of a semi-tethered jailbreak came up, and the idea behind that is very simple: just don't break the chain of trust for tethered jailbreaks. So what you would do differently is you do not modify the kernel on the file system and don't touch the bootloaders at all. Then, when you boot tethered, you would need to upload all the bootloaders — the first-stage bootloader and the second-stage bootloader, which is iBoot — and then the kernel via USB to boot into jailbroken mode. However, when you reboot, you could boot all those components from the file system, so you could actually boot your phone into non-jailbroken mode.
That is, if you don't install any tweaks or modifications which modify critical system components — because if you tamper with, for example, the signature of the mount binary, the system obviously cannot boot in non-jailbroken mode. So this is kind of the golden age; let's continue with the industrial age. With the release of the iPhone 4S and iOS 5, Apple fixed the bootrom bug and essentially killed limera1n. They also introduced AP tickets and nonces to the bootloaders, which I'm just mentioning because it was kind of a setback for downgrading. Before that, if you updated your phone to the latest firmware without having saved your SHSH blobs beforehand, you could just downgrade and then jailbreak again, which wasn't a big deal. But with that, they also added downgrade protection, so jailbreaking became harder. If you want to know more about how the boot process works and what SHSH blobs and AP tickets are, you should check out my talk from two years ago — I go in depth on how all of that works, so I'm skipping that for this talk. So the binaries the device boots are encrypted. The bootloaders are encrypted, and until recently, the kernel used to be encrypted as well. The encryption key is fused into the devices, and it has so far been impossible to get through hardware attacks. At least there's no public case where somebody actually recovered that key. So it's probably impossible — nobody has done it yet. All boot files are decrypted at boot by the previous bootloader. And before the iPhone 4S, you could actually just talk to the hardware AES engine as soon as you got kernel-level code execution. But with the iPhone 4S, they introduced a feature where, before the kernel would boot, they would shut off the AES engine in hardware. So there's no way to decrypt bootloader files so easily anymore, unless you get code execution in the bootloader itself. So yeah, decrypting bootloaders is a struggle from now on. And I think, kind of because of that, the attention shifted to userland.
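To make the broken-chain-of-trust idea concrete, here is a tiny Python sketch of a boot chain where each stage is only booted if its hash matches a known-good value. All names and "images" are invented stand-ins for illustration, not real iOS components or formats:

```python
import hashlib

def digest(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# "Signed" measurements fixed at build/signing time (stock firmware).
stock_images = {
    "LLB":    b"first-stage bootloader",
    "iBoot":  b"second-stage bootloader",
    "kernel": b"stock kernelcache",
}
trusted = {name: digest(data) for name, data in stock_images.items()}

def boot(filesystem: dict) -> str:
    """Walk the chain; refuse to boot at the first mismatch (-> DFU)."""
    for stage in ("LLB", "iBoot", "kernel"):
        if digest(filesystem[stage]) != trusted[stage]:
            return f"DFU: {stage} signature invalid"
    return "booted"

# Tethered jailbreak: kernel patched on disk -> normal boot is refused.
tethered_fs = dict(stock_images, kernel=b"patched kernelcache")
print(boot(tethered_fs))

# Semi-tethered: disk left untouched -> normal boot still works.
print(boot(dict(stock_images)))
```

The first call models the classic tethered situation (patched files on disk, so an unassisted boot dies in DFU mode); the second models the semi-tethered idea of keeping the on-disk chain intact and only uploading patched components over USB.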
And from then on, the jailbreaks kind of had to be untethered. Untethered here means that if you jailbreak your device, you turn it off, and you boot it again, the device is still jailbroken. And this is usually achieved through re-exploitation at some point in the boot process. You can't just patch the kernel on the file system, because that would invalidate its signature. So instead you would, I don't know, add some configuration files to some daemons which would trigger bugs, and then exploit those. So jailbreaks then chained many bugs together, sometimes six or more bugs, to get initial code execution, kernel code execution, and persistence. This somewhat changed when Apple introduced free developer accounts, around the time they released iOS 9. These developer accounts allow everybody who has an Apple ID to get a valid signing certificate for seven days, for free. So you can actually create an Xcode project and run your app on your physical device. Before that, this was not possible — the only way to run your own code on your device was to buy a paid developer account, which is like $100 per year for a personal developer account. But now you can just get that for free. And after seven days the certificate expires, but you can just request another one for free and keep doing that, which is totally enough if you develop apps. So this kind of led to semi-untethered jailbreaks, because initial code execution was not an issue anymore. Anybody could just get that free certificate, sign apps, and run some kind of code, which was sandboxed. So the jailbreak focus shifted to more powerful kernel bugs which were reachable from the sandbox. So you had jailbreaks using just one single bug, or maybe just two bugs. And the jailbreaks were then distributed as an IPA, which is an installable app. People would download it, sign it themselves, put it on the phone, and just run the app. So semi-untethered means you can reboot into non-jailbroken mode.
However, you can get to jailbroken mode easily by just tapping an app. And over the years, Apple stepped up its game constantly. With iOS 5, they introduced ASLR, address space layout randomization. With iOS 6, they added kernel ASLR. With the introduction of the iPhone 5s, they added 64-bit CPUs, which isn't really a security mitigation — it just changed a bit how you would exploit things. The real deal started to come with iOS 9, where they first introduced kernel patch protection, an attempt to make the kernel immutable and not patchable. And they stepped that up with the iPhone 7, where they introduced the Kernel Text Read-only Region, also known as KTRR. With iOS 11, they removed 32-bit libraries, which I think has very little to no impact on exploitation. I mainly mention it because up to that point, Cydia was compiled as a 32-bit binary, and that stopped working. It had to be recompiled for 64-bit, which took a while, so it took some time until you could get a working Cydia on 64-bit iOS 11. And with the iPhone XS, which came out just recently, they introduced pointer authentication codes. I'm going to go into more detail on these hardware mitigations in the next few slides. So let's start with kernel patch protection. When people say KPP, they usually refer to what Apple calls WatchTower. WatchTower, as the name suggests, watches over the kernel and panics when modifications are detected. It prevents the kernel from being patched — at least that's the idea of it. It doesn't really prevent it, because it's broken, but as they engineered it, it should prevent you from patching the kernel. So how does it work? WatchTower is a piece of software which runs in EL3, which is ARM exception level 3. Exception levels are kind of privilege separations, where EL3 is the highest and EL0 is the lowest. And you can trigger an exception to call handler code in higher levels.
So the idea of WatchTower is that a recurring event — which is FPU usage — triggers a WatchTower check from the kernel. And you cannot really turn it off, because you do need the FPU. So if you picture what it looks like: we have the watchtower to the left, which totally looks like a lighthouse, and the applications at the right. And in the middle, in EL1, we have the kernel. And recent studies revealed that this is exactly what the XNU kernel looks like. So how KPP works: an event occurs from time to time from a user application — for example, JavaScript makes heavy use of floating point. The event would then go to the kernel, and the kernel would trigger WatchTower as it tries to enable the FPU. And WatchTower would scan the kernel. If everything is fine, it would transition execution back into the kernel, which would then transition back into user space, which can then use the FPU. However, with a modified kernel, when WatchTower scans the kernel and detects the text modification, it would just panic. So the idea is that the kernel is forced to call WatchTower, because the FPU is blocked otherwise. But the problem at the same time is that the kernel is in control before it calls WatchTower. And this thing was fully defeated by qwertyoruiop in Yalu for iOS 10.2. So here is how qwertyoruiop's KPP bypass works. The idea is: you copy the kernel in memory, and you modify the copied kernel. Then you modify the page tables to use the patched kernel. And whenever the FPU triggers a WatchTower inspection, before actually calling WatchTower, you switch back to the unmodified kernel and let it run, let it check the unmodified kernel. When that returns, you go back to the modified kernel. So this is what it looks like: we copy the kernel in memory. We patch the modified copy. We switch the page tables to actually use the modified copy.
And when we have the FPU event, we just switch the page tables back, forward the call to WatchTower, then let WatchTower scan the unmodified kernel. And after the scan, we just return to the patched kernel. So the problem here is time of check versus time of use — a classic TOCTOU. And this works on the iPhone 5s, the iPhone 6, and the iPhone 6s. And it's not really patchable. However, with the iPhone 7, Apple introduced KTRR, which kind of fixes that. And there they really managed to make an unpatchable kernel. So how does KTRR work? The Kernel Text Read-only Region — I'm going to present it as described by Siguza in his blog — adds an extra memory controller, the AMCC, which traps all writes to the read-only region. And there are extra CPU registers which mark an executable range, which are the KTRR registers. And they obviously mark a subsection of the read-only region. So you have hardware enforcement, set at boot time, for a read-only memory region, and hardware enforcement, set at boot time, for an executable memory region. So this is the CPU, and this is the memory at the bottom. You would set the read-only region at boot, and since that's enforced by the hardware memory controller, everything inside that region is not writable, and everything outside that region is writable. And the CPU has KTRR registers which mark begin and end. So the executable region is a subsection of the read-only region: everything outside it cannot be executed by the CPU, and everything inside the read-only region cannot be modified. And this has not been truly bypassed yet. There's been a bypass, but that actually targeted how the thing gets set up. That's fixed, and now it's presumably set up properly, and so far it hasn't been bypassed. So jailbreaks are still around. So what are they doing? Well, they just work around kernel patches. And this is when KPP-less jailbreaks evolved, which means, well, they just don't patch the kernel.
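The check/use race behind that KPP bypass can be boiled down to a few lines of Python. This is a toy model, not the actual Yalu code: the "page tables" are just a dict, and a hash comparison stands in for WatchTower's integrity scan:

```python
import hashlib

clean_kernel = b"original kernel text"
known_good = hashlib.sha256(clean_kernel).hexdigest()

# The attacker keeps a pristine copy and a patched copy of the kernel,
# and controls which one the "page tables" currently point at.
patched_kernel = b"patched  kernel text"
page_tables = {"kernel": patched_kernel}   # normally running the patched copy

def watchtower_scan() -> None:
    """EL3 integrity check: panic if the visible kernel was modified."""
    seen = hashlib.sha256(page_tables["kernel"]).hexdigest()
    if seen != known_good:
        raise RuntimeError("panic: kernel integrity violation")

def fpu_event() -> str:
    # Time of check: point the mapping back at the clean copy...
    page_tables["kernel"] = clean_kernel
    watchtower_scan()                      # scans the unmodified kernel, passes
    # Time of use: ...then switch back to the patched copy and keep running.
    page_tables["kernel"] = patched_kernel
    return "still running patched kernel"

print(fpu_event())
```

The scan only ever sees the clean copy, yet execution continues on the patched one — exactly the TOCTOU gap the talk describes.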
But before we dive into that, let's take a look at what previous jailbreaks actually did patch in the kernel. So the general goals are to disable code signing, to disable the sandbox, to make the root file system writable, and to somehow make tweaks work, which involves making MobileSubstrate or libsubstitute work, which is the library for hooking. And I was about to make a list of kernel patches which you could simply apply. However, the techniques and patches vary so much across individual jailbreaks that I couldn't even come up with a common list of kernel patches among the different jailbreaks I worked on. So there's no general set of patches — some prefer to do it this way, some prefer to do it that way. So instead of giving a full list, I'll just show you what the h3lix jailbreak patches. The h3lix jailbreak first patches i_can_has_debugger, which is a boot-arg — a variable in the kernel. If you set that to true, that relaxes the sandbox. Relaxing the sandbox or disabling code signing usually involves multiple steps. Also, since iOS 7, you need to patch mount, because it's actually hard-coded that the root file system cannot be mounted read-write. Since iOS 10.3, it's also hard-coded that you cannot mount the root file system without the nosuid flag, so you probably want to patch that out as well. If you patch both of these, you can remount the root file system as read-write. However, you cannot actually write to the files on the root file system unless you also patch the Lightweight Volume Manager (LwVM), which you only need to do on iOS 9 up to iOS 10.3 — later, when they switched to APFS, you don't need that anymore. Also, there's a variable called proc_enforce; you set that to 0, which is one of the things you need to do to disable code signing. Another flag is cs_enforcement_disable; set that to 1 to disable code signing.
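As a rough illustration of why flipping those kernel variables matters, here is a hypothetical Python model of an exec-time check gated on proc_enforce and cs_enforcement_disable. The variable names come from the talk; the surrounding logic is invented for illustration and is not how XNU actually structures the check:

```python
# Simulated kernel globals (stock values).
state = {"proc_enforce": 1, "cs_enforcement_disable": 0}

def may_execute(binary_signed: bool) -> bool:
    """Hypothetical exec-time decision gated on the two flags."""
    if state["cs_enforcement_disable"]:
        return True                  # enforcement globally switched off
    if not state["proc_enforce"]:
        return True                  # per-process enforcement switched off
    return binary_signed             # otherwise a valid signature is required

assert not may_execute(binary_signed=False)   # stock kernel: unsigned refused

# What a jailbreak's kernel patch does: flip the flags in kernel memory.
state["proc_enforce"] = 0
state["cs_enforcement_disable"] = 1
assert may_execute(binary_signed=False)       # unsigned binaries now run
```

The point is simply that these are data patches: no kernel instruction changes, just two variables.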
So AMFI, which is Apple Mobile File Integrity, is a kext which handles the code signing checks. This kext imports the memcmp function, so there's a stub, and one of the patches is to patch that stub to always return 0 via some simple gadget. What this does is: whenever it compares something in the code, it would always say that the compare succeeds and the values are equal. I'm not entirely sure what exactly it compares. This patch dates back to Yalu, but just applying it helps killing code signing — that's why it's in there. Another thing h3lix does is add the get-task-allow entitlement to every process. This is for allowing writable executable mappings, and this is what you want for MobileSubstrate tweaks. So originally this entitlement is used for debugging, because there you also need to be able to modify code at runtime for setting breakpoints, while we use it for getting tweaks to work. Since iOS 10.3, h3lix also patches the label_update_execve function. The idea of that patch was to fix the "process exec denied while updating label" error message in Cydia and several other processes. Well, that seems to completely nuke the sandbox and also break sandbox containers. This is also the reason why, if you jailbreak with h3lix, apps would save their data in the global directory instead of their sandbox containers. And you also kill a bunch of checks in the MAC policy ops to relax the sandbox. So if you want to check out how that works yourself: unfortunately, h3lix itself is not open source and I have no plans of open sourcing it. But there are two very, very closely related projects which are open source. One is doubleH3lix, which is pretty much exactly the same but for 64-bit devices, and it does include the KPP bypass, so it also patches the kernel. And the other is jelbrekTime, which is the watchOS jailbreak — h3lix is for iOS 10, and the watchOS jailbreak is kind of the iOS 11 equivalent, but it shares most of the code.
So most of the patch code is the same; if you want to check that out, check these out. So, KPP-less jailbreaks. The idea is: don't patch the kernel code, but instead patch the data. As an example, let's go for remounting the root file system. We know we have hard-coded checks which forbid us to mount the root file system read-write. But what we can do is: in the kernel, there's a structure representing the root file system, and we can patch that structure, removing the flag saying that this structure represents the root file system. We simply remove that, then we call remount on the root file system, and then we put the flag back in. So we kind of bypass the hard-coded check. For disabling code signing and disabling the sandbox, there are several approaches. In the kernel, there's a trust cache. Usually AMFI handles the code signing — amfid, the daemon in user space, handles the code signing requests — but the daemon itself also needs to be code signed, so you have a chicken-and-egg problem. That's why, in the kernel, there is a list of hashes of binaries which are allowed to execute. And this thing is actually writable, because when you mount the developer disk image, it actually adds some debugging binaries to it. So you can simply inject your own hash into the trust cache, making your binary trusted. Another approach, taken by jailbreakd in the latest Electra jailbreak, is to have a process — in this case jailbreakd — which patches processes on creation. So when you spawn a process, that thing would immediately stop the process, go into the kernel, look up the structure, and remove the flags saying "kill this process when the code signature becomes invalid". It would also add the get-task-allow entitlement. And after it's done that, it would resume the process, and then the process won't get killed anymore, because it's kind of already trusted. And the third approach, taken — or demoed — by Brandon Azad, was to take over amfid in user space completely.
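The trust cache approach can be sketched like this — a simplified Python model where a plain SHA-256 stands in for the real CDHash, the cache is just a set, and the paths are made up:

```python
import hashlib

def cdhash(binary: bytes) -> str:
    # Real trust caches store CDHashes of code signatures;
    # a plain SHA-256 of the binary merely stands in here.
    return hashlib.sha256(binary).hexdigest()

# Kernel-side list of hashes of binaries allowed to execute.
trust_cache = {cdhash(b"/sbin/launchd"), cdhash(b"/usr/libexec/amfid")}

def amfi_allows(binary: bytes) -> bool:
    """A binary is trusted if its hash is in the kernel's trust cache."""
    return cdhash(binary) in trust_cache

my_tool = b"unsigned jailbreak payload"
assert not amfi_allows(my_tool)       # rejected on a stock kernel

# With a kernel write primitive, inject our own hash into the cache...
trust_cache.add(cdhash(my_tool))
assert amfi_allows(my_tool)           # ...and the binary is now trusted
```

Again this is a data-only modification: the code-signing logic itself is never patched, only the list it consults.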
So if you can get a mach port to launchd or to amfid, you can impersonate it. And whenever the kernel asks AMFI "is that trusted?", you would reply: okay, yeah, that's trusted, that's fine, you can run it. So that way you don't need to go for the kernel at all. So, future jailbreaks. Kernel patches are not really possible anymore, and they're not even required, because we can still patch the kernel data, or not go for the kernel at all. But we're still not done yet — we still didn't cover the post-apocalyptic age, or short: PAC. Well, actually PAC stands for pointer authentication codes, but you get the joke. So pointer authentication codes were introduced with the iPhone XS. And if we quote Qualcomm, this is a stronger version of stack protection. Pointer authentication codes are similar to message authentication codes, but for pointers, if you're familiar with those. And the idea is to protect data in memory in relation to a context, with a secret key. So the data in memory could be the return address and the context could be the stack pointer. Or the data could be a function pointer and the context could be a vtable. So let's take a look at how PAC is implemented. At the left you can see the function entry, the function prologue, and the function epilogue without PAC. And with PAC, the only thing that really changes is: when you enter a function, before actually doing anything inside it, you would normally store the return address on the stack. But now, when doing that, you first authenticate the pointer with the context — you kind of create a signature, store it inside the pointer, and then put it on the stack. And when you leave the function, you take back the pointer, again calculate the signature, and see if both signatures match. If they do, you just return; if the signature is invalid, you throw a hardware fault. So this is what it looks like: for 64-bit pointers, you don't really use all of the available bits.
Usually, 48 bits are used for virtual memory, which is more than enough. So if you use memory tagging, you have seven bits left for putting in the signature; if you do not use memory tagging, you can use up to 15 bits for the pointer authentication code. The basic idea of PAC is to kill ROP-like code reuse attacks. You cannot simply smash the stack and create a ROP chain, because every return would have an instruction verifying the signature of the return address, and that means you would need to sign every single one of these pointers — and since you don't know the key, you can't do that in advance. So you cannot modify a return address, and you cannot swap two signed values on the stack unless the stack pointer is the same for both. Can we bypass it? Maybe, I don't know. But we can take a look at how the thing is implemented. If we take a look at the ARM slides, you can see that the PAC is basically derived from a pointer, a 64-bit context value, and the key; we put all of that into an algorithm P, and that gives us the PAC, which we store in the unused bits. The algorithm P can either be QARMA, or it can be something completely custom. And the ARM instructions kind of hide the implementation details. So if we would go for attacking PAC, there are two attack strategies. We can either go straight for the cryptographic primitive — take a look at what cipher it is or how that cipher is implemented, maybe it's weak — or we can go and attack the implementation. If we attack the implementation, we could look for signing primitives, which could be small gadgets we could somehow jump to or execute to sign a value — either arbitrary-context signing gadgets or maybe a fixed-context signing gadget. We could also look for unauthenticated code. For example, I imagine the code which sets up PAC itself is probably not protected by PAC, because you can't sign the pointer if the key is not set up yet.
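To make the sign-on-entry / verify-on-return scheme concrete, here is a toy Python model. The real algorithm P is QARMA or implementation-defined; truncated HMAC-SHA256 merely stands in for it here, and the bit widths follow the slide (48-bit pointers, 15-bit PAC in the unused top bits):

```python
import hashlib
import hmac
import secrets

KEY = secrets.token_bytes(16)     # per-boot secret key, unknown to the attacker
PAC_BITS, PTR_BITS = 15, 48

def pac(ptr: int, ctx: int) -> int:
    """Algorithm P: MAC over (pointer, context), truncated to PAC_BITS."""
    msg = ptr.to_bytes(8, "little") + ctx.to_bytes(8, "little")
    mac = hmac.new(KEY, msg, hashlib.sha256).digest()
    return int.from_bytes(mac[:2], "little") & ((1 << PAC_BITS) - 1)

def sign(ptr: int, ctx: int) -> int:          # like PACIASP in the prologue
    return ptr | (pac(ptr, ctx) << PTR_BITS)

def auth(signed_ptr: int, ctx: int) -> int:   # like AUTIASP before the return
    ptr = signed_ptr & ((1 << PTR_BITS) - 1)
    if (signed_ptr >> PTR_BITS) != pac(ptr, ctx):
        raise RuntimeError("hardware fault: PAC mismatch")
    return ptr

ret_addr, stack_ptr = 0x1000_2000_3000, 0x16FD_0000
stored = sign(ret_addr, stack_ptr)            # what lands on the stack
assert auth(stored, stack_ptr) == ret_addr    # honest return works

# A stack smash that changes the pointer cannot fix up the signature:
try:
    auth(stored ^ 0x40, stack_ptr)            # flip one pointer bit
except RuntimeError:
    pass    # detected (up to the 2**-15 chance of a random collision)
```

The 15-bit truncation is also why collisions exist at all — the attack surface discussed later in the talk.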
Maybe that code is still accessible — we could look for something like that. We could also try to replace pointers which share the same context. That's probably not feasible for return addresses on the stack, but maybe it's feasible for swapping pointers in a vtable. Or maybe you come up with your own clever idea of how to bypass it — these are just some ideas. So I want to make a point here: in my opinion, it doesn't make much sense to try to attack the underlying cryptography of PAC. I think that if we go for attacking PAC, it makes much more sense to look for implementation attacks rather than attacking the cryptography, and the next few slides are just there to explain why I think that. So let's take a look at QARMA, which was proposed by ARM as one of the possible ways of implementing PAC. QARMA is a tweakable block cipher: it takes an input and a tweak and gives you an output, which fits perfectly for what we want. So I started looking at PAC and at QARMA and came up with ideas of how you could maybe attack that cipher, and at some point I realized that practical crypto attacks on QARMA — if there will be any in the future — will probably be, that's what I think, completely irrelevant to PAC security. So why is that? Just so you know, the next few slides are gonna bore you with some math, but it's not too complex. We can define PAC as a function which takes a 128-bit input and a 128-bit key and maps them to a 15-bit output. Or, more realistically, we can define it as a function which takes a 96-bit input with the 128-bit key, because we have a 48-bit pointer — the other bits we can't use, because that's where we store the signature — and we're most likely using the stack pointer as the context, so that one will also only use 48 bits. So then we have PAC as a construct, and we define the attacker with the following capabilities.
The attacker is allowed to observe some pointer-and-signature pairs, and I assume you can get those through some info leaks. For example, you have some bug in the code which lets you dump a portion of the stack with a bunch of signed pointers. This is why you can observe some — not all, but some. And I would also allow the attacker to slightly modify the context. What I mean by that is: I imagine a scenario where the attacker could maybe shift the stack, maybe through more nested function calls, before executing the leak, which would actually give you two signatures for the same pointer but with a different context. Maybe that's somewhat helpful. So we realize that the cryptographic attacker is super weak. So the only other cryptographic problem there could be is collisions — and those of you who've seen my last talk probably know I love collisions. So we have a 48-bit pointer, a 48-bit context, and a 128-bit key. We sum that up and divide by the 15 bits of output we get from PAC, which gives us 2^209 possible collisions, because we map so many bits to so few bits. And even if we reduce the pointers — because in practice probably less than 34 bits of a pointer are really used — we still get 2^181 collisions, which is a lot of collisions. But the bad thing here is: random collisions are not very useful to us unless we can predict them somehow. So let's take a look at how a cryptographically secure MAC is defined. A MAC is defined as follows: let Π be a MAC with the following components, which are basically Gen, Mac, and Verify. Gen just somehow generates a key — it's only here for the sake of mathematical completeness; just assume we generate the key by randomly choosing n bits, or however many bits the key needs.
And Mac is just a function where we put in an n-bit message, called m, and it gives us a signature t — I'm gonna say signature, but in reality I mean a message authentication code. The third function is Verify: you give it a message and a signature, and it returns true if that signature is valid for the message, or false if it's not. And when cryptographers prove that something is secure, they like to play games. So I'm gonna show you my favorite game, which is the MAC-forge game. The game is pretty simple. To the left you have the game master, who is playing MAC-forge, and to the right the attacker. The game starts when the MAC-forge game master informs the attacker how many bits we are playing with. So this first message, 1^n, basically means: hey, we're playing MAC-forge with, I don't know, 64-bit messages. So the attacker knows the size. Then the game master generates a key, and the attacker can choose up to q messages of n-bit length and send them over to the game master. The game master will generate signatures and send them back, so the attacker can observe all the messages he generated and all the matching signatures. What the attacker then needs to do is choose another message, which he did not send over yet, and somehow come up with a valid signature. If he can manage to do that, he sends it over, and if that's actually a valid signature for the message, he wins the game; otherwise he loses. So we say a MAC is secure if the probability that an attacker can somehow win this game is negligible. I'm gonna spare you the mathematical definition of what negligible means, but if just guessing or random trying is the best attack, it still counts as secure. So as you can see, a MAC which is secure needs to withstand this — but our PAC attacker does not even have this oracle. So our attacker for PAC is even weaker than that. So why do we not have this oracle?
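The MAC-forge game just described can be written out as runnable Python, with HMAC-SHA256 playing the role of a secure MAC and only the trivial guessing attacker shown:

```python
import hashlib
import hmac
import secrets

class GameMaster:
    """Plays MAC-forge: Gen on setup, a Mac oracle, and Verify."""

    def __init__(self, n: int = 64):
        self.n = n                            # announced as 1^n: message size
        self.key = secrets.token_bytes(32)    # Gen: random key
        self.queried = set()

    def mac(self, m: bytes) -> bytes:         # Mac: the signing oracle
        self.queried.add(m)
        return hmac.new(self.key, m, hashlib.sha256).digest()

    def verify(self, m: bytes, t: bytes) -> bool:   # Verify
        good = hmac.new(self.key, m, hashlib.sha256).digest()
        return hmac.compare_digest(t, good)

    def attacker_wins(self, m: bytes, t: bytes) -> bool:
        # A forgery only counts for a message the oracle never signed.
        return m not in self.queried and self.verify(m, t)

gm = GameMaster()
# The attacker queries up to q messages and observes the signatures...
observed = {m: gm.mac(m) for m in (b"msg-0", b"msg-1", b"msg-2")}
# ...then must forge a tag for a fresh message. Blind guessing fails:
assert not gm.attacker_wins(b"msg-3", secrets.token_bytes(32))
# Replaying an already-queried pair does not count as winning either:
assert not gm.attacker_wins(b"msg-0", observed[b"msg-0"])
```

Note what the PAC attacker from the previous slide is missing: the `mac()` oracle. He only gets to observe pairs that the program happens to leak, which is exactly the point made next.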
Well, simple: if we allowed the attacker to sign arbitrary messages, the attacker wouldn't even need to try to recover the key or forge a signature, because he could just send over all the pointers he wants signed, get back signed pointers, and never bother about breaking the crypto at all. So basically the point I'm trying to make here is that the PAC attacker is weaker than a MAC attacker. Every secure MAC we know is therefore also a secure PAC, but even an insecure MAC might still be sufficiently secure for PAC. Secure MACs have been around for a while, and thus, in my opinion, if somebody who knows what they are doing were to design a PAC algorithm today, it would likely be secure. So instead of going for the crypto, I think we should rather go for implementation attacks, because those will be around forever. By that I mean you can look at how the crypto itself is implemented, but especially at how PAC is used in the actual code. Maybe you can find signing oracles, maybe you can find unauthenticated code. I think this is where we need to go if we want to bypass PAC somehow. So just to recap where we're going: future iPhone hacks are probably not going to try to bypass KTRR, and I think they will not try to patch kernel code, because so far we can achieve pretty much everything we want for an end-user jailbreak without having to patch the kernel. And I think people are going to struggle, at least a bit, when exploiting with PAC, because it will either make some bugs unexploitable or really, really hard to exploit. Also, maybe we're going to avoid the kernel entirely, as Bazad has demoed that userland-only jailbreaks are possible. Maybe we're going to recalculate what the low-hanging fruits are, maybe go back to iBoot, or look at whatever else is interesting. So that was about it. Thank you very much for your attention.
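The signing-oracle idea can be illustrated with a small sketch. Everything here is hypothetical: `pac_sign` stands in for the hardware signing instruction (modeled with a truncated HMAC, not the real cipher), and `vulnerable_resign` is an invented gadget representing code that re-signs an attacker-controlled pointer:

```python
# Sketch of why a signing oracle defeats PAC without breaking the crypto.
# The key, the MAC choice, and the vulnerable gadget are all hypothetical.
import hmac, hashlib

SECRET_KEY = b"\x00" * 16  # held by hardware, never readable by the attacker

def pac_sign(pointer: int, context: int) -> int:
    # Model of the hardware signing primitive: MAC over (pointer, context),
    # truncated to the 15 bits that fit in the pointer's top bits.
    data = pointer.to_bytes(8, "little") + context.to_bytes(8, "little")
    tag = hmac.new(SECRET_KEY, data, hashlib.sha256).digest()
    return int.from_bytes(tag[:2], "little") & 0x7FFF

# Hypothetical vulnerable gadget: privileged code that signs a pointer the
# attacker fully controls and hands the signed value back.
def vulnerable_resign(attacker_pointer: int) -> int:
    return pac_sign(attacker_pointer, context=0)

# The attacker never learns SECRET_KEY, yet obtains a valid signature for
# any pointer of his choosing -- the crypto is bypassed, not broken.
target = 0xFFFFFFF007A5C000  # arbitrary example kernel address
forged_sig = vulnerable_resign(target)
assert forged_sig == pac_sign(target, 0)
```

This is exactly why implementation review matters more than cryptanalysis here: one such gadget in real code gives the attacker everything the MAC-forge oracle would have given him.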
If you would like to ask a question, please line up at the microphones in the room. We do not have a question from the internet. We have one question over there. Yes, please. Okay. Hi, I would be interested in your comment on the statement from Zorik that jailbreaking basically isn't a thing anymore, because you're breaking so many security features that the phone becomes more insecure than the original reasons for jailbreaking can justify. Well, I don't think jailbreaking itself nowadays makes the phone really insecure. Of course, if you patch the kernel and disable all the security features, that will be less secure. But if you look at what we have here with the unpatchable kernel, I think the main downside of being jailbroken is the fact that you cannot go to the latest software version, because you want the bugs to still be in there to have the jailbreak. So I don't really think that on a KTRR device the jailbreak itself makes it less secure; just the fact that you're not on the latest firmware is the insecure part of it. All right, thank you. Mic number two, your question. Hi, good talk. Could you go back to the capabilities of the adversary, please? This one, yes. Yeah, so you said you can do basically two things, right? You can observe some pointer and signature pairs. But why is this not an oracle? Because you cannot choose your messages yourself. And you also have an oracle that says whether a signature is valid for a chosen message. Well, yeah, but if you take a look at the game for a secure MAC, the attacker can choose up to q messages and send them over; he can do whatever he wants with those messages and get signatures back. The PAC attacker, on the other hand, can only see a very limited number of messages and their matching signatures, and he has little to no influence on those messages. All right, thank you. Okay, so it's a bit weaker.
So yeah, that's the point, just that it's weaker. Thanks. Do we have a question from the internet? Seems not. Okay. Then I don't see anyone else lined up, so please give a big round of applause to Dimster. Thank you. Thank you.