If you are a macOS developer and you've updated to Xcode 13 and created a new project, you may have noticed that there's a new method in the application delegate: applicationSupportsSecureRestorableState:, which by default returns YES. In this talk, I'm going to describe the vulnerability that necessitated this change and show how it could be applied for three different types of attacks. I'm Thijs Alkemade, a security researcher at Computest. Computest is a security testing company in the Netherlands. We provide security services like pen testing, incident response, code audits, things like that. But together with my colleague Daan Keuper, I'm part of the research department, which means that we don't work for customers; instead we can basically research anything that we think is important. So we try to look for targets that have a lot of users or where the impact can be large, just to make the world a little bit safer. Other work you may have seen from us is our zero-click RCE in Zoom at Pwn2Own last year. And we won Pwn2Own Miami this year with five different vulnerabilities in ICS systems. There are write-ups of these on our website if you want to know more. But today I'm going to be talking about macOS security, which has been a bit of a specialty for me. I've been doing a lot of work with macOS over the years; it's the system I know the best. Those other projects were more incidental; macOS is really where my passion lies. This talk will consist of three parts. First, I'll talk a little bit about the macOS security model, because many people don't really understand how it works and still have incorrect assumptions about security on macOS. Then I'll describe the vulnerability that I found, a process injection vulnerability. And in the third part, I'll demonstrate how this vulnerability could be applied for escaping the sandbox, privilege escalation, and bypassing SIP.
So first of all, the macOS security model. To describe it, I'll first describe the Unix security model that macOS used as well. The basic idea behind that model is that users are security boundaries, but processes are not. If you look at the permissions for files, access is determined by the owner and the group, and there are nine different bits that determine whether the owner, the group, or everybody is allowed to read, write, or execute that file. Also, if you want to attach a debugger to another process, then in general both need to be running as the same user. There's one exception: the root user, who always has access to all files and can attach to any process. Therefore root can basically access all of the data on the system, whether in memory or on disk. Now, this used to be the security model of macOS as well, but this has changed. In 2015, with the release of El Capitan, Apple introduced System Integrity Protection, and this is a screenshot from the WWDC talk where they introduced it. The basic idea behind System Integrity Protection at the time was two things: first, to make a security boundary between any process running as root and the kernel; and secondly, to protect the operating system from being modified, even by the root user. This feature is also sometimes known internally as "rootless". Many people thought that meant Apple would take the root user away from normal users, like on iOS where you have no root user, but that's really not what's meant by that name. The idea is that root is less powerful, so that's why it's "rootless". But of course, there do need to be processes that can modify the system, because you need to be able to install updates. For that, they use entitlements, which are basically metadata that is included when generating a code signature for an application.
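The nine permission bits just described can be sketched in a few lines of Python. This is only an illustration of the classic Unix model, nothing macOS-specific:

```python
import stat

def describe_mode(mode: int) -> str:
    """Render the nine Unix permission bits as the familiar rwxrwxrwx string."""
    bits = [
        (stat.S_IRUSR, "r"), (stat.S_IWUSR, "w"), (stat.S_IXUSR, "x"),
        (stat.S_IRGRP, "r"), (stat.S_IWGRP, "w"), (stat.S_IXGRP, "x"),
        (stat.S_IROTH, "r"), (stat.S_IWOTH, "w"), (stat.S_IXOTH, "x"),
    ]
    return "".join(ch if mode & bit else "-" for bit, ch in bits)

# 0o640: owner may read and write, group may read, others get nothing
print(describe_mode(0o640))  # rw-r-----
```

The point of the model is that these bits, plus the user IDs of the processes involved, are the whole access-control story; there is no per-process distinction beyond that.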
So to do dangerous operations like loading a kernel extension, modifying system files, or debugging a system process, you no longer need to be a specific user. Instead, the system will check whether the process trying to do that has a certain entitlement. Now, SIP has over the years been extended by Apple with more and more restrictions. For example, debugging any application is now also forbidden by SIP unless the application specifically allows it. And there's this feature called data vaults; I have an example of that here. Apple considers your email database, but also your chat history or your Safari browsing history, very sensitive, so Apple doesn't want just any process to be able to read it. So it is placed into what's known as a data vault. As you can see, you cannot just list the contents of that directory; even if you use sudo, you cannot list its contents. The only processes that are allowed to access that directory are those that have a special entitlement. Your mail client, of course, needs to be able to access that directory, so it has this entitlement, com.apple.rootless.storage.Mail, and that gives it access to the mail data vault, which is the location that I showed before. So Mail can access the files in there while any other process cannot. Of course, this new security model also introduces new types of vulnerabilities, or makes existing vulnerability types suddenly more important. One of those is what's known as process injection: basically the ability for one process to execute code, or to add code, that the system thinks belongs to another process. So the system thinks it's process B, with the entitlements of process B, but actually process A specified the code that's being run. On the right is one slide from the presentation where Apple introduced SIP.
They already disabled a lot of things that could be used for that, like task_for_pid, DTrace, dynamic library environment variables, stuff like that. And they also added the hardened runtime to make this possible for third-party applications as well: if an application opts into the hardened runtime, then it's now also harder to inject into that process. But macOS is old, it's large, it's established, so there's a lot of code that was written before this change in the security model, and it's really hard to reevaluate the entire system when you make a change like this. Now, process injection is a vulnerability type that's often found in an incidental way. You find a third-party application that's missing the hardened runtime, or something like that, or it has an exception. And that has impact: for example, if an application has access to your webcam and you have malware installed on that machine, then if that malware can inject into the application, it can use the application's permission to access the webcam without the user being asked to give permission. These attacks also often work by downgrading an application to an older version that did not have the hardened runtime. But of course, those are incidental process injection vulnerabilities. What's way more fun is a process injection vulnerability that applies everywhere. So we get to CVE-2021-30873, which was a process injection vulnerability in AppKit, and which therefore affected all applications developed using AppKit, which is basically the framework you use for creating desktop applications on macOS. This vulnerability was in a feature called saved state, also known internally as persistent UI. What this feature is used for is, for example: when you shut down your computer, it asks if you want to restore your open windows the next time you log in.
When it restores those windows, they will have the same locations. And if you have an unsaved document and you shut down and the windows are restored, then that document should still be there, if the application implements this correctly. Now, most of this works out of the box; there's nothing the application needs to do to opt in. But for document-based applications it can be necessary to store some extra data about the document into that saved state. So it can be extended, but by default it already works for all applications. The way this works is that it stores a couple of files into a directory, in ~/Library/Saved Application State. There are two files important for this vulnerability. There's the windows.plist file, which is basically a list of all of the windows that the application had open, with an encryption key for each window. And there's the data.data file, which is a custom format; as far as I know, it's not used anywhere else on macOS. It's also a list of records, and each entry in that list corresponds to an entry from windows.plist and contains an encrypted serialized object. Now, the encryption here is AES-CBC. I have no idea why it's encrypted, because the key and the file are right next to each other, with no different permissions on the files: anything that can read the key can also read the file. So I don't understand why it's encrypted. There's also no integrity check on it at all. And the vulnerability here is that it was a serialized object using an insecure serializer, which meant that it was possible to exploit it. Serialization vulnerabilities are very well known in languages like C# or Java; they also affect Python and Ruby and a lot of other languages. But there hasn't been much published about them for macOS. Apple has a serialization mechanism called NSCoding, and they also realized that these same serialization-type vulnerabilities could affect it.
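To give a feel for what "a list of records" in a custom format means, here is a toy parser for a hypothetical record layout (magic marker, window ID, payload length, payload). The field order and sizes here are assumptions for illustration only; the real data.data format differs in its details:

```python
import struct
from typing import List, Tuple

MAGIC = b"NSCR1000"  # hypothetical record marker for this sketch

def parse_records(blob: bytes) -> List[Tuple[int, bytes]]:
    """Parse (window_id, encrypted_payload) records from a byte blob.

    Assumed layout per record: 8-byte magic, 4-byte big-endian window ID,
    4-byte big-endian payload length, then the payload itself.
    """
    records, off = [], 0
    while off < len(blob):
        if blob[off:off + 8] != MAGIC:
            raise ValueError("bad record magic")
        window_id, length = struct.unpack_from(">II", blob, off + 8)
        payload = blob[off + 16:off + 16 + length]
        records.append((window_id, payload))
        off += 16 + length
    return records

# Round-trip one synthetic record: window 1 with a 32-byte opaque payload
payload = b"\x00" * 32  # stand-in for an AES-CBC encrypted serialized object
blob = MAGIC + struct.pack(">II", 1, len(payload)) + payload
print(parse_records(blob) == [(1, payload)])  # True
```

The key observation from the talk stands regardless of the exact layout: each payload is an encrypted serialized object, and the decryption key sits in windows.plist right next to it, with no integrity check.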
So they introduced NSSecureCoding way back in 2012, quite a long time ago already. Many uses of NSCoding where security is important now use that NSSecureCoding variant. It's often used for communication between processes, and there it's exclusively the secure variant. It's even apparently used within iMessage: if you send a message to another user, then that message is a serialized object. So you can see that the security of the secure version is very important. To demonstrate the difference between the insecure and the secure version: in the insecure version, you first create the object, and then you can check, is this the type I expect? But in the secure version, you decode the object only if it is of a specific class. The reason the first version is insecure is that by the time you check it, the object already exists. The constructor may have been called, or the destructor might do something, so that could lead to a vulnerability if those objects do things just by existing. So how would an attack like this work? You could create a new saved state, write it to the directory with a new encryption key, and then ask the system to open the application. The application will automatically deserialize that object, because it sees there's a saved state it should restore. And at that point you are deserializing your object in another application, which could mean code execution in another application. But of course, now comes the challenge: what malicious object can we write there? I spent some time looking at prior work for this. One of the famous projects for generating serialized objects for Java is ysoserial, and for C# there's ysoserial.net. But a ysoserial.objc does not exist.
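The difference between the two decoding patterns has a close analog in Python's pickle module, which makes it easy to demonstrate. This is a sketch, not Apple's API: the insecure pattern constructs whatever class is in the archive and only type-checks afterwards, while the secure pattern enforces a class allow-list before anything is instantiated (the WindowState class here is a made-up stand-in):

```python
import io
import pickle

class WindowState:
    """Stands in for a harmless class an app expects in its saved state."""
    pass

# Insecure pattern (NSCoding-style): deserialize first, type-check after.
# By the time isinstance() runs, whatever class was in the archive has
# already been constructed, so any side effects have already happened.
obj = pickle.loads(pickle.dumps(WindowState()))
assert isinstance(obj, WindowState)

# Secure pattern (NSSecureCoding-style): allowed classes are declared up
# front, and anything else is rejected *before* instantiation.
class RestrictedUnpickler(pickle.Unpickler):
    ALLOWED = {(WindowState.__module__, "WindowState")}

    def find_class(self, module, name):
        if (module, name) in self.ALLOWED:
            return super().find_class(module, name)
        raise pickle.UnpicklingError(f"forbidden class {module}.{name}")

def secure_loads(data: bytes):
    return RestrictedUnpickler(io.BytesIO(data)).load()

try:
    secure_loads(pickle.dumps(ValueError("gadget")))
except pickle.UnpicklingError as e:
    print("rejected:", e)
```

The exploit chains described next exist precisely because the saved-state feature was still using the first pattern.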
I also spent some time digging through some Google Project Zero write-ups about serialization vulnerabilities, but those were targeting the secure version using specific vulnerabilities that have long since been fixed, so they were not very useful for what I was trying to do. I really had to come up with my own chain of objects to get code execution. So how did I look for that? I disassembled a lot of the objects that I could use: I loaded AppKit into a decompiler and looked through all of the initWithCoder: methods, which are the methods that are called when an object is deserialized. I noticed that many of those classes do not support secure coding. Often those classes are not really intended to be sent to another process, and they were also not very interesting from an attacking point of view, because they were not doing very much: they were just recursively decoding some instance variables, and that didn't really help me. I wanted to do more than just decode an object. But eventually I found a couple of objects that I could use. First of all, there's NSRuleEditor. This is basically the widget you have in Mail when you create a mail rule. When this object is deserialized, it takes two objects from the archive, an owner and a key path, and then it creates a binding using that key path to the owner. Now, bindings are a sort of reactive programming technique in macOS, which means that you can directly connect a model to a view without having to create a controller to do all of the boilerplate work of updating the view or updating the model. And one interesting thing about creating a binding is that you specify a key path. If your model is a person, then you can bind to the person's name, or the person's child's name, something like that: you can go through nested properties with a key path.
Those key paths are intended to be used for properties, but there's basically no check that you're actually binding to a property. You can also specify a method with no arguments, and it will be possible to bind to that too. And at the moment you create that binding, it will invoke that method, to get an initial value for the binding. So by just deserializing this object, I could call methods, but only if they had no arguments. But it's a good first step. Then, as a second step, there's NSCustomImageRep. This takes two objects from the archive: a draw object and a draw selector. A selector is something like a function pointer, but for an Objective-C method; it's the name of a method. In the draw function of this class, it would call that method on the object it deserialized, with one argument, namely the image rep itself. So by combining this with the previous step, where we could call a method with no arguments, we can call the draw method on this object. At this point we can call methods even if they have arguments, but we don't have any control yet over those arguments. Now, for the sake of time, I have to skip a couple of steps here, and also for disclosure reasons, because some things in here are still unfixed. But I have two more steps. Now that I can call zero-argument methods, I could extend that to call arbitrary methods, but still only on objects that I can deserialize. Then I used a trick to also create objects that are not deserializable, objects that do not implement that protocol. And then I used a similar trick with a binding to call zero-argument methods on those objects, and another trick to call arbitrary methods on them. And at this point I also have control of the arguments. So I can now basically call any Objective-C method that I want, with arguments that I specify.
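The first gadget in the chain can be illustrated with a small Python analog. This is not Cocoa code, just a sketch of the underlying flaw: a binding resolves a key path on an object and, when fetching the initial value, nothing stops that key path from naming a method instead of a property (Victim and its members are invented for the example):

```python
class Binding:
    """Toy analog of a Cocoa binding. It resolves a dotted key path on an
    owner object and, to fetch the initial value, calls the final attribute
    if it turns out to be callable. Nothing checks that the key path names
    a *property*, which is the gap the NSRuleEditor gadget exploited to
    turn deserialization into a zero-argument method call."""

    def __init__(self, owner, keypath):
        target = owner
        for part in keypath.split("."):
            target = getattr(target, part)
        self.value = target() if callable(target) else target

class Victim:
    secret = "hunter2"

    def dangerous(self):
        # A method that was never meant to be reachable from attacker data.
        return "method invoked!"

print(Binding(Victim(), "secret").value)     # hunter2 (a normal bind)
print(Binding(Victim(), "dangerous").value)  # method invoked! (the gadget)
```

The NSCustomImageRep step then plays the same role as upgrading this from zero-argument calls to calls with an argument: one gadget's output becomes the next gadget's invocation.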
One thing I could do here is just evaluate AppleScript within the process. This is already very powerful. For example, if I was attacking Mail, then I could use some AppleScript to copy files from the data vault to a location where any process can read them, or just read the contents and send them off somewhere else. I can also execute a shell script or something like that from AppleScript. So at this point I was executing code within another process, but it was limited to AppleScript. For the attacks that I wanted to do later on, for two of them this was enough, but for one of them I really needed something equivalent to executing native code, and AppleScript is very limited; there are restrictions on what you can do. So I had to go one step further. But as I mentioned, there's this hardened runtime that is meant to make attacks like this harder. I could not create any memory pages that were both writable and executable, so no JIT mapping. I could not create unsigned executable memory. I could not load any libraries that were not signed by either Apple or the same developer; this is library validation. And I could not use dynamic linker environment variables, which is not relevant here. So it was really tricky to figure out: how can I execute something equivalent to native code within all of these restrictions? And then I noticed that I could load the Python framework. Python was included in macOS at that time, and it was signed by Apple, so I could just load it into any process. And if you import the ctypes module, then you can evaluate something that's basically equivalent to native code: you can call C functions, you can create structures, stuff like that. And ctypes does not conflict with any of the restrictions of the hardened runtime. In many programming languages you can create bindings, but they often need to be compiled, and that doesn't work due to the hardened runtime.
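To see why ctypes is "equivalent to native code", here is a minimal demonstration: calling a raw C function from pure Python, with no compilation step and no new executable pages. (CDLL(None) looks up symbols already loaded into the process, so this works against libc on Linux or libSystem on macOS.)

```python
import ctypes

# Resolve a C function from symbols already present in the process and
# call it directly. No compiler, no writable+executable pages, no new
# library to pass validation: this is why ctypes was such a useful
# primitive under the hardened runtime.
libc = ctypes.CDLL(None)
libc.strlen.argtypes = [ctypes.c_char_p]
libc.strlen.restype = ctypes.c_size_t
print(libc.strlen(b"hello"))  # 5
```

From here it is a small step to calling arbitrary C functions with attacker-chosen arguments, which is effectively native code execution.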
But now I had this challenge: I could call Objective-C methods, but I wanted to evaluate Python, and Python doesn't have an Objective-C API that I could call into. So I needed some intermediate step to bridge these two together. I found another framework I could use, and this is the AppleScript Objective-C bridge, AppleScriptObjC. This is basically AppleScript combined with access to the Objective-C runtime: AppleScript that can call Objective-C methods and create Objective-C objects. One interesting thing about this is that you can load new scripts into a process, and even when you have the hardened runtime and library validation, the scripts can be loaded from a bundle that is not signed. So I could put some new scripts into a new bundle, load only the scripts from that bundle, and they would be added to the Objective-C runtime. With this I could create Objective-C objects and call methods, all of which I could already do before, but what is new here is that I could call C functions from the AppleScript Objective-C bridge. One annoying restriction of this bridge, though, is that I could not create non-object pointers, so no pointers to anything other than an Objective-C object. I could not create structs, and I could not work with C strings; that's just not supported by the language. And in Python, in order to call basically anything, you need to pass either a file path or the Python code that you want to evaluate, so you really need a character pointer to contain either the path or the string you want to evaluate. But then finally I hit upon something that worked: I could call Py_Main with zero and nil, and nil is a valid object in AppleScriptObjC. When you do this, it basically works as if you start a Python REPL: it reads a script from standard input and then acts just like Python would if you launched it on the command line.
With this, I could then import ctypes and evaluate code that's equivalent to native code. Now, many people say AppleScript or Objective-C is verbose, but if you don't think they're verbose enough, then you really should work with the AppleScript Objective-C bridge. I have an example here of how you would call a method in it. You need to use that apostrophe to call methods, and you get these really weird sentences that sort of look like they're supposed to be readable English, but it's really incomprehensible as a language. So to summarize the steps needed to get code execution: we can evaluate AppleScript with the AppleScript Objective-C bridge, then we can evaluate Python, then we can import the ctypes module. At that point, we can evaluate code that's equivalent to native code, despite all of the restrictions of the hardened runtime. Now, how could we exploit this? I wanted to make sure that I had described all of the impact this could have in our report to Apple, so I tried to look for all of the different ways this vulnerability could be applied. First of all: escaping the Mac application sandbox. What you see here is an open panel, and this may look like a very boring window that you see a hundred times a day if you use macOS, but technically it's actually quite complicated. If you are in a sandboxed application, then that application doesn't know about all of your files; it cannot list the files you have, because it's sandboxed. But if the user wants to open a file, it would be really annoying if they couldn't see their files in the application they are using. So Apple created a technology for that: while the window itself is part of the application, the actual contents of the window are drawn by a different process.
It basically works like an iframe on a website: a different process draws those contents, and that's the Open and Save Panel Service. This service does have access to your files, it's not sandboxed, and when you select a file in this panel and click OK, the application gets temporary access to that file, which it can then use to read or write it. One thing I noticed about this open panel is that the Open and Save Panel Service was loading its saved state from the same directory as the application itself. So by writing a new malicious serialized object into that saved state and then triggering the opening of such a panel, it was possible to execute code within the Open and Save Panel Service. And that service was not sandboxed, so at that point I had escaped the sandbox. I'm not completely sure why it was sharing that saved state, but it might have something to do with the following: if the user resizes that window and then shuts down their Mac, the system might need to restore the state of that window complete with the state of the panel, and that might need to be separated per application. Something like that; I'm not entirely sure. This was fixed earlier than the rest by Apple, in 11.3, by no longer sharing that saved state directory. The next step was to elevate privileges to root. For this I basically used the same technique already found by Ilias Morad, described in his write-up "Unauthd: Logic bugs FTW". I looked for an application with the entitlement com.apple.private.AuthorizationServices containing the value system.install.apple-software. What this entitlement means is that the application is allowed to install packages that are signed by Apple without any authorization by the user; the user doesn't even see that something is happening. This is used, for example, by the Install Command Line Developer Tools application, which can update certain parts of the system.
And this can then be combined with a specific package, the macOS Public Beta Access Utility package. When this package is installed to a disk, it runs a post-install script: after the installation is finished, it runs a script as root from the disk that you installed it to. But there's no check that you actually installed it to a macOS disk. You can install it to any disk, a disk image, a RAM disk, something like that, and it will still run the same path, the same command, on that disk. Because mounting a disk does not require any root privileges, you can mount a disk, put a file at that path, perform the installation of this package, and then the post-install step will invoke your script as root and therefore elevate privileges to root. And then finally, bypassing SIP, or specifically the filesystem restrictions part of SIP. Because I wanted to make sure that we had all of the possible attack surfaces mapped out, I looked at all of the applications I could find and what kind of entitlements they might have. So not just everything included in macOS; I also looked at the beta installation disk image. And there I found a very interesting application, the macOS Update Assistant application. It turns out this has the entitlement com.apple.rootless.install.heritable, and what this means is that it's basically allowed to write to any SIP-protected location on disk, or read from any SIP-protected location. As a bonus, it's also heritable, so any sub-process it starts will also have the same permission, which is very convenient because then you can just spawn a reverse shell instead of having to work in-process. And what can we do with a SIP bypass like this? As mentioned earlier, you can read the Mail database of a user, the Messages database, the Safari history, stuff like that. We can also grant our own application permission to use the webcam.
We can just add ourselves to the database, and then we can use the webcam without any permission from the user. We can also persist very well on the system, because we can write ourselves to a location that is SIP-protected. For example, we could remove the Malware Removal Tool that Apple uses to delete malware, maybe even replace it with our own malware. At that point, Apple could still delete it from the system, but any other AV vendor would not be able to, because it's SIP-protected. And finally, we can also load a kernel extension without user approval. Normally, loading a kernel extension creates a prompt like this, and then the user still needs to click a couple of times in the security preferences to confirm that they really want to load that kernel extension. But we can just pre-approve any kernel extension to be loaded. Now, that doesn't directly give kernel code execution, because you still need a validly signed kernel extension, and because Apple is deprecating kernel extensions, getting such a certificate is pretty much impossible right now. But because we can approve just any kernel extension, we could look at all of them, try to find one with a vulnerability we could abuse, and then we could also get kernel code execution if we wanted to. But with the SIP filesystem bypass we already have access to a lot of the files on the system, so we have already compromised a lot of the data there, and kernel code execution would be a nice bonus but would not give that much extra access. Now, here's a video to demonstrate this attack. This is on macOS 11.2.3, I think. First of all, it demonstrates that the sandboxed application is actually sandboxed, and then it does the three steps in order. The privilege escalation step is a little bit slower, because it needs to create that RAM disk and do the installation to it.
Since I'm not trying to be subtle, you can see the disk image appear on the desktop there. The next step should be a little bit faster. As you can see here, we now have a root shell. But not only that: we can also go to the SystemPolicyConfiguration directory, which is where the approved kernel extensions are stored in a database, a very sensitive location normally protected by SIP. And as we demonstrate here, we can write a new file into this directory. Thank you. Now, about the fixes. With the release of Monterey, that new method that I showed at the start was added, and applications can now indicate that they only accept securely serialized objects. Apple enabled this for all of their own applications, so the previous exploits were no longer possible. But third-party applications may use the ability to store their own objects in that saved state, so that method is needed so applications can opt out if they don't support secure coding yet. I'm not completely clear on whether it's still exploitable if an application doesn't store any objects of its own; I still need to look into that a bit more. This was reported to Apple on December 4th, 2020. They fixed the sandbox escape earlier than the rest, in 11.3 in April, and then they fixed it completely with the release of macOS Monterey in October 2021. Now, originally I thought they did not backport this fix to the two older macOS versions. Generally, Apple keeps supporting three versions: the current one, the previous one, and the one before that. In the release notes, it was only mentioned for macOS Monterey when it was released, so I thought they didn't backport it. But while I was working on these slides, two weeks ago or so, I noticed that Apple had updated the release notes of the Catalina security update that shipped at the same time as Monterey. They updated this back in May, so half a year later, and they now also list this vulnerability as being fixed.
And a week before I gave this talk, Apple Product Security emailed me spontaneously, saying: we hear you're going to do a talk at DEF CON; would you be willing to tell us what you're going to talk about? Maybe we can provide some feedback. So I just asked them: well, is it fixed in Catalina and Big Sur? Because it was not in the release notes for Big Sur, which is weird, because it was in those for Catalina, the older version. And then yesterday at about 8 a.m. they got back to me and told me: yes, it's supposed to be fixed in all of these; if you can still reproduce it, then please let us know. But that really wasn't enough time for me to investigate, so I still need to look a bit more into whether it was actually fixed in those older versions. So, to conclude: macOS now has security boundaries between processes, not only between users, and process injection vulnerabilities have become very important, because they can break those boundaries between processes and allow dangerous entitlements to be used by other processes. CVE-2021-30873 was a process injection vulnerability affecting all AppKit-based applications, therefore allowing all powerful entitlements to be abused by malware. We demonstrated how it could be applied to escape the sandbox, elevate privileges to root, and bypass the filesystem restrictions of SIP. It was fixed in October last year. Some takeaways here. macOS keeps adding more and more defensive layers, but adding new layers to an established system is quite difficult. Code written 10 years ago or more can suddenly become an attack surface that nobody thought about when it was written. And I think an interesting point here is that the effort for attackers may not really increase when you add more layers, if they can just use the same bug to bypass multiple layers in the same way.
So to make sure that security layers work, you really need to make sure they are different enough that you cannot use the same technique against all of them. I have some references here about work that I used, the write-up from Ilias Morad for example, and some other resources about serialized objects. We will publish a full write-up with a lot more of the technical details that I had to skip, within the next couple of days at most. If you want to know more, follow me on Twitter, or our research department. And I'll take any questions if you have them. Okay, so the question is: is there a remaining risk for enterprises and users? Well, I don't really know if older versions of macOS are vulnerable, as I said at the end, so updating to the latest version and keeping it updated would be a good recommendation. And if you're developing an application yourself, then being aware of this type of vulnerability is also important. But otherwise, users cannot really do that much about this; there aren't really any tools that can protect against this type of attack.