Aloha and welcome to Bundles of Joy, a talk about breaking macOS via subverted application bundles. My name is Patrick Wardle. I am the creator of the Mac security tool suite and security website Objective-See, the organizer of the Mac security conference Objective by the Sea, and the author of The Art of Mac Malware Analysis book. Today we're gonna be talking about an interesting flaw that affected all recent versions of macOS. We'll start by looking at the various anti-infection mechanisms that the flaw was able to sidestep. We'll then dive into the flaw itself, performing a root cause analysis. We'll then talk about the discovery of it being exploited as a zero-day in the wild to distribute malware. We'll then dive into protections and detection mechanisms that we deployed while awaiting Apple's patch. And finally, we'll wrap up by analyzing Apple's patch to see how they ultimately addressed, fixed, patched the flaw. So first, some background. The main way that Mac users become infected with malware is via user-assisted techniques. These are methods that require explicit user interaction. I'm sure many of us here are familiar with these, but a brief review. We've probably all heard about malicious websites that display pop-ups, for example, claiming your Flash Player is out of date. If you download and run what you believe is perhaps a required Flash update, your system may become infected with malware. Adversaries also do things like poison search results, or infect popular websites that users may browse to, in order to distribute malware. Finally, hackers are very fond of pirating applications, injecting malicious code, and trojanizing these applications, so that when users download and run them, they will be infected with malware. The main takeaway that ties all these approaches together is that explicit user interaction is required. In some sense, the user is ultimately infecting themselves.
Now, as Macs become ever more popular, ever more prevalent, so do these attacks, right? There's just more Mac malware and adware than ever. And Apple rightfully realized that, as we just mentioned, the majority of ways that Macs were getting infected was via user-assisted, user-interaction-based malware attacks and infection vectors. They decided: we need to do something to protect users from themselves. I'm often a critic of Apple, but in this case, yeah, I think that was definitely the right approach. So we're basically gonna look at three technologies, three anti-infection mechanisms: File Quarantine, Gatekeeper, and the notarization requirements. As I noted, these are all aimed at protecting users from infecting themselves. The goal is, if the user is tricked or coerced into running something, the operating system will first intercept that application launch, that process execution, and examine it to make sure that it's not malware. Now, we first need to talk about the quarantine attribute. The quarantine attribute is something that is added to most, essentially all, downloaded items, either by the application that's downloading it, for example the browser, or by the operating system. It is an extended attribute that basically tells the operating system: hey, this item is from the internet. When the user then goes to launch the item, for example the application they just downloaded, the operating system will check if the quarantine attribute has been added. If so, it will then perform a variety of checks (Gatekeeper, notarization, and File Quarantine checks) on that item. So the quarantine attribute is the catalyst for those checks. You can examine whether a file has a quarantine attribute via the xattr command. As you can see on the slide, I downloaded some malware from the internet, and as expected, the browser and the operating system slapped that quarantine attribute on the item.
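To make the quarantine attribute more concrete, here's a minimal sketch of parsing its value. The com.apple.quarantine value is a semicolon-separated string (flags, a hex-encoded download timestamp, the downloading agent, and a UUID); the sample value below is illustrative, not taken from a real download, and real values may have more or fewer fields.

```python
from datetime import datetime, timezone

def parse_quarantine(value: str) -> dict:
    """Split a com.apple.quarantine attribute value into its fields."""
    flags, timestamp, agent, *rest = value.split(";")
    return {
        "flags": int(flags, 16),        # hex-encoded bit flags
        "downloaded": datetime.fromtimestamp(int(timestamp, 16), tz=timezone.utc),
        "agent": agent,                 # the application that downloaded the item
        "uuid": rest[0] if rest else None,
    }

# Hypothetical attribute value, as one might see via `xattr -p` on macOS.
info = parse_quarantine("0083;60d6a7e5;Safari;0FBE0BFA-6E02-4F93-9D6A-7E62F0A593D7")
print(info["agent"])   # Safari
```

On a real system you'd read the attribute with `xattr -p com.apple.quarantine <path>` and feed the output to a parser like this.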
So if we were to launch this application, the operating system would perform all its anti-infection mechanisms, its checks. The first anti-infection mechanism that Apple introduced was all the way back in 2007. This technology is named File Quarantine, and in a nutshell, it displays a prompt to the user saying two things: first, hey, the item you're about to launch is from the internet, and second, it is an executable application. This is important because a large amount of malware attempts to masquerade as benign file types. On the slide we have an example. This is WindTail, which was distributed as a malicious application but used its icon to masquerade as a benign PowerPoint document. The idea, from the malware author's point of view, is that users might be tricked into launching it because they thought it was a harmless PowerPoint file. However, File Quarantine would jump in the way and say: hey, wait a minute, user, just to make you aware, this is actually an application, and if you run it, you might infect yourself. So, a good warning. But the problem, as is the case with most warnings, is that users would simply click Open, thus still infecting themselves. So Apple had to take the next step, and that next step was Gatekeeper, which was introduced in 2012. In a nutshell, Gatekeeper blocks unsigned applications from running. When the user launches an item they've downloaded from the internet, the operating system will intercept that launch, and Gatekeeper will check to see if the item is validly signed. If it's not, it will block it. That was a good approach, because at the time the majority of Mac malware was unsigned. Of course, the shortcoming was that malware authors simply began signing their malware.
It's pretty easy to fraudulently obtain or steal a legitimate Developer ID code signing certificate, which would allow you to sign your malware and thus bypass or sidestep Gatekeeper. So Apple had to respond yet again, which they did in 2019 with the introduction of notarization. Notarization blocks any application that has not been explicitly verified, scanned, and approved by Apple proper. On the slide, we see a conceptual overview. Imagine you're a developer creating an application. You compile it, and you now have to submit that application to Apple, where they will scan and verify it. If they don't detect any malicious code, they will give it their stamp of approval; they will notarize it. At runtime then, when you distribute this now-notarized application, it will be allowed to run. The idea is that malware authors will either, A, not submit their applications to Apple for verification, or, B, if they do, Apple will detect that they contain malicious code and thus not notarize them. This means that even if the malware authors successfully trick users into attempting to run their malicious code, the operating system will be like: oh, wait a minute, this is not notarized, I will block it. And in reality, this works very well. As you can see on the slide, some hackers slid into my DMs, bemoaning the fact that notarization had essentially ruined their entire operation. Let's now talk about an interesting flaw, though, that was able to very neatly sidestep, fully bypass, all of these anti-infection mechanisms. And as it was a logic flaw, it did so in a 100% reliable manner. First I wanna give credit to Cedric Owens, who uncovered this vulnerability. He wasn't 100% sure on the root cause, so he pinged me describing what he had found.
We have a nice proof of concept on the slide that demonstrates the power of the vulnerability and aligns with what Cedric observed: we can download a malicious application from the internet that is not signed, not notarized, that masquerades as, for example, a resume, a PDF document, or really anything else. And when launched, neither File Quarantine, nor Gatekeeper, nor the operating system's notarization checks appear to come into play. As Cedric notes, there are no prompts from the operating system at all. This is a very powerful vulnerability, because what it means is that, in theory, malware authors could go back to their old tricks of infecting users across the globe and not have to worry about any of macOS's recent anti-infection mechanisms. So let's take a look at what's going on. We have this proof of concept, and the first thing I wanted to check was: was the quarantine attribute being correctly set? Because as we mentioned before, the quarantine attribute is the indicator that tells the operating system to perform its various anti-infection checks: File Quarantine, notarization, and Gatekeeper. Well, as we can see, the proof of concept is not signed, which also means it's not notarized, but it does indeed have the quarantine attribute set. We can confirm that via the xattr command. So this immediately shows us it's not an issue with the quarantine attribute being mis-set. But this is almost more intriguing, right? You have an unsigned application that can bypass File Quarantine, Gatekeeper, and the notarization requirements. How? That's insane. So, closer look. What's going on? Well, if we look at the contents of the application, we notice two very interesting things. And I'll point these out because you might not be super familiar with application bundles. The first thing is, if we look at the application (.app), which is really a special directory structure, we see that it only contains three things.
A Contents directory, a MacOS subdirectory in that Contents directory, and then a file named PoC in that MacOS subdirectory. Now, if you're familiar with normal application bundles, you'll be like: wait a minute, where is the Info.plist file? The Info.plist file is a metadata file that describes information about the application, and it is always present in normal applications. I thought it was required, but apparently not. The other interesting thing about this proof-of-concept application is that its main executable component, named PoC, was not a Mach-O executable, which is the standard executable file format on macOS, but rather a POSIX shell script, a bash script. Now, rather interestingly, there is a popular developer script on GitHub that will package up applications in exactly this manner. The idea is, if you have a script that you want to distribute to Mac users, packaging it up as an application makes it way easier both to distribute and for users to run. They can just double-click on it and the operating system will take care of it. The sad, or laughable, thing about all of this is that this appify developer script would actually package up applications in this manner, which inadvertently would trigger this same flaw. So, you're looking for bugs in macOS? Sometimes all you have to do is use open-source developer packaging tools. Insane. Okay, so we have this bare-bones, script-based application: no Info.plist file, and its main executable component is a script. And we'll see these are both prerequisites for triggering the flaw. As we also saw with that proof of concept, when we download and run it, there are no prompts, as there should be, because the quarantine attribute has been set and this malicious proof-of-concept application is unsigned, from the internet, and non-notarized. So there's some flaw in the operating system. My interest was piqued. I wanted to figure out exactly what was going on. Where was the flaw?
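The bare-bones layout described above can be recreated in a few lines. This is a sketch, not the appify script itself; the bundle and script names (PoC) are illustrative. Note that no Info.plist is created anywhere, which is one of the two prerequisites for the flaw.

```python
import os
import stat
import tempfile

def make_script_app(parent: str, name: str, script: str) -> str:
    """Create <name>.app/Contents/MacOS/<name> containing `script`."""
    macos_dir = os.path.join(parent, name + ".app", "Contents", "MacOS")
    os.makedirs(macos_dir)
    exe = os.path.join(macos_dir, name)   # executable name matches the bundle name
    with open(exe, "w") as f:
        f.write(script)
    # mark the script executable, as Finder/launch services would require
    os.chmod(exe, os.stat(exe).st_mode | stat.S_IXUSR)
    return exe

with tempfile.TemporaryDirectory() as tmp:
    exe = make_script_app(tmp, "PoC", "#!/bin/bash\nopen -a Calculator\n")
    # Note: no Contents/Info.plist anywhere in the bundle.
    print(os.path.exists(exe))   # True
```

Double-clicking a bundle built this way on a vulnerable macOS version is what triggered the misclassification discussed below in the talk.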
The problem, though, was that when you launch an application, when you double-click an application, there are no fewer than half a dozen applications, system daemons, and kernel components that all get involved with parsing, launching, and classifying the application. It's incredible. I gave a talk about this at ShmooCon a while back, about another Gatekeeper flaw, and you can see there is a myriad of apps and daemons and frameworks all working together. Again, this is problematic because the flaw is somewhere in here, but there is a lot going on. Where do we even begin? My idea was to start by looking at log messages, to see if there was some interesting log message that could point me at least toward the right application, daemon, framework, or kernel code where this vulnerability might lie. What I decided to do was launch three applications and basically diff their log messages to hopefully point me in the right direction. The three apps were all from the internet, all unsigned. The first one was a standard application, meaning its executable was a Mach-O executable, and it also had the common Info.plist file in its application bundle. The second application was a script-based application, so its executable component was a bash script, but it still had that Info.plist file. And then finally, we had our proof of concept, which is script-based but is also missing the Info.plist file. Now, before we can look at the log messages, we have to enable private logging. Recent versions of macOS suppress a lot of information from the logs, which is not helpful when we are digging into the internals of the operating system in an attempt to find a flaw. Long story short, we can install a profile which turns on private logging; I've posted the link in the slides if you're interested. Once this is installed, all data will be logged, which is great. So now let's run the three apps and basically diff their log output.
Starting with the standard application, the Mach-O-based application that contains the Info.plist file, two things pop out. First and foremost, we can quickly identify that the syspolicyd binary, the system policy daemon, is the component of the operating system that is ultimately responsible for evaluating and classifying applications and binaries from the internet, ultimately saying: should they be allowed or not? It is the arbiter. So we can assume, and as we'll see, correctly assume, that this binary is where the logic flaw resided. There are a lot of interesting log messages here. I've highlighted what I think is the most indicative, and that is the results of the GK, or Gatekeeper, check. We can see there are a variety of numbers and the path of the item. But interestingly, at the bottom, the GK evaluation's allowed flag is zero (false) and show prompt is one, followed by a log message saying the prompt was shown. This is what we expected, as this is an unsigned application from the internet. So the log messages correspond to what we see: a prompt being shown to the user saying this application is not allowed. Next we execute the second application. This is the script-based application, still with the Info.plist file. We see almost exactly the same log messages. However, there is the addition of a script evaluation log message, which indicates there is another code path to handle applications that contain a script as their executable component. And we'll see this is important as well. Finally, we execute our proof of concept, the bare-bones script-based application without the Info.plist file. We see it goes down the same script-based evaluation code path, and the scan results are also printed. Interestingly, though, there are no messages about the app being blocked, nor a prompt being shown, which matches what we saw when we launched the proof of concept: it was not blocked, there were no alerts, no prompts.
So now let's diff these three sets of log messages and point out the very subtle but very indicative differences. The only differences are actually in the scan results, specifically in the GKEvaluateScanResult message. For the applications that contain the Info.plist file, we can see an evaluation result of zero, whereas for our bare-bones script-based application that did not have the Info.plist file, we can see that the scan result was a two. Also, as we can see in the log message, the system identified it as not a bundle. So to summarize: an evaluation type of zero will result in a prompt and the application being blocked, whereas an evaluation type of two will be allowed with no prompts. Interesting. So now let's look into both how and why this type two is returned, and ultimately what it means. I just mentioned it means that the application will be allowed; at least, we saw that through our experiments. So now let's look at the code to confirm that this is really the case. We're gonna reverse engineer syspolicyd. This is the daemon that's responsible for making the decisions about whether an application should be allowed or blocked. If we reverse engineer the evaluateScanResult method, we can see that it explicitly checks the evaluation type. And if the evaluation type is set to two, it does two things. First, it invokes a setAllow method to set a flag saying: yeah, this item is allowed. And then it returns, skipping all the logic that would present the prompt to the user and block the application. Now, we can see this in the static analysis of the binary, in the disassembly, but we can also confirm it in a debugger. So I was debugging the syspolicyd daemon, set some breakpoints, and we can see that after this code has executed on our proof-of-concept application, which is allowed, we can print out the value of the allowed instance variable and see it set to true.
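The early-return behavior just described can be modeled in a few lines. This is a Python approximation of the reverse-engineered Objective-C logic, for illustration only; the names (eval_type, allowed, would_prompt) mirror the concepts from the disassembly, not actual syspolicyd identifiers.

```python
class Evaluation:
    """Model of syspolicyd's scan-result handling, as described above."""
    def __init__(self, eval_type: int):
        self.eval_type = eval_type
        self.allowed = False
        self.would_prompt = False

    def evaluate_scan_result(self):
        if self.eval_type == 2:       # the "not a bundle" path
            self.allowed = True       # setAllow(...)
            return                    # early return: prompt/block logic is skipped
        # normal path: an unsigned, un-notarized item gets blocked with a prompt
        self.allowed = False
        self.would_prompt = True

poc = Evaluation(eval_type=2)    # bare-bones script app, no Info.plist
poc.evaluate_scan_result()
print(poc.allowed, poc.would_prompt)   # True False
```

The key point is the early return: once the type-two branch is taken, none of the code that decides on prompting or blocking ever executes.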
We can also print out the value of the would-prompt flag and see that it's no, which means, as we saw, our application is allowed with no prompts. So we've confirmed that what we saw experimentally is realized in code. But I still wanted to know: why was this evaluation type of two assigned to our proof of concept? Clearly incorrectly, so where does it come from? What returns it? Well, if we look back in the code, still in syspolicyd, we see a method named determineGatekeeperEvaluationTypeForTarget. There are various methods that are called on the application bundle that's about to be launched, for example our proof of concept. First is a method, isUserApproved, and since we're not yet approved, code execution enters this if statement, so we continue within it. There's then another method that's called, isScript. Since our proof of concept is a script-based application, this method returns true, meaning we then go into the next code block within that if statement. We then see two things happen. First, we see the R15 register set to the evaluation type of two. Okay, cool, this is what we're looking for. And then we see a call to a third method, isBundled. If that returns false, it exits. Now, as you can see in the debugger output, this method returns NO, or false, for our proof-of-concept application, which means we're going to jump to that leave label. If we look at what that leave label does, it simply moves the R15 register into the RAX register and returns. So now we understand where that evaluation type two is being set, and it looks like it's being returned because our application is not being classified as a bundle. Which is strange, but let's look into that a little deeper. First we take a peek at this isBundled method. All it does is return the isBundled instance variable flag. So that's not really that helpful.
But what we can do is look back in the code to figure out where this instance variable, this flag, is set. We find that within a method named evaluateCodeForUser. Specifically, it calls an unnamed subroutine, passing in the path of the application that's about to be launched, for example our proof-of-concept application. The return value from that unnamed subroutine is then passed to the setIsBundled method, which sets or updates the isBundled instance variable flag. So obviously we're interested in that unnamed subroutine, because it is the one that is ultimately classifying the item as a bundle or not, which will then determine whether the evaluation type is set to two. It turns out this unnamed subroutine is fairly straightforward. As I mentioned, it's attempting to determine whether something is a bundle or not. And as we can see on the slide, the way it does this is by looking for an Info.plist file. Now, as I've said about 20 times already, our proof-of-concept application is missing, does not have, this Info.plist file. The application is still allowed to run, even though it doesn't have this file. However, the classification logic here treats the presence of that file as the sole indicator of a bundle. So if an item does not have an Info.plist file, it is not classified as a bundle, which, as we saw, is problematic. We can confirm this in a debugger by stepping over this code and then looking at the value of the isBundled instance variable, the flag. We can see that, in fact, it is set to NO, for false. So the system has basically said: you don't have an Info.plist file, you are not a bundle. And this is, as I mentioned, problematic, because if you don't have an Info.plist file and you have an executable that is a script, you will be classified as not a bundle. Your evaluation type will be set to two, which will then, as we saw, skip all the logic that deals with prompting and blocking the application.
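Putting the pieces together, the flawed classification can be sketched as follows. This is an illustrative Python model of the reverse-engineered logic, not actual syspolicyd code; the function names echo the Objective-C methods discussed above.

```python
import os
import tempfile

def is_bundled(app_path: str) -> bool:
    # Pre-patch check: a bundle is recognized solely by the presence
    # of Contents/Info.plist.
    return os.path.exists(os.path.join(app_path, "Contents", "Info.plist"))

def determine_evaluation_type(app_path: str, is_script: bool,
                              user_approved: bool = False) -> int:
    """Model of determineGatekeeperEvaluationTypeForTarget."""
    if not user_approved and is_script and not is_bundled(app_path):
        return 2    # misclassified as "not a bundle": allowed, no prompt
    return 0        # normal path: full File Quarantine/Gatekeeper/notarization checks

with tempfile.TemporaryDirectory() as tmp:
    poc = os.path.join(tmp, "PoC.app")
    os.makedirs(os.path.join(poc, "Contents", "MacOS"))
    print(determine_evaluation_type(poc, is_script=True))   # 2: flaw triggered
    # Adding an Info.plist flips the classification back to normal.
    open(os.path.join(poc, "Contents", "Info.plist"), "w").close()
    print(determine_evaluation_type(poc, is_script=True))   # 0: normal checks
</n>```

The model makes the two prerequisites explicit: the executable must be a script, and the bundle must lack an Info.plist. Remove either condition and the item flows down the normal evaluation path.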
So we have just neatly sidestepped Gatekeeper, the notarization requirements, and File Quarantine. Now, this was a pretty brief overview of reverse engineering syspolicyd. If you're interested in more of the details of that reverse engineering effort, check out the detailed blog post that I've linked on this slide. All right, so now we know the cause of the vulnerability. To reiterate: a script-based application with no Info.plist file will get misclassified as not being a bundle and will be allowed to run, bypassing all of Apple's anti-infection mechanisms. Sweet. So next up, I was thinking: hey, is it possible that attackers have independently found this same vulnerability and are actively exploiting it in the wild? The search was pretty simple. Basically, we looked for an application that does not have an Info.plist file and whose executable component is a script. I pinged my former colleagues at Jamf and asked them to poke around and see if they could uncover any applications that matched this search criteria. And they came back and said: hey, we have an application that seems to match what you requested. So they kindly sent me the candidate, an application named 1302.app. As we can see on the slide, if we look at its application bundle contents, it is indeed missing an Info.plist file. Moreover, its executable component is a script, and it is also unsigned and un-notarized. So this seemed to be a very promising candidate. I popped it into a virtual machine and executed it, and even though it had the quarantine attribute set, and Gatekeeper, notarization, and everything else was enabled, as they are by default on macOS, the application was allowed to run without any prompts.
And as we can see in the output from the process monitor, not only was it allowed to execute, it was able to reach out and download and install its second-stage payload, which installed a bunch of malware and adware on the infected machine. Yikes. I pinged Jamf, and we were able to uncover the initial infection vector. It turns out attackers had targeted popular Google search queries, poisoning the results and also infecting sites that would show up in those results, to serve up malware. So for example, if you Googled Alexa and Disney and clicked on the second link, it would take you to a site that would serve up an application that exploited this vulnerability. Jamf published a lot more information on this, so if you're interested, check out their post. But again, the takeaway here is that this application exploited the same flaw, so if the user clicked and launched it, none of Apple's anti-infection mechanisms would even come into play. That sucks. So while awaiting a patch from Apple, I thought it'd be interesting to dig into methods of protecting Mac users. My idea was pretty simple. The observation is that none of these applications exploiting this vulnerability are gonna be notarized. So why don't I simply detect and block the execution of any downloaded code that has not been notarized, again, while waiting for an official patch from Apple? I figured I could do this in basically four steps. First, detect whenever a new process is launched. Second, once I detect this process launch, classify it as coming from the internet and being launched by the user. This was important because I wanted local items to be able to run, and if something that was already installed was downloading updates, I didn't wanna get in the way. So I basically said: I only wanna focus on applications from the internet that the user has launched.
And then, because macOS has this flaw and we can't rely on its anti-infection checks and its notarization logic, can we explicitly check if that item is notarized, meaning it's been scanned and approved by Apple, which this malware obviously won't be? And if it's not notarized, simply block it. It turns out this was actually pretty easy to do. First, we can leverage Apple's Endpoint Security framework (ESF). This is a really powerful user-mode framework that allows us to register for operating system events such as process launches. Here's a snippet of the code on the slide. We can see we're registering a new endpoint security client and telling it we're interested in the auth exec event. The auth exec event tells the operating system: hey, please invoke my callback any time a process is about to be launched, and I will tell you if it's authorized or not. So it allows you to be the arbiter. I've blogged more about the Endpoint Security framework; I've posted a link on the slide if you are interested. Okay, so now we have a callback that's gonna be invoked by the operating system every time a new process is launched. The first thing we want to do is check if this is an item, for example an application, that the user launched from the internet. There's a variety of ways to do this, but the easiest way is simply to check its app translocation status. App translocation is another security mechanism built into macOS that was in direct response to research I published and presented back in 2015, which involved dylib hijack attacks. The idea is, when the user downloads something from the internet and launches it, Apple takes the application bundle, copies it to a randomized read-only mount point, and executes it from there, so no external libraries can be injected into or hijacked within it. It's a pretty good security mechanism. What we can do, though, is when an application is launched, query and see: hey, was it translocated?
And if the answer is yes, we know, A, it's from the internet, and B, it was launched by the user. Cool, which is exactly what we wanna know. Unfortunately, there are no public APIs to do this, but there is a very powerful private API, SecTranslocateIsTranslocatedURL. You just invoke it with a path, and it will tell you whether that item is translocated or not. So we can leverage that; perfect for our needs. And then finally, we need to see if this user-launched application from the internet is notarized or not. Apple provides a public API to do this, the SecStaticCodeCheckValidity API. You can invoke this API with a requirement. So what we do is initialize a notarization requirement and then make this API call, and it will set a flag indicating whether the item we're examining is notarized or not. If we put this all together, which I did within an application I wrote called BlockBlock (it's fully open-sourced, available on GitHub), we can now generically prevent the execution of such applications, even ones exploiting this vulnerability as a zero-day. In the screenshot on the slide, we double-clicked the malicious 1302 application that was exploiting the vulnerability as a zero-day. The system intercepts the application launch, because we've registered with the Endpoint Security framework, and invokes our callback. We see this item has been app-translocated, because the user is launching it and it's from the internet. We check and see that it's not notarized. We then alert the user, saying: hey, just to let you know, blah, blah, blah, and essentially block the execution of the exploit. Hooray, this is great. So we have a pretty good way to protect against this, again, while awaiting a patch from Cupertino. But I also wanted to figure out: was there an easy way to examine a system to ascertain, to determine, whether it had been infected or not? You know, answer the question: was I exploited?
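The blocking logic just described boils down to a small decision function. Here's a minimal sketch with the two macOS-specific checks stubbed out as callables; in the real tool those would be backed by SecTranslocateIsTranslocatedURL (the private API) and SecStaticCodeCheckValidity with a notarization requirement, as described above.

```python
def should_block(path: str, is_translocated, is_notarized) -> bool:
    """Decide whether a just-launched item should be blocked.

    `is_translocated` and `is_notarized` are callables standing in for
    the Security framework checks."""
    if not is_translocated(path):
        return False    # local item, or an installed app updating itself: allow
    if is_notarized(path):
        return False    # scanned and approved by Apple: allow
    return True         # internet-borne, user-launched, un-notarized: block

# Hypothetical example: the malicious 1302.app is translocated
# (user-launched, from the internet) but not notarized.
print(should_block("1302.app",
                   is_translocated=lambda p: True,
                   is_notarized=lambda p: False))   # True
```

The nice property of this check is that it's independent of the buggy bundle classification: it never asks "is this a bundle?", only "is this translocated and notarized?", so the zero-day can't sidestep it.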
So I kept analyzing syspolicyd and looking at the log messages, and there was an interesting one that we can see on the slide. It basically says "updating flags," followed by the path of the item that was analyzed and classified, and then a number. So syspolicyd, which, as we mentioned, is the arbiter, the one making the decisions about whether an application should be allowed or blocked, apparently saves, or logs out, the results of this evaluation. I then ran a file monitor (fs_usage) while executing the proof of concept and various other applications. And I could see that once syspolicyd had classified the application, either as "should be blocked," or inadvertently "should be allowed," or legitimately "should be allowed" if it was an actually notarized application, it would update an undocumented database called ExecPolicy. On the slide, there's a screenshot of the database. We can see there is a volume UUID, something that says object ID, an FS name, and then also things like the flags; that's basically the result of the evaluation from syspolicyd. So hooray, we have this undocumented database where syspolicyd is writing out the results of its evaluations for everything the user launches. This seems like a good path to go down. Unfortunately, looking at the values in this table, there's nothing that immediately pops out that points to the path of the item that was run, which is ultimately what we want, right? We want to know: hey, was I exploited, and where is the malicious application that triggered the evaluation? Well, it turns out that object ID value in this undocumented ExecPolicy database is actually an inode, a file inode. And we can confirm that. We can see on the slide, we take the value, the one that starts with 23D, and if we execute the stat command on the proof-of-concept application that I downloaded, it matches.
We can then also take that inode ID, query the file system, and see that it does appear to be the same application. So what I then did was figure out a way to parse all the rows in that database, in that specific table, and then, for each of those, take the volume ID and the file inode and, from that, get a full path to the item. And yes, you can do that via the stat command, but that's really rather slow. Well, it turns out there is a Foundation API you can invoke, getResourceValue:forKey:, which, given a path that starts with /.vol/ followed by the volume ID and the file inode, will return the canonical path of the item. So it's a mapping of file inode to path, which is exactly what we want. I implemented this in a basic Python script; the link to the script is on the slide. It's pretty simple. It parses this undocumented ExecPolicy database, and for each item in that table, it first resolves the path from the file inode, and then it checks whether the item is an application that is missing an Info.plist file and whose executable is a script. In other words, it's basically just looking for these bare-bones script-based applications: no Info.plist file, executable is a script. This is important because in this database there are a lot of other legitimate items, standalone scripts, legitimate applications that have been run, so it's important to filter out those results. I then ran this on a system where I had run the malicious application, and we can see that the Python script was able to identify and pull it up. So that's kind of cool. Apple did finally release a patch, so let's end by looking at their patch, reverse engineering it to figure out how they ultimately fixed this flaw. The patch was released in macOS 11.3, and the flaw was assigned CVE-2021-30657.
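The core of that detection script can be sketched against a mock database. The real ExecPolicy database is undocumented, so the table and column names below are simplified stand-ins, and the inode-to-path resolution (done via /.vol/ on macOS) is stubbed with a dictionary; this is illustrative, not the actual published script.

```python
import os
import sqlite3
import tempfile

def find_bare_script_apps(db_path: str, inode_to_path: dict) -> list:
    """Return paths of evaluated items that look like bare-bones script
    apps: a .app bundle with no Contents/Info.plist."""
    hits = []
    con = sqlite3.connect(db_path)
    for (object_id,) in con.execute("SELECT object_id FROM scan_results"):
        path = inode_to_path.get(object_id)   # stand-in for the /.vol/ lookup
        if not path or not path.endswith(".app"):
            continue
        if not os.path.exists(os.path.join(path, "Contents", "Info.plist")):
            hits.append(path)                 # candidate exploit artifact
    con.close()
    return hits

with tempfile.TemporaryDirectory() as tmp:
    # mock a previously run bare-bones app and a legitimate app
    bad = os.path.join(tmp, "1302.app")
    os.makedirs(os.path.join(bad, "Contents", "MacOS"))
    good = os.path.join(tmp, "Good.app")
    os.makedirs(os.path.join(good, "Contents"))
    open(os.path.join(good, "Contents", "Info.plist"), "w").close()

    # mock the evaluation table (object_id would be the file inode)
    db = os.path.join(tmp, "mock.db")
    con = sqlite3.connect(db)
    con.execute("CREATE TABLE scan_results (object_id INTEGER)")
    con.executemany("INSERT INTO scan_results VALUES (?)", [(1,), (2,)])
    con.commit()
    con.close()

    print(find_bare_script_apps(db, {1: bad, 2: good}))   # only the 1302.app path
```

The filtering step is what makes the output useful: without it, every standalone script and legitimate app the user ever ran would show up too.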
Again, credit to Cedric Owens for ultimately uncovering and reporting this lovely vulnerability to Apple. Now, when you look for what was changed in a patch, if you know the specific details of the vulnerability, it's a lot simpler. We don't have to diff the entire update; we can just diff the updated syspolicyd. Moreover, since we identified the root cause of the vulnerability, we can assume that's where the patch details ultimately lie, so we can start there and confirm whether our assumptions are correct. So recall the crux of the flaw was the misclassification of an application bundle, one that was missing an Info.plist, and this was realized in an unnamed subroutine in the syspolicyd daemon. So what I did was diff that subroutine from an unpatched system and a patched system. As we can see on the slide, the unpatched code had 26 unique control-flow blocks, whereas almost 10 more had been added in the patched version. This is a really good sign that the majority of the patch is within this subroutine, so we'll start our reverse engineering there. If we analyze the updated syspolicyd, specifically this unnamed subroutine containing the "is bundle" algorithm, we can see that the classification logic has been greatly improved and expanded. Specifically, two new comprehensive checks were added. The first checks if the item's path extension is .app. This is important because if an item is not named .app, when the user double-clicks it, it's likely not to be launched by Finder. So this is almost a prerequisite for getting an application launched, and it makes sense to check for it, right? If we look at the disassembly and also the pseudocode, we can see it's basically just getting the path extension and checking whether that extension is "app", for application. If it is, it classifies the item as a bundle. Check two: it also checks if the item contains Contents/MacOS.
So even if the item doesn't have that .app extension, the code looks for this directory structure. Again, this is a very important check, because while the Info.plist file is apparently optional, this directory structure is what defines an application bundle. We can see the disassembly and then the decompilation at the bottom: it's essentially building this path and checking whether the bundle contains it, and if the item does, it now says, yes, you are a bundle. So in summary, the patch added two checks. First, it checks if the item has an application (.app) file extension. Second, it checks if the item contains Contents/MacOS. If either of these conditions is true, the item is classified as a bundle. With this new, improved "is bundle" algorithm, if we run the proof-of-concept application, we can see that macOS now correctly classifies it as a bundle, which means its evaluation type will not be 2, it will be 0, which then triggers the rest of the notarization, Gatekeeper, and file quarantine checks, which unsurprisingly now block the application because it's from the internet, unsigned, and non-notarized. Let's briefly wrap this up with some conclusions. First, a key takeaway I really want to reiterate: hopefully this illustrates that macOS still has a ton of shallow bugs. We talked about the fact that there was a very popular developer packaging script on GitHub that would inadvertently trigger the flaw. So you didn't have to do some crazy fuzzing or reverse engineering; if you just packaged up your application, your script, with this tool, it would trigger the flaw and bypass all of Apple's anti-infection mechanisms. We see this time and time again, and to me it really illustrates that large components of macOS have never been audited, and there's a lot of very low-hanging fruit that can still be found.
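Before moving on, the patched classification logic described above can be sketched in a few lines of Python. This is just an illustration of the two checks, not Apple's actual code (their subroutine is unnamed, so the helper name here is my own):

```python
import os

def looks_like_bundle(path):
    """Sketch of the patched classification: treat an item as a bundle if
    it has a .app extension OR it contains a Contents/MacOS directory."""
    # Check one: does the item's path extension indicate an application?
    if os.path.splitext(path)[1].lower() == ".app":
        return True
    # Check two: does the item contain the Contents/MacOS directory that
    # defines an application bundle's structure?
    return os.path.isdir(os.path.join(path, "Contents", "MacOS"))
```

Under this logic the proof-of-concept application, which has a .app extension and a Contents/MacOS directory but no Info.plist, is correctly classified as a bundle, so the notarization, Gatekeeper, and file quarantine checks all fire.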
And these vulnerabilities, while shallow, are still very impactful, right? Being able to bypass all of Apple's anti-infection mechanisms, that's huge. And again, as a logic flaw, it's 100% reliable. In this talk we also walked through the root cause analysis of the vulnerability, looking at how macOS performs application classification and how it did so incorrectly. We showed that unfortunately attackers were abusing this flaw as a zero-day in the wild, but luckily we were able to provide some protection and detection strategies while awaiting Apple's patch. And from reverse engineering their patch, it does seem that they comprehensively addressed this flaw. I also hope this talk inspired you or gave you some ideas, tools, and techniques to go out and do your own spelunking around the operating system, your own reverse engineering, malware analysis, or even security tool development. If you're interested in these topics, there are some more resources I briefly wanted to share. I mentioned I'm the author of The Art of Mac Malware, a book on analyzing Mac malware. It's free online, so if you want to learn more about Mac malware and how to become a proficient Mac malware analyst, check it out. Again, free and open source. I also organize a Mac security conference that's coming up at the end of September, with a lot of really amazing speakers talking about Mac and iOS security topics, so if you're interested, check that out. Finally, I want to thank, first and foremost, you for attending my talk, either virtually or in person. I also want to thank the organizers of DEF CON for putting together this conference, especially in these trying times. And finally, I want to thank the companies who support my research and my tools, allowing me to release open-source tools and share my research with the world. So again, thank you so much for attending my talk. Stay safe, and see you next time.