research and an entrepreneur who does a lot of things with security. He learned how to use MS-DOS at the age of seven, which was also when he started to master his father's computer. As you will see in his lecture, he likes to take things apart, and he will present how intermediate code transformations can be used to obfuscate code, with all the advantages and limitations this has. So please give a warm applause for his lecture, "An LLVM Obfuscator", by Klondike. Wow, I'm actually surprised that there are people here to see this, because this is research from like five years ago, so I was not even expecting that it could be accepted. Well, before I start, everybody remember to drink water, it's really hot outside. I know, I'm Spanish, believe me, I feel like at home here. Okay. My name is Klondike, and this was my master's thesis five years ago, and I'm going to try to explain to you how you can use LLVM in order to obfuscate code. When I started working on this project, LLVM was not as big a thing as it is now, because now you can actually compile JavaScript with LLVM, and this is actually a really useful way to, for example, prevent anybody from looking at your JavaScript code. But my main objective with this talk was actually making it easier to prevent attacks when you ship a patch for a specific application. So, here is the typical mandatory slide I usually have to put so I can actually get paid by my company — which I legally own — for giving this talk here and for my ticket. So, yeah, I work at the company, blah, blah, blah, let's just skip that because I don't think anybody's interested in that. I will introduce you a little bit to how LLVM works and what obfuscation is, so that people with almost no prior knowledge can follow the talk. I'm going to talk about different obfuscation techniques.
Then I will talk about how you can use reproducible obfuscation in order to try to delay attacks against patches that you have already released. And then I will try to do a full demo that, well, worked on my computer, but now not even the mouse is working on my computer, so I hope it works. LLVM is basically a framework for compilers. A compiler, as you probably know, is a tool that takes code in one programming language and returns code in another programming language, or in assembly, or even in machine code. It's really cool because it provides a lot of tools to work with an intermediate representation of the code you are translating. So, like most modern compilers, LLVM basically takes code in one of the several languages you can use, then converts it to its own internal language, and finally converts that internal language to the real code that you are going to use afterwards. And the best thing is that this is really, really useful, because if you write one obfuscation transformation — some code that will obfuscate the resulting program by manipulating the intermediate representation — then all of a sudden you can use it with all of the frontends that target LLVM and all of the outputs that LLVM can produce. Clang in particular is the frontend for the C-family languages. There is another one called DragonEgg that is based on GCC. These basically generate LLVM intermediate representation code out of C, C++, and other languages like that. About obfuscation, well, CNC-1 is basically the main prior work that inspired mine, but there is, of course, a lot of earlier research, mostly from malware developers. I guess that if any of you does reverse engineering for work, you probably know that.
And the main objective of obfuscation is basically making it unclear what the programmer is trying to do with his code, so that when you try to figure out what it's doing, it will take you longer, and therefore it will be harder for you to do something useful with the code that you get. Here are some obfuscation techniques that are used. Control flow flattening basically tries to make sure that all the control flow transfers — a control flow transfer being the place you jump to after you are done executing some instructions — end up in the same place, and that specific place will then choose where execution continues. That makes it really, really hard to figure out, for example, how a group of ifs and whiles works, because if you have a program with quite a few control flow instructions like those, all of them will end up in exactly the same place, and then you will need to figure out, from a value that will be in a register, which path the execution will follow afterwards. Constant obfuscation is also a pretty classical technique; I guess that if any of you has been doing CTF challenges, you probably know it. The classical one is taking a string and XORing it with a specific value, but there are quite a few other techniques. You can, for example, store some of the constants in memory and then fetch them, so that the constants get more spread out. You can also use additions, subtractions, and in general any operation that is reversible, in order to get back the original constant that you had. Then you have opaque predicates. Opaque predicates are basically functions that will always return the same result, no matter which values you put in as inputs, but that are really complex and therefore really hard to analyze. Then we have register swapping. It's a really old technique, especially if you want to get polymorphism.
Polymorphic code is code that looks different on every generation, and register swapping basically tries to change the registers that are used by the different instructions. Register swapping, obviously, is impossible to do in the intermediate representation, because there are no physical registers there — basically, you have an infinite number of them and you assign a new register every time you emit an instruction. But I will show you some tricks you can do to make the resulting code end up with register swaps anyway. And finally, we have dead code insertion, which basically involves putting code inside of your program that does nothing and will never be used, and code reordering, which is basically a way to get a lot more different programs out of the original program by placing the instructions in different places. So, sorry. Okay. I'm sorry. My window manager just popped up because I had to restart my computer. So, as you can see, basically the main reason why we want to mix LLVM with obfuscation is that we can then use these obfuscation transformations in a lot of different places. We can use them with JavaScript code. We can use them with C code. We can even use them with Haskell code. I think Rust also uses LLVM as a backend nowadays, but my memory might be a bit flaky. And then you can also use the resulting code on a lot of different architectures. I mean, even the Radeon processors — sorry, the Radeon graphics cards — are supported by an LLVM backend. So you can get an idea of the broad range of architectures that you can target when you use LLVM. And the main reason for this is that you only need to write your code once and then use it in a lot of different places, which makes life a lot easier. Of course, there is a really big drawback, and that is that it's really hard to use very architecture-specific obfuscation techniques here, in which case you probably want to write a specific tool for the architecture you are going to use.
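To make two of the techniques just listed concrete, here is a minimal hand-written sketch (this is illustrative C++, not code from the talk's tool): an opaque predicate that always evaluates to true, guarding a piece of inserted dead code.

```cpp
#include <cassert>
#include <cstdint>

// Opaque predicate: x*(x+1) is the product of two consecutive integers,
// so it is always even -- this function always returns true, but that
// is not obvious to someone statically reading the code.
bool opaque_true(uint32_t x) {
    return (x * (x + 1)) % 2 == 0;
}

// Dead code insertion: the branch guarded by !opaque_true(x) can never
// run, but a reverse engineer has to prove that before discarding it.
int obfuscated_add(int a, int b, uint32_t junk) {
    if (!opaque_true(junk)) {
        a = a * 31 + b;   // unreachable decoy computation
    }
    return a + b;
}
```

Note that unsigned wraparound does not break the predicate: even at `x = 0xFFFFFFFF` the product wraps to 0, which is still even.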
So, control flow flattening is the first of the techniques that I am going to talk about. As I told you, the idea is basically to put all the control flow transfers in the same place, and then at that place decide where execution is going to continue. The way in which we do it in LLVM is using a phi node in order to choose a specific value depending on the source node, that is, which of the different basic blocks — the possible execution blocks — the execution flow is coming from. In case we have an if or another conditional instruction, then we can use a switch beforehand in order to choose one value or the other without having to add a new branching instruction. And the main idea, basically, is that we have to map our conditional instructions into specific conditional moves and other conditional instructions, so that the jumps will always be directed to the dispatch block. Another trick that is really important here and that helps a lot is to add an initial block at the beginning of the function, so that all of the function's initialization will not be at the beginning, but after the first jump. How do you detect this? Well, there is always going to be one really big node in your control flow graph that everything goes through, and that is basically the main way in which you can detect that this kind of technique is being used. And you will probably see, at least with my specific tool, that there are absolutely no conditional jumps going on — all of the jumps will always be unconditional. How do you reverse it? Well, if you can calculate the destination after the second jump — that means after you have gone over the main block that decides where the execution flow will continue — then you can regenerate the original control flow graph, and by doing that, you will be able to get the original code back. Before anybody asks: no, I don't have code for that, because my main work here was on obfuscation techniques.
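The flattening idea above can be sketched at the source level (hand-written C++, not the pass's output; the real transformation works on LLVM IR with a phi/switch pair playing the role of the state variable): every block jumps back to a single dispatcher, which picks the next block.

```cpp
#include <cassert>

// Original control flow:
//   int sum(int n) { int s = 0; while (n > 0) { s += n; --n; } return s; }
//
// Flattened version: one dispatch loop, with a state variable choosing
// which former basic block runs next. Every block "jumps" back to the
// dispatcher instead of directly to its successor.
int flattened_sum(int n) {
    int s = 0;
    int state = 0;                 // which basic block runs next
    for (;;) {                     // the single dispatch point
        switch (state) {
        case 0:                    // loop header: test n > 0
            state = (n > 0) ? 1 : 2;
            break;
        case 1:                    // loop body
            s += n;
            --n;
            state = 0;
            break;
        case 2:                    // exit block
            return s;
        }
    }
}
```

In the compiled output this is exactly the "one big node everything goes through" pattern described above: the dispatcher dominates the whole function.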
This is basically in case anybody wants to follow up on this research. And, well, you can also convert the combination of a comparison instruction plus a jump back into a specific conditional jump. So, here is the first demo — or well, in order to explain how this works... If we go here, I need to open a diff, and I guess that, yeah, these are the cascade-constants ones, not control flattening; we are going to go for these two files. So, here you can basically see... let me make it a bit bigger if I can. Okay, basically here you can see the difference between one function and the other. Here on the left you have the obfuscated code, and on the right you have the originally generated code. As you can see, we basically have a main block, and every other jump that we have in here jumps back to the original block. So, you will see a lot of jumps here going back to the main block. If we continue, we can go over constant obfuscation. Constant obfuscation is another technique to make it really unclear what you are doing. For example, it's a lot easier to understand 3.1415 being pi, as compared to having, say, the XOR of two different values which is then converted into a floating point number. The way in which it works is basically that we choose a constant that we can obfuscate — because there are a few constants that have to remain constant inside of LLVM code. Then we pick one of the techniques; in my case, I have only implemented two. One of them is obfuscating the constant using different arithmetic operations, and the other is using an access to a global variable that contains all of the constants in memory. Then, when we are done with this step, we can decide to obfuscate everything again. We do that in order to make the code a lot larger and a lot harder to follow.
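Both implemented variants — rebuilding a constant from reversible arithmetic, and fetching it from a global pool — can be sketched like this (illustrative C++; the specific values and names are made up, not taken from the tool):

```cpp
#include <cassert>
#include <cstdint>

// Arithmetic variant: rebuild the constant 2 at run time from reversible
// operations (XOR, then a shift), so the literal never appears directly.
// This mirrors the demo shown later: an XOR producing 1, then a
// shift-left by 1, which is a multiplication by two.
uint32_t hidden_two() {
    uint32_t a = 0x5A3C ^ 0x5A3D;  // the two values differ only in the
                                   // low bit, so this XOR yields 1
    return a << 1;                 // 1 << 1 == 2
}

// Global-table variant: constants are pulled out of a pool in memory
// instead of appearing as immediates in the instruction stream.
static const uint32_t kConstPool[] = {0xDEADBEEF, 2, 0xCAFEBABE};
uint32_t hidden_two_from_pool() { return kConstPool[1]; }
```

Applied repeatedly (the "obfuscate everything again" step above), each intermediate value can itself be re-encoded, which is where the code growth comes from.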
You have to be really careful when you do that, because if you choose a really high probability of re-obfuscation, you might end up filling your memory with useless code and producing a program so large that you cannot continue processing it afterwards. The way in which you can detect this technique is that there are going to be really few instructions that actually have constant operands, and the ones that exist are usually not properly optimized, which is a really weird thing to see in most code. You will also see a lot of memory accesses just to fetch constant values. And another problem I have is that I am not used to working with two screens at the same time — sorry about that. Okay, so the demo that I have for this one is this one: start.s, capital S. So here you can see an example of the differences between one function and the other. If you go to the beginning, you will see, for example, that the original code on the right is fairly simple. It will just do a printf of the number of arguments multiplied by a set of different constants: one, two, three, four, five, etc. If you look on the left, you will see that, for example, in order to calculate the value two, we are basically doing an XOR to get a value and then shifting by the constant one, which is basically a multiplication by two. So as you can see, this is a really good way of generating a lot of useless code that will be hard to understand for a person who is trying to reverse it. It will, of course, affect performance if you abuse it too much, so you should always be really careful. Okay, so to continue with this, the next technique that I am going to talk about is splitting basic blocks. For those of you who don't know, a basic block is basically a set of instructions that has no jumps inside — a completely linear sequence of instructions.
You know that the control flow will always start at the beginning of the basic block and end at the end of the basic block without leaving the block in between. Well, that's not exactly true, because in LLVM some instructions that transfer control, like calls, can actually appear inside a basic block, but in general, that is the idea. What we do is divide these basic blocks in the middle by adding some extra unconditional branches, because that will make things like control flow flattening a lot harder to defeat: if instead of having 50 lines of code doing the initialization of the program, you have five different blocks with ten lines each, it's going to be a lot harder to understand what it is doing, and it's going to take significantly more time to figure out what the program is trying to solve. The way in which this can be detected is that you will see a lot of unconditional jumps inside of your code. A lot of them will be totally useless, and that is what makes the change really easy to spot, because there is usually no reason to put an unconditional jump when there is only one path out of that specific block of code. And the way in which you can obviously reverse it is by merging these kinds of blocks back into a single one. This can also be automated; as I said, this was not part of my work, so hopefully somebody will do it at some point in the future. Then we can also reorganize these basic blocks in a completely different fashion. This is a really trivial transformation, and the reason why it works is that each block is completely independent from the others. So as long as the first block, which is the entry block, stays the same, everything else can be reordered any way you want. And it can be detected because, usually, when you generate code with a compiler, it will try to follow the code that you had originally from top to bottom.
So it's quite unlikely, for example, that you will find a return early in the code; instead, it's more likely that you will find the main return towards the lower parts of the code. And you can probably try to reverse it, at least in part, by using a breadth-first ordering when you are writing the original blocks out into the new code that you want to generate. I will show you the demos of these later, because that's probably going to speed up the talk a little bit, as I am having a lot of problems here. The instruction-randomizing transformation basically keeps track of all the dependencies between the instructions and then reorders the instructions completely randomly. The idea is that you pick one of the instructions that has no pending dependencies, issue that instruction as the next one in the new basic block, then mark the instructions whose dependencies are now all satisfied as eligible to be issued, and repeat the process. If this sounds familiar, it's because, if you have been working with hardware architectures, this is basically what Tomasulo's algorithm does inside of an out-of-order processor in order to speed up code execution — except we are doing it inside the compiler instead of the processor. It's a bit hard to detect this kind of thing, because the way in which a compiler issues the code inside of a basic block, especially after optimizations, is fairly arbitrary. But you can either try to detect that the code is really, really unoptimized, or has really weird optimizations, or you can try to find code that should be much farther down being interleaved into the middle of a really complex expression that is being calculated. It can be reversed if you try to impose a canonical ordering on the way in which instructions should be issued.
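The issue loop just described — pick a ready instruction at random, emit it, release its dependents — can be sketched as a seeded random topological sort (illustrative C++; function and variable names are mine, not the pass's):

```cpp
#include <cassert>
#include <random>
#include <vector>

// Randomized list scheduling in the spirit of Tomasulo's algorithm:
// deps[i] lists the instructions that instruction i depends on. We keep a
// count of unsatisfied dependencies per instruction and repeatedly emit a
// randomly chosen instruction whose count has reached zero. The same seed
// always reproduces the same "random" order.
std::vector<int> random_schedule(const std::vector<std::vector<int>>& deps,
                                 unsigned seed) {
    std::mt19937 rng(seed);
    const size_t n = deps.size();
    std::vector<int> remaining(n);              // unsatisfied deps per inst
    std::vector<std::vector<int>> users(n);     // who waits on each inst
    for (size_t i = 0; i < n; ++i) {
        remaining[i] = static_cast<int>(deps[i].size());
        for (int d : deps[i]) users[d].push_back(static_cast<int>(i));
    }
    std::vector<int> ready, order;
    for (size_t i = 0; i < n; ++i)
        if (remaining[i] == 0) ready.push_back(static_cast<int>(i));
    while (!ready.empty()) {
        std::uniform_int_distribution<size_t> pick(0, ready.size() - 1);
        size_t k = pick(rng);
        int inst = ready[k];
        ready.erase(ready.begin() + static_cast<long>(k));
        order.push_back(inst);                  // "issue" the instruction
        for (int u : users[inst])               // release its dependents
            if (--remaining[u] == 0) ready.push_back(u);
    }
    return order;
}
```

Any order it produces respects the dependencies, which is why the transformation preserves program behavior while still scrambling the instruction stream.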
And, well, that will not completely defeat the obfuscation technique, because the code will still be different from the original one, but it will make the code a lot easier to read afterwards. Then we can also reorder functions randomly, and that will make the code vary a lot. This is usually not a good thing if you want to ship small patches, but it's really useful if you want to make the code a little bit harder to follow, because most programmers will try to keep a more or less sensible ordering of their functions — for example, you declare the functions that are going to be called first, and then the functions that call them afterwards. But you have absolutely no reason to follow this kind of ordering; you can put the functions in any order you want, and that is exactly what we are doing here. Same thing with globals. You usually put globals in a specific order in your code — for example, close to wherever you are going to use them — but you can just place them wherever you want and change the order completely. These transformations can be detected easily, and they can be reversed, for example, by ordering all of the functions and globals alphabetically. By doing that, you basically defeat the whole transformation when you try to detect changes between the original code and the code that was obfuscated. Then we finally have the swap-ops transformation. This is a way to get a lot more polymorphism, and it's the closest thing we can do to a register swap when using the LLVM intermediate representation. The idea is simple: if we, for example, are adding one plus two, we are going to get the same result if we do two plus one. So we take the operations whose operands can be swapped and randomly decide whether we want to swap them or not. So the code will look different every time we execute this transformation — unless, of course, we make exactly the same choices.
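A minimal sketch of the swap-ops idea (illustrative C++; the emitter below is a stand-in I wrote for this note, not the pass's API): for a commutative operation, a seeded generator decides whether to emit "a op b" or "b op a", so the choice is random yet reproducible.

```cpp
#include <cassert>
#include <random>
#include <string>

// For a commutative operation like + or *, randomly decide whether to
// swap the operands before emitting. Because the decision comes from a
// seeded generator, re-running with the same seed makes the same choices,
// which is exactly the reproducibility property discussed above.
std::string emit_commutative(const std::string& a, const std::string& b,
                             char op, std::mt19937& rng) {
    bool swap = std::uniform_int_distribution<int>(0, 1)(rng) == 1;
    const std::string& lhs = swap ? b : a;
    const std::string& rhs = swap ? a : b;
    return lhs + " " + op + " " + rhs;
}
```

Either output computes the same value, so behavior is preserved; only the textual/structural form of the code changes between keys.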
The way in which we can reverse it: well, you can try to reorder the operands using a properly defined ordering — for example, you can put constant accesses first, then register accesses, and, say, put the register accesses that are defined closest to the start of the function first. That will make the code always have the same ordering, which means that different versions of the code that have had this algorithm applied will always look the same. Of course, this is not as easy as it seems once you convert the code to assembly, because you will need to recover the original LLVM code from the assembly code. So, now that we are done with this part of the talk, and before I get to why I actually did this work, I think I'm going to try to show you the transformations, because that's probably a lot more interesting for you. I have already shown you the control flattening and the constant obfuscation transformations, and what I am going to try to show you now are the other ones — for example, the basic block splitting first and then the random reordering ones. So, for example, if we open this code — I'm going to put it on the screen for all of you to see — this is basically the BB split function, and as you can see, what was at the beginning a really straightforward piece of code with absolutely no branches is becoming much more hellish code to follow, because you have a branch every few instructions. As I said, this by itself is pretty useless, because it is really easy to reverse and doesn't provide much of an advantage, but if you apply control flattening afterwards, this transformation becomes really, really strong, because the control flattening will force the person who is trying to reverse the code to go back, every few instructions, to the main node that decides where the execution flow will continue. So that is one of the demos.
The next one is for random reordering of basic blocks, and you will see that this one is actually really simple too. So, if I move to the screen here, you can see that although the execution flow is mostly the same, the functions change a lot, and the reason is that we are now emitting the blocks in a random order. If we check the code on the right, which is the original code, you will see that the control flow of the function is pretty much linear: you have an if, then you decide to go to one else branch or the other, and then you go to the end. But if you check the code on the left, you will see, for example, that the strcmp that was used as part of the second if, at the beginning of the code, is now at the bottom, and the jump that was at the beginning of the code now jumps way down to the bottom instead of staying at the top. So, as you can see, this will make it a lot harder to follow the control flow, because it will make it significantly harder for the reverse engineer to see how the code was originally arranged when it was compiled. In a similar way, if we use the instruction-randomizing transformation, you will see a similar effect: the expressions that we originally had have a completely different ordering in the obfuscated code. If we run different iterations with different obfuscation keys, as you can see here, the code will always look fairly different from the previous compilation, which is basically our objective here. So, if we, for example, use this one, where I ran a different obfuscation key, you can see that a lot of the loads and a lot of the arithmetic expressions below have been ordered in a completely different way. It's the code that is marked in yellow and green on the left, and in yellow and blue on the right.
So, the objective of this specific transformation is making it significantly harder for an attacker, or whoever is trying to analyze our code, to see what the original control flow of our program was. And here you can see a third example, to show that it really does make big differences whenever you change the key. So, if we go to the swap-ops transformation, which is also really interesting, you will see a similar behavior here. This code has a significantly smaller number of operations where I can actually do the swapping, but here you can see that, in comparison with the original code, we are swapping some of the operands that we use in some of the adds and the multiplications, because they don't need to stay in that order, as both of those operations are commutative. Another operation that is also commutative is the XOR. And yeah, I know, I have 15 minutes left. So, we can also see that the result is very different if we change the keys that we use for the obfuscation — or, better said, the way in which the random number generator is going to work. Here you can see that we get a different swap by changing one of the bits of the key, and here you can see another different one when we change some other parts of the key. So, the idea is that every time you change some part of the key, you will get different code generated at the end, which will make it a lot harder for an attacker to figure out what has been changed. Then we finally have the demonstrations for the random function and globals reordering, and, well, these are pretty trivial. So, this is the original code, and this is the obfuscated code. Here you can basically see that we have a lot of globals at the beginning of the file. On the left they are ordered alphabetically, because that is the order they had in the original code. On the right, as you can see, they are ordered pretty much randomly. And the same thing happens with the functions that go below.
They get a completely different order every time this transformation is executed. And we can actually see that the ordering changes every time we compile using a different obfuscation key. So, here you can see a completely different version of the same code, and finally I can show you even a third one. So, you can see that the code changes positions a lot every time you use this transformation. Well, this is the first part of the demo; I will show you a full demo using all of the transformations a bit later. And let's see if I can get my... yes, perfect. So, let's talk a little bit about why I did this thing. Usually, when you have a vulnerability that you want to patch, attackers need to figure out what has been changed in order to try to exploit that vulnerability — supposing it's not already known. And that basically means that if you, for example, sit down on Microsoft's Patch Tuesday and check the little pieces of code that changed, you might be able to find a buffer overflow and use that as the basis of a new exploit. So, the main problem we have here is that compilers usually generate the same code for the same inputs. Okay, please, now is when you can laugh, because you know that this is not actually true, and that is the reason why, for example, Debian is putting a lot of effort into getting a compiler that will always return the same code, so that you can actually verify that the compiler works the way you intended it to. But most of the time compilers do generate code that is the same or pretty similar, because the amount of change that is introduced is small, and that means the changes will at most affect the function in which they were introduced and not the whole program. So, what we can do, in order to always generate the same transformation, is use a cryptographic random number generator that we seed always with the same value.
As long as the value is the same, this will work. Of course, for this to work correctly, you need to make sure that the cryptographic random number generator gets a different seed every time you use it. In my case, I use a different seed for every module and for every function that we are going to obfuscate. The seed is basically derived from the name of the function, the name of the module, and a specific obfuscation key that is passed at compilation time, either as an extra transformation pass or as an attribute of the function. The second problem we have is that, well, you can still reorder the code to find what was changed. That is why we implemented the obfuscation techniques here — why we used control flattening and constant obfuscation — so that even if you reorder the code, it will still take you a longer time to figure out what exactly was changed. And finally, the problem we have is that... sorry. I think I can fix it. Yes, let's see, okay. Well, in retrospect, given how the rest of the week has been, this is the best day ever. Anyway, patches should be small, and we cannot change the whole program every time we do a compilation. So, that is why we use specific keys in order to do the obfuscation. Thank you. Basically, the idea is that we will only obfuscate the specific functions or modules that we tell the compiler to obfuscate, and they will be obfuscated in exactly the same way every time we do so, as long as we use the same set of obfuscation transformations and the same obfuscation key. So, if we have to patch code that we have already obfuscated, the unchanged parts will come out the same every time. So, that's basically the boring part of the talk. Now I'm going to go for the real demo. I am going to try to show you how hard to follow a simple Hello World example can be, and I guess for that it's better if I move my console here.
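Before the demo, the per-function seeding scheme just described can be sketched as follows (illustrative C++: the FNV-1a hash and the separator are stand-ins I chose for the sketch, not necessarily what the real tool uses):

```cpp
#include <cassert>
#include <cstdint>
#include <string>

// Derive a deterministic per-function seed from the obfuscation key, the
// module name, and the function name. Same inputs always give the same
// seed, so an unchanged function re-obfuscates to identical code; a new
// key changes every seed, so the whole output changes.
uint64_t derive_seed(const std::string& key, const std::string& module,
                     const std::string& function) {
    uint64_t h = 1469598103934665603ULL;    // FNV-1a 64-bit offset basis
    for (char c : key + "|" + module + "|" + function) {
        h ^= static_cast<unsigned char>(c);
        h *= 1099511628211ULL;              // FNV-1a 64-bit prime
    }
    return h;                                // feeds the (CS)PRNG driving
                                             // every random choice
}
```

Note that a plain hash like this is only for illustration; the talk specifies a cryptographically secure generator, precisely so an attacker who sees the output cannot recover the key or predict the choices.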
So that you can all see it — or, even better, I'll mirror the screens. Yeah. So, now you can basically see the same thing I'm seeing, which will make it a lot easier to follow. Let's make the code a little bit bigger so that you guys can follow what I am doing. And yes, I am one of those evil, evil people who actually names his folders in Spanish, in case you're wondering — be happy that I don't speak Euskera. So, here we have the demo, and the code that we are going to obfuscate is this simple Hello World program. As you can see, it basically has a small array of 1,024 bytes; it reads a name and prints hello plus the name, and if the name is empty — I mean, if you don't input anything — then you will just get world. So, yep. If we go and compile this code, which we are going to do here, the first step basically generates an LLVM intermediate representation version of the code, because I want to show you that even though the code generated by the compiler is always the same, the results will be different when we change the obfuscation keys. And what the second line is doing is actually compiling that intermediate representation. As you can see, if we execute the resulting program, it works as it should. So, now we are going to go for the fun part and start doing the obfuscation passes. This first line is basically running all of the transformations in an order that I found works really well. First, we add a key to the whole compilation unit, which is going to be the example key. This key is used to seed the cryptographically secure random number generator. Then we are going to propagate this key to all the functions. We are going to split the basic blocks. Then we are going to flatten the control flow graph. We are going to cascade the constants, and we are going to re-obfuscate the constants one out of every five times.
And we choose such a small value in order to prevent exponential growth of our code. Then we have the random instructions pass: we are going to apply the randomizing transformations. And finally, we are going to compile all of it into a program that will work. So, as you can see, things aren't working as I expected. I'm interested in why it's not working here, because I didn't generate assembly code. Yes, thank you — very close. Okay, so, see, this is what happens when you prepare demos really late in the afternoon or at night, as I said. This is the original code, and now it should work as it should. So... no. Well, fortunately, I already have the compiled code here, so you can actually see what is going on. This is the obfuscated version of the code, and as you can see, it works exactly the same. And we have different versions of it. If we obfuscate with the same obfuscation key — thank you — we are going to get exactly the same values. So, for example, if we go to the off and off-new versions, which used the same obfuscation key, as you can see, the checksums are exactly the same, which means that the code that was generated is the same. But, on the other hand, if we use a new obfuscation key, the code is wildly, wildly different. Just by checking the resulting assembly — which obviously you cannot do directly with the binary files — you can see that we have quite a few differences between the original program's control flow and the resulting one, especially when we go to the main function. Basically, none of the code looks at all like what we had before, and the new code is significantly longer and a lot harder to follow. And similarly, as you can see, when we change the obfuscation key — now the new code is on the left — we get a lot of changes in the resulting function as well. So, this is basically the result of using this compiler.
And since I still have four minutes left, I think it's better if we go to the question round, because I am not fully sure I will be able to fix this in time. Okay. Thank you very much. And a warm applause. So, yeah, we have another three minutes for questions now. We have two microphones in the middle of our hall. So, if you have a question, please move to them, go to the microphone, and then you can ask your question. Please, questions and not so many comments, if possible. Okay. Thank you for your session. I've got one question. What's the overhead, size and runtime overhead, after obfuscation? Thank you. So, the overhead in size depends a lot on the specific values that you use when you are applying the transformations. If you, for example, use low values for the basic-block split and the constant obfuscation adjustments, the overhead will be reasonably small; maybe you will get a two-fold size increase in the code. When it comes to performance, at least in my tests, I could not notice a significant difference at all, most likely because the code that was added was largely independent of the rest of the code, and that resulted in the processor being able to optimize most of it away. In fact, a really important thing to take into account here is that optimization passes can totally destroy obfuscation, which is the reason why we perform the optimization first and then the obfuscation steps. Thank you for the question and for coming, actually. Are there any more questions? Come on, don't be shy, I will not bite anybody. Any question, no matter how stupid it might seem. It doesn't seem so. So, thank you very much, Klondike. Oh, wait, I think we have one over there. Is there someone coming? Yes. Sorry, I don't know if I heard the answer for this one, but is this project open source? Do you have a link? Because I didn't find one in the talk description or anywhere I could just take a look. Yes.
No, actually, I am a really evil person and I decided to make this project completely closed source, as is usually done in academia, and you can absolutely not get this code. And this is absolutely not a sarcastic remark. Thanks for reminding me, because I actually kept half a slide that contains all of the links that you guys might be interested in. This code is indeed open source, and I really would love to see you guys using it and asking me any questions you have, because I could not have actually come here if it wasn't for that. So, as you can see, here you have the... This is the link to my master's thesis, and here is the link to the code that I used for this demo. I will probably release a slightly updated version, because I have taken some of the time at night, whilst you guys were getting drunk and partying, to get a proper working version for the demos that you have seen today, as I had lost the original one. So now the code at least works with LLVM 3.4.2, which is old, I know, but, well, it's at least one stable version that you can work on, and that is easier to get than the bleeding-edge version of LLVM that I originally used when I was doing this work. So, hopefully that answers your question. Yeah, yeah, yeah. Perfect. Thank you. Yeah, thank you. Thanks, thanks, actually, for asking; I had absolutely forgotten to put those links in. Okay, thank you, Klondike, and a warm applause for you and your work. Can I just say one thing? Yeah, you can say one thing. Thanks a lot to all of you that have come, and especially to the people at BornHack, because I gave a significantly worse version of this talk there, and that is what allowed me to give a much better version here. So, thanks a lot, especially to you and the really nice video girl that was helping me a lot, and to the poor translator over there that is trying to translate some of my jokes in my really thick Spanish accent. So, thanks a lot to all of you. Thank you.
And yeah, a warm applause for our translators. It's always very good.