act to beat, actually. But I'm going to try. So welcome, everyone. My name is Daan Keuper. I'm here with my colleague, Thijs Alkemade. We work for a company called Computest, which is a full-service security provider in the Netherlands. We do everything from penetration testing to incident response. The two of us run the research department of Computest, which means we get to hack cool stuff and talk about it at conferences. We did a talk yesterday as well, and tomorrow we'll be talking about hacking macOS for local privilege escalation. So if you have any questions after this talk, feel free to come to our tent. It's on the retro square; we have a tent full of arcade games, if you're into that. But we can also have a beer, of course. Today we would like to talk to you about something we did last year. Last year we participated in Pwn2Own, and we won by demonstrating a zero-day attack against the Zoom messaging client. That meant that if somebody had the app installed and running, we could remotely get code execution on their machine. We want to share all the details, but we have very little time and a lot of ground to cover. So I will give the slides to Thijs, and he will quickly tell you about the vulnerability we found and mainly used in our exploit. And then we'll get into all the nifty details of how we actually exploited that vulnerability. Thijs. Yeah. So the vulnerability we found is in the chat functionality of Zoom. Many people know Zoom for the meetings: you just send a calendar invite and people can join. But Zoom actually has a fully featured chat client built in as well, which I think many people don't really use that often. When we started figuring out what features Zoom has at the beginning of our research, we noticed that Zoom basically uses a couple of different protocols, depending on what you are doing. There's XMPP, which is used for chat — the out-of-meeting chat.
There's an HTTP API for specific operations, like logging in. And they have a custom binary protocol for meetings, which is UDP-based. Pretty early on, we decided that the XMPP connection would be the most interesting for us. Because XMPP is extensible by design, you can basically send arbitrary XML directly to another user, and often the server doesn't really care about it; it just forwards it to the other end. So there's little enforcement by the server — there are a couple of things they block, but really not that much. And because XMPP is an open standard, we were already quite familiar with it. It took less time to reverse engineer than the meeting protocol, which would probably have taken weeks just to figure out its basic structure. With XMPP, we already knew that. And what we noticed is a very obscure setting: advanced chat encryption. It's a setting that I think barely anybody uses. You can only enable it on the website; you cannot do it in the client. When you enable it, all of your chat messages are encrypted. And it only exists for paid accounts, so if you have a free Zoom account, you cannot enable it. Also, a lot of features in the chat no longer work — I think you can no longer send GIFs to somebody if you enable this. What's even more interesting is that there were two versions of this protocol in the code: an older version 1 and a version 2. If you started a new chat, it would use version 2, but if you sent a version 1 message, the client would still respond to it. The protocol was still implemented, even though it was no longer in active use. So if you combine all of this with the fact that it was using OpenSSL, we thought this was a good area to focus our research on. Because there's a conversion between C++ — most of Zoom's custom code is C++ — and the C-based OpenSSL API.
OpenSSL is, of course, open source, so we can easily read the documentation or even the source code. It's a relatively hidden feature that many people don't know about, so developers probably don't care about it as much either. And then there's an even older version of that protocol, so probably nobody really looks at that part of the code. That makes it very interesting code for us. Now, the way this chat encryption works is that when you send a message, the client generates a new encryption key, encrypts the message with that symmetric key, and sends it to the other party. The receiver then sees a new message with a key they haven't seen before. So at that point, they're going to retrieve it from the sender: they request the key using a RequestKey message, and the sender of the message replies with a ResponseKey message. In this message, the sender encrypts the message key using the certificate of the other user — the RequestKey contains that certificate — and sends it to the other person. What we found in the code was that they allocated a 1,024-byte buffer for this key. And there were two variants in the code. There's RSA, and there it works perfectly fine. But they had also implemented an elliptic curve option that used elliptic curve Diffie-Hellman and then AES. In this case, they did not properly verify that the length of the decrypted result would fit into that buffer. So what we have here is a typical buffer overflow: they try to write too much data — arbitrarily long data — into a fixed-size buffer. To make it a bit more visual: we, as an attacker, send a message to Zoom, and we'd like to send an encrypted message. Zoom says, well, sure, send me that encryption key, please. And then we send a very long encryption key. But Zoom tries to put it into a buffer of a fixed size, which triggers this buffer overflow.
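The bug class described here can be sketched in C. This is our own hypothetical reconstruction, not Zoom's actual code — the function names (`decrypt_key`, `handle_response_key_*`) and the stand-in "decryption" are assumptions made purely for illustration:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

#define KEY_BUF_SIZE 1024

/* Stand-in for decrypting the attacker-supplied blob: the output is
 * as long as the input, and the attacker controls the input length. */
static size_t decrypt_key(const unsigned char *in, size_t in_len,
                          unsigned char *out) {
    memcpy(out, in, in_len);   /* "decrypted" data */
    return in_len;             /* decrypted length == attacker-chosen length */
}

/* Vulnerable pattern: fixed-size buffer, no check on decrypted length.
 * If len > 1024, this writes past the end of key_buf. */
static size_t handle_response_key_vuln(const unsigned char *blob, size_t len,
                                       unsigned char key_buf[KEY_BUF_SIZE]) {
    return decrypt_key(blob, len, key_buf);
}

/* Fixed pattern: reject anything that cannot fit. */
static int handle_response_key_fixed(const unsigned char *blob, size_t len,
                                     unsigned char key_buf[KEY_BUF_SIZE]) {
    if (len > KEY_BUF_SIZE)
        return -1;             /* decrypted result would not fit: refuse */
    decrypt_key(blob, len, key_buf);
    return 0;
}
```

The essential point is the missing length check on the decrypted result before it is copied into the fixed 1,024-byte destination.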
Now, this buffer overflow has a couple of pros and cons for writing the exploit. One thing that's nice about it is that we can send as much data as we want. There's no practical limit on what we can overwrite — there may be some limit on the network traffic or the XMPP packet, but it's a lot of data. And there are no restrictions on the character set; we're not limited to UTF-8 or anything like that. Also, very nice for the competition: we can trigger this using only chat messages, without the victim having to do anything. They don't need to approve anything; they don't need to click anything. As long as they allow us to send them a message — which is one step they had to do once before, or they are part of the same organization — we can just send them this message and the buffer overflow triggers. But there are also a couple of cons which make it harder to exploit. The allocation size of the buffer is 1,040 bytes, which is a little more than 1,024 due to some overhead. But it's fixed: we cannot make it larger or smaller; it's always that size. We're also limited in how far we can overflow, because the length needs to be a multiple of 16 for reasons related to the encryption. The most tricky one is the last: the buffer is created, and then the buffer overflow is triggered, all while handling one chat message. The reason this makes our work harder is that normally, when you want to exploit a buffer overflow, it's nice to put one object in memory, then put another object after it, and then use the first object's overflow to overwrite the second. But we couldn't do that, because the buffer was created and overflowed during a single message. So whatever we wanted to overwrite already had to be there in memory. This really made the exploit a lot harder. Now we're going to walk through the various steps of the exploit, and first Daan will talk about heap grooming. Sure. Yeah. So there are four steps in nearly every exploit.
The first is grooming the heap; I will talk about that in a minute. Then you need to leak some information — we call it an information leak — because most things are randomized in memory, so you need to know where stuff is. Then you need to hijack the control flow: you want to determine which function the application calls. And then you build what we call a ROP chain to actually execute code of your own. So let's start with the first one, the heap grooming part. The goal of this step is to make sure the application doesn't crash and to make room for what we need. We can trigger a buffer overflow, so we can overwrite arbitrary data on the heap. However, we don't want to overwrite random data; we want to overwrite some very specific data that we put there. So we need to make sure that whatever we overwrite sits behind the buffer that will eventually be allocated and overflowed. The tricky part here is making sure the right object is behind our buffer on the heap, because if another object is there that we didn't expect or account for, the most likely outcome is that the application will crash due to an invalid memory access. So we need the heap to be in a predictable state. For those who are not too familiar with it: the heap is the part of a program's memory used for dynamic allocation. If you don't know the size of certain objects at compile time, you typically use the heap for them. Those who know C might know the malloc and free functions, which are used to request space on the heap and to return that space to the heap allocator once you're done with it. How the heap is actually implemented varies by platform, or even between major or minor software updates. We were targeting Windows, and Windows has two heap implementations. The first is the old NT heap, which is still commonly used.
And there's the newer segment heap, which is only used for, I think, Metro apps and some other applications. But almost all applications use the old NT heap, and Zoom was no exception. So how does the NT heap work? Well, free blocks are ordered by size in a linked list, and whenever you request a block, the allocator walks that free list and takes the first block that fits your needs. So if you request 1,024 bytes, it looks for the first block that contains at least 1,024 free bytes. And if you deallocate something using free, it merges adjacent free blocks: if the block you just freed is adjacent to a block that was already free, the two are merged into one bigger block, which is inserted into the free list. One particular thing is that the NT heap also has a feature called the low fragmentation heap (LFH), which is used for allocation sizes that are very common; it uses a whole different heap algorithm. The low fragmentation heap is triggered whenever you do 17 allocations of a certain size. Once that happens, all requests of that specific size are handled by the low fragmentation heap — there is no going back to the NT heap once a particular size has been sent to the LFH. The low fragmentation heap is used for common allocation sizes to reduce fragmentation, that is, to leave less free space that can no longer be used on the NT heap. Similar sizes are handled by what's called a bucket: everything in a small range of sizes — say, from around 1,024 up to roughly 1,080 bytes — is handled by the same bucket. And the thing about the low fragmentation heap is that it is not deterministic. The NT heap is deterministic: you can predict what the free list will look like, and it will always pick the same block, so you can reason about what the shape of the heap will be on the other end.
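The two behaviors described here — the first-fit free-list walk and the 17-allocation switchover to the LFH — can be modeled with a toy sketch. This is not the real Windows allocator, just an illustration of the rules the talk describes; all names are our own:

```c
#include <assert.h>

#define LFH_THRESHOLD 17   /* allocations of one size before the LFH kicks in */

enum backend { NT_HEAP, LFH };

struct size_class {
    int alloc_count;
    enum backend backend;
};

/* Record an allocation in this size class; once 17 allocations have
 * been seen, the class is permanently served by the LFH. */
static enum backend allocate(struct size_class *sc) {
    if (sc->backend == NT_HEAP && ++sc->alloc_count >= LFH_THRESHOLD)
        sc->backend = LFH;           /* no way back to the NT heap */
    return sc->backend;
}

/* First-fit: index of the first free block large enough, or -1. */
static int first_fit(const int *free_sizes, int n, int request) {
    for (int i = 0; i < n; i++)
        if (free_sizes[i] >= request)
            return i;
    return -1;
}
```

The one-way switch is what made the grooming strategy later in the talk necessary: once the 1,040-byte size class is on the LFH, all further allocations of that size are randomized.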
The low fragmentation heap uses randomization: there is no fixed block it will return when you request memory. This is done specifically to make exploitation more difficult. So if something is on the low fragmentation heap, precise grooming — making sure that the block behind our buffer contains exactly what we want it to contain — is nearly impossible, due to the randomization. I'm going to demonstrate that here. This is the low fragmentation heap for a particular size. It has three buckets, but it's only using bucket one. If you request something, it will pick a random free block in the first bucket, up until the point the first bucket is full, and it will then allocate randomly in the second bucket. So our original plan was this: the buffer we can overflow is 1,040 bytes, and we tried to use the NT heap to shape the heap into a predictable state. But this is very difficult, because the Zoom application creates hundreds of allocations every second — it talks to API endpoints, et cetera. So the heap is really messy. But even if the application didn't do anything else, even sending a single message left the heap in a very unpredictable state, because when parsing a message, it would copy every string ten times, et cetera. So shaping the NT heap proved very difficult. And there was a second problem: every now and then, the size we used, 1,040 bytes, was already handled by the low fragmentation heap. If that was the case, all our shaping efforts were useless, because allocations were handled by a completely different heap implementation. If you read online about memory exploitation on Windows, they always say: make sure you're on the NT heap, because once you're on the low fragmentation heap, exploitation is near impossible due to the randomization. But we couldn't make it work on the NT heap. So we thought: okay, there are people saying that, but we're going to try it anyway.
So what we did was make sure that the buffer size we could overflow was on the low fragmentation heap, and write our exploit to work around the randomness of the LFH. We did that by making sure we were the only thing making allocations in a certain bucket. And by just running our exploit multiple times, it could happen that in one of those attempts, the randomization didn't matter anymore, because the allocations ended up right next to each other. So: bucket one already has some random objects in it that we don't know about, because the application is in an unknown state before we start our exploit. We sent a lot of messages of a particular size to make sure that all buckets were full. Then we deallocated most of the last messages we sent, which made bucket three nearly empty. The only thing we didn't delete was the very last message we sent. We made sure that last message was in bucket three — or however many buckets there were on the system at that time. But because we didn't delete that last message, bucket three was still the active bucket, so all new allocations would now be handled by bucket three. And because the bucket size we had chosen was relatively quiet, we could run the exploit relatively safely multiple times to try to overwrite the right object. Thijs, can you tell us a little more about the information leak? Yes. So the next step of the exploit is the information leak. The reason we need this is that there are all these annoying protection mechanisms in operating systems nowadays, like ASLR — address space layout randomization is what it stands for. What it does is place everything in memory at random locations every time you launch the application. So we needed some way to know where certain things are in memory, like certain libraries, in order to use the code from those libraries.
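The grooming strategy just described — fill every bucket, then free almost everything in the last one while keeping a single allocation to hold it active — can be sketched as a toy simulation. Again, this is a conceptual model of the strategy, not the real LFH; bucket count, capacity, and all names are invented for illustration:

```c
#include <assert.h>
#include <stdlib.h>

#define NBUCKETS 3
#define CAP 16

/* occupancy[b][s] != 0 means slot s of bucket b is allocated. */
static int occupancy[NBUCKETS][CAP];
static int active = 0;

static int bucket_full(int b) {
    for (int s = 0; s < CAP; s++)
        if (!occupancy[b][s]) return 0;
    return 1;
}

/* Allocate: pick a random free slot in the active bucket, advancing
 * to the next bucket only when the active one is completely full. */
static int lfh_alloc(int *slot_out) {
    while (active < NBUCKETS - 1 && bucket_full(active))
        active++;
    int s;
    do { s = rand() % CAP; } while (occupancy[active][s]);
    occupancy[active][s] = 1;
    *slot_out = s;
    return active;
}

static void lfh_free(int b, int s) { occupancy[b][s] = 0; }

/* Grooming: fill every bucket, then free everything in the last
 * bucket except one slot, leaving it active and nearly empty. */
static int groom(void) {
    int b = 0, s = 0;
    while (!(b == NBUCKETS - 1 && bucket_full(b)))
        b = lfh_alloc(&s);
    int keep = s;                      /* the one message we keep */
    for (int i = 0; i < CAP; i++)
        if (i != keep) lfh_free(NBUCKETS - 1, i);
    return keep;
}
```

After grooming, every new allocation of this size class lands in the nearly empty last bucket, which is why the retries could be run "relatively safely": the overflow buffer and the target object are among the few live objects there.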
But it was randomized, so we needed to find some way to leak that information back to us. This was quite a challenge, because the buffer overflow we had was something that can write to memory, and we somehow had to turn something that can write into something that can read — like trying to read a book using a pen. So we had to combine it with another low-impact vulnerability that we found. At some point, we had decided that we wanted to leak an address of libcrypto, one of the libraries included in OpenSSL; we'll talk more later about why we specifically wanted that library. The objects we wanted to use had to fit in the bucket, otherwise our vulnerability wouldn't affect them. But we also needed a way to get that data sent back to us — we needed to know what the memory contents were. And XMPP means it's XML that has to go through the server: if you send something that's not valid XML, the server will close the connection, and it will not reach us. So sending the information leak over XMPP wasn't really an option. Instead, we started looking for a way to make the client perform an HTTPS request directly to us, instead of to a Zoom server. Because this would be a channel where the connection is direct rather than going through the server, giving us much more control and making it easier to send weird data through it. And eventually, we found a weird way to do that. Now, the Zoom chat functionality has what's called a marketplace, which is basically bots that you can add to your account and then interact with through the Zoom client. Whenever you add a new bot to your account — you do that on a website — Zoom sends a message and the client starts downloading the image of that bot to display it. And what we noticed is that the server sends that message, but we can also send that message. The client then takes the domain marketplacecontent.zoom.us, and whatever path was sent in the message for the image is put after it.
But as you can see, what the server sends starts with a slash. What if we started it with a dot? Well, then the client would start downloading that image from marketplacecontent.zoom.us.computest.nl, which is now suddenly a domain under our control. So it would download that image from us instead of from Zoom's marketplace server. By sending that message, we could make the client initiate an HTTPS connection to us. Then, for the object we wanted to use for our leak, we found the TLS1-PRF pkey context object in OpenSSL. This is an object that's created during the TLS handshake — to derive some key material, I think. One thing about it that's a bit annoying for us is that it's created, filled with data, and then, once the handshake is done, deleted again — so it's deallocated immediately after use. That was a bit tricky. But it was a good target for us, because at the first location in that object is a pointer we could use for our information leak. If we have that pointer, we can compute where every part of libcrypto is, and then we know where everything in that library is in memory. And importantly, that first pointer was not erased when the object was deallocated. Freeing the object clears some of its memory, but not this specific pointer. This is the definition of the object — not really easy to read here, but in the first slot there's a pointer into libcrypto, and because that one was not erased, we could leak it back to us. So we formulated the following plan. We used the vulnerability we had to set up two connections to an HTTPS server of ours. One of these connections we close immediately after the handshake, to make sure these objects are somewhere in memory, even though they're deallocated. On the second connection, we request a path of 1,025 bytes, a size that fits within the same bucket. Then we use the buffer overflow to overwrite everything up to and including the first null byte after that string. And then we complete the connection.
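The flawed URL construction described here can be sketched as a one-liner. This is our own reconstruction of the pattern, not Zoom's actual code — the function name and exact format are assumptions; only the host name and the dot trick come from the talk:

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Sketch of the URL construction: a fixed host is prefixed, and the
 * attacker-influenced path is appended verbatim, with no check that
 * it begins with '/'. */
static void build_image_url(const char *path, char *out, size_t out_len) {
    snprintf(out, out_len, "https://marketplacecontent.zoom.us%s", path);
}
```

With a path that starts with `.` instead of `/`, the concatenation silently changes the host the client connects to, which is what turned this into an attacker-controlled HTTPS channel.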
Then, when the client sends that path to our server, it keeps reading past the end of the URL, and it includes that memory in the request it sends to us. I'll try to make that a bit more visual. This was the URL we could make it perform a request to. The way C stores strings in memory is that it always puts a 0 byte at the end; in this case, there would be a 0 there to denote the end of the string. And every function that works on strings just knows it needs to keep reading up to that 0 byte, which means you don't need to keep track of the lengths of your strings. So if you completed this request normally, it would read along, see that 0, and stop. But if we trigger our vulnerability at this moment, before the connection proceeds, then we start overwriting that data. And if we overwrite all the way up to and including that 0, but nothing afterwards, and the connection is then continued, it keeps reading past the end of the buffer. So the data that happens to be in memory there is now also sent back to us. And this way, we read the memory we needed to compute where the whole of libcrypto is in memory. Now, this chain was very unreliable. I tried it, and it worked once, and then it didn't work again for at least a day — and then it worked twice in the same day. And we had to get it to work within five minutes, so we really had to improve it. One of the things you can do is set up multiple connections to both servers, to increase the chances of the objects being in memory at the right places. You can renegotiate the connection — a TLS feature that lets you do a handshake again on an existing connection — which is convenient because renegotiating was faster than opening a new connection. And you can trigger the buffer overflow multiple times, to increase the chance of randomly overwriting the right thing.
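The null-terminator trick can be shown with a toy model: a request path and a "secret" (the leaked pointer bytes) sitting next to each other in memory, and a sender that transmits `strlen` bytes. The layout and names here are invented for illustration:

```c
#include <assert.h>
#include <string.h>

/* mem[0..7] holds the request path, mem[8..15] the adjacent "secret"
 * memory. A strlen-based sender stops at the first 0 byte — unless
 * the overflow has overwritten the terminator, in which case it
 * reads on into the adjacent data and sends it to the attacker. */

/* How many bytes a null-terminated send would transmit. */
static size_t bytes_sent(const char *path) {
    return strlen(path);
}
```

Overwriting everything up to and including the terminator, without touching the bytes after it, is exactly what makes the client "keep reading" into adjacent heap memory when it builds the request.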
And eventually, we got it working reliably — well, good enough for the competition. We really spent a lot of time getting this to work as reliably as we could. It still wasn't perfect, but it was good enough to actually make it work. The next step of the exploit is hijacking the control flow, which Daan will talk about. Yeah. So we can somewhat predict the state of the heap. It's not perfect, because it's randomized, but we can at least somewhat predict that we won't overflow into anything important. And we leaked some information about where libcrypto is in memory. So now we're up to the third step, and that is actually hijacking the control flow: making the application execute code we want it to execute, rather than just continuing its normal execution. So how do you do this? Well, we can't modify any of the application code, because that's in a whole different memory region. We can only change data on the heap. So the goal at this stage is to overwrite something that determines which function will be called. We could make it call the function foo, for example, rather than the function bar it's supposed to be calling. There are a couple of options for this, but the most likely one is to find a function pointer on the heap that we can overwrite. Sometimes there is a pointer to a function on the heap, and if we can overwrite that pointer and the function pointer is then used, we determine which function gets called. The other option is a vtable pointer, whenever a vtable is being used. A vtable is something specific to C++, but since Zoom is written in C++, overwriting a vtable pointer is a valid option as well. The only constraint is that the allocation must be within the same bucket, of course, because otherwise we couldn't overwrite it. So let's look a little more closely at what a vtable actually is.
It's used in C++ for virtual functions, and a vtable is basically a list of function pointers. Every object (of a class with virtual functions) in C++ starts in memory with a pointer to a vtable. Let me quickly demonstrate this for you. Suppose you have a class Dog in C++ and you instantiate an object of it. The way this is actually stored in memory is that at the first offset, you have a pointer to the vtable of the object — just a list of pointers telling it where to find, for example, the functions sound or walk. And right after the vtable pointer come all the instance variables. This is how a C++ object is stored in memory. And this is all on the heap — at least the vtable pointer and the instance variables. So suppose we could overwrite a vtable pointer, and we could find a way to construct our own vtable somewhere in memory — just a list of function pointers. Then we could possibly hijack the control flow: if that pointer gets dereferenced to look up a function pointer, and we can determine which function that is, then we have control over the program counter. It sounds simple, but it was quite difficult. First, we had to find an object of the right size that was in our bucket. Then we needed to overwrite that vtable pointer with a memory address we know about and where we can control the data — which is a whole separate problem. And we had to make sure the vtable was actually used. So the first step was: let's find a vtable pointer we can overwrite that is actually used by the application. We spent quite some time on this, and eventually we found the solution in this sound. Let's hope the sound works. I'm so tired of this sound — we called each other daily, the whole day, just to generate it. Whenever you invite somebody into a meeting — when you call somebody — an object of the right size is created for loading this sound from disk.
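The object layout described above can be modeled in C, which is essentially how it looks to an exploit writer: a vtable pointer in the first slot, instance variables after it, and a "virtual call" that dereferences the vtable. The class and names are the talk's Dog example; the "evil" replacement is our illustrative addition:

```c
#include <assert.h>

struct dog;

/* The vtable: just a list of function pointers. */
struct dog_vtable {
    int (*sound)(struct dog *);
    int (*walk)(struct dog *);
};

/* The object: vtable pointer first, then the instance variables. */
struct dog {
    const struct dog_vtable *vtbl;
    int age;
};

static int dog_sound(struct dog *d) { (void)d; return 1; }
static int evil_sound(struct dog *d) { (void)d; return 0xdead; }

static const struct dog_vtable dog_vtbl  = { dog_sound, dog_sound };
static const struct dog_vtable fake_vtbl = { evil_sound, evil_sound };

/* A "virtual call": dereference the vtable pointer and call a slot.
 * Overwriting d->vtbl with a pointer to attacker data redirects it. */
static int call_sound(struct dog *d) {
    return d->vtbl->sound(d);
}
```

This is why overwriting the single pointer in the object's first slot is enough: every virtual call goes through it, so pointing it at an attacker-built list of function pointers controls the program counter.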
It's just an audio file on disk: the client loads the file, places it in memory in an object with a vtable pointer, plays the sound, and then uses that vtable — the close entry — to determine which function to call to close the sound again. It's only one ring, and it will constantly open the file, close the file, open the file, close the file, et cetera. This was an ideal target for us: it was the right size, it was being used, and we could overwrite its vtable pointer. One problem: we don't know where our data is in memory. It's somewhere on the heap, but we don't know where on the heap. At this point, we only know an address inside libcrypto. So we needed to make sure we had some custom data at a well-known place. And we had a theory about this. Zoom is a 32-bit application, which means the amount of memory Zoom can use on Windows is very limited: only two gigabytes. So we thought: okay, what if we just take a random address in memory — something that most likely doesn't contain anything — and find a way to exhaust all memory? If we can make Zoom allocate some large chunks, and we do that often enough, at some point some data that we control will be at that location, because there is no other place to put it. So we were working on this theory, looking for an object that we can control and that contains our data, so we could test it. One of the things we tried was sending images to the other party — just creating a very large image of 100 megabytes or 20 megabytes and sending it — because Zoom has to show that image to you, so it has to load it into memory, and if we send it multiple times, maybe it gets allocated multiple times in memory as well. That was the theory. Only, Zoom has a protection for this: if you send an image, it will only show it if the image is under, I think, one kilobyte or 100 kilobytes or something.
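The arithmetic behind this heap-spray theory is simple enough to sketch. This is only back-of-the-envelope math on the figures given later in the talk (20 copies of a 25 MB payload in a 2 GiB address space); the function names are ours:

```c
#include <assert.h>

/* Total bytes occupied by the sprayed copies of the payload. */
static unsigned long long sprayed_bytes(unsigned copies,
                                        unsigned long long copy_mb) {
    return copies * copy_mb * 1024ULL * 1024ULL;
}

/* Percentage of a 2 GiB (32-bit, Windows user-mode) address space
 * covered by the spray. */
static unsigned coverage_percent(unsigned long long bytes) {
    const unsigned long long space = 2ULL * 1024 * 1024 * 1024;
    return (unsigned)(bytes * 100 / space);
}
```

With roughly a quarter of the whole address space filled with controlled data, plus everything already in use by the application, a fixed guessed address can be chosen that reliably lands inside sprayed data.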
At least a very small amount — and if the image is larger than this, it will first ask the user whether they want to download it. So that was of no use to us, because it would require user interaction. So for this, we used a third and final vulnerability. Zoom also lets you send GIPHY images to each other. Those are stored on Zoom's servers and are always downloaded and shown, no matter the size, because Zoom assumes a GIPHY contains trusted data. But we found a path traversal vulnerability on the Zoom server. So we could actually upload a very large file and then send that file as a GIPHY, which meant Zoom would download and display it regardless of its size. So we used a GIPHY image that actually contained our vtable — the list of pointers — and we just sent that image multiple times. We created an image of 25 megabytes and sent it 20 times, which means 500 megabytes of the two gigabytes contained our data. This was enough that we could predictably say: this fixed address will always contain our fake vtable. So we would first send this GIPHY link to the other party, then call the other party — invite them to a meeting — and then overwrite the vtable pointer so that it pointed at our fake close function. This is how we actually hijacked the control flow. So now we can control which code the Zoom application runs. We can divert it to different parts of the application, but what we really want is to make it run our own code instead of Zoom's code — arbitrary remote code execution. One thing you might do is construct a ROP chain, but that needs to be on the stack, and we have a buffer overflow on the heap. So first, let's talk a bit about ROP chains. A ROP chain is a way to create a sort of fake stack, so that whenever the application returns, it jumps to another bit of the application.
So in this way, you can basically string together very small bits of code that already exist in the application to make it do what you want. I cannot introduce new code yet, but I can combine different parts of the existing code to do something I want. But we can't do that yet, because we don't control the stack; we only control the heap. That's why we needed to find something known as a stack pivot, which basically means we replace the location of the stack pointer: instead of overwriting the stack, we redefine the stack to be somewhere else in memory. Something else we needed to keep in mind is a relatively new protection mechanism known as Control Flow Guard. This is yet another protection against this type of vulnerability, because it aims to prevent return-oriented programming. Whenever you call a function pointer, the compiler inserts a check that looks at that pointer and determines whether it's a valid start address of a function. The reason that's annoying is that in a ROP chain, you often use very small pieces in the middle of an existing function, and that's no longer possible: you can only start at the start of a function; you cannot jump in halfway. So there were basically two ways we could bypass Control Flow Guard. First, we could call one whole function — and that's basically it — but we really couldn't find anything usable for that. Or we could look for a library that's not compiled with Control Flow Guard. And one thing we noticed is that the libcrypto library we've been talking about for the information leak did not have Control Flow Guard enabled. Apparently, OpenSSL has a weird build system in which Control Flow Guard doesn't work, or something like that. So that means basically two things. Any function pointer called within OpenSSL does not have that check.
That's not what we wanted to use. It also means that the valid start addresses are not known for libcrypto, so we can jump to any location in any function. And that's what we used here. We found a stack pivot gadget — basically, something we could use to redefine the stack to be somewhere else — which meant we could execute more code using a ROP chain. Then we used the ROP chain to make part of the GIPHY image we had sent executable: data that we sent was now also allowed to be executed. We could, of course, only use gadgets from libcrypto. Then we could jump to the code included in the GIPHY, and it would execute the shellcode we uploaded. Yeah, there was also a bit of difficulty with that, because the function we needed wasn't available. But luckily, we had a function we could use to dynamically look up the address of the one we needed. With GetModuleHandle, we could find where kernel32 was in memory, store that, and then get the address of VirtualProtect — the function we could use to make our code executable. Then we could call it to make the GIPHY executable and jump to it to start executing it. Now we have a demonstration of the attack. What you see here on the left is a Windows VM, and it will show that it's running the latest Zoom version at that time. On the right, you see the output of our exploit, and we start by sending all of those messages for the heap grooming. If you look very carefully at the top, you'll see the counter going up. It should go up to about 90, I think. After this, we start deleting some of those messages, but keep the last one to keep that bucket active and to make the room we need for the rest of our exploit. There are a couple of other steps in here that we didn't talk about, which basically put the memory in the optimal state for our exploit. And now it's deleting those messages — freeing those blocks.
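The pivot-then-ROP flow in the last two sections can be modeled very loosely in C. To be clear, this is a conceptual sketch only: real ROP reuses raw instruction sequences ending in `ret`, not C functions, and the gadget names, the context struct, and the dispatcher loop are all our own illustrative inventions:

```c
#include <assert.h>
#include <stddef.h>

struct ctx {
    int prot_flags;   /* stand-in for the memory protection state */
    int jumped;       /* set once "shellcode" has been entered */
};

typedef void (*gadget)(struct ctx *);

/* Stand-in for the VirtualProtect step: mark the sprayed memory
 * executable (0x40 mimics PAGE_EXECUTE_READWRITE). */
static void g_make_executable(struct ctx *c) { c->prot_flags = 0x40; }

/* Stand-in for the final jump into the sprayed shellcode. */
static void g_jump_to_shellcode(struct ctx *c) { c->jumped = 1; }

/* The "stack pivot" hands this fake stack to the dispatcher, which
 * consumes one entry per step, the way `ret` consumes one return
 * address per gadget on a real pivoted stack. */
static void run_chain(const gadget *fake_stack, size_t n, struct ctx *c) {
    for (size_t i = 0; i < n; i++)
        fake_stack[i](c);
}
```

The point of the model is the data flow: the attacker controls an array in heap memory, the pivot makes the machine treat that array as its stack, and execution then proceeds gadget by gadget — first flipping page protections, then jumping into the sprayed payload.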
Now we start the information leak. We make it connect to our HTTP server, and in this case we were pretty lucky. This needs to run in a certain loop, but in this case we already succeeded in the second iteration. Yes, at the bottom there, it now shows that the information leak was obtained, and it has the addresses of libcrypto. Now it's sending those GIFs to load the fake vtable, and now it should send a call request. I think in this case the sound only played twice. We don't have it in the recording, sadly. But we were also very lucky that it worked. Oh, thank you. And as is customary, we start the calculator. Many people wonder why there is suddenly a calculator running, but it's basically to demonstrate that we can now run any application. It could be anything else. We could install some ransomware, whatever we wanted, on this machine. As you can see, we can also use other commands to basically take over that computer. Now, we skipped over a lot of the details. There's quite a lot more work involved here. But we have a more complete write-up on our website, so if you want to read that, you can visit that link. Or if you have any other questions, then please let us know or come visit us at our tent. Brilliant. Awesome. So if there are any questions, please line up behind the two mics here. Do we have any questions from the internet at all? No questions from the internet. Okay, I have a question. How long did it take you to do the whole process, basically? About two months. Two months. Yeah, it took about two weeks to find the vulnerability, and that's basically starting from scratch, because we hadn't used Zoom very much. So we basically had to go through all of the functionality first. And in two weeks we had the vulnerability. And then one and a half months to develop the exploit that we used eventually for the competition. And then we were out of time, so we had to make do with what we had.
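The "fake vtable" mentioned in the demo can be sketched like this: a C++ virtual call reads a pointer to a table of function pointers from the start of the object, so the attacker-controlled data has to mimic that layout. All addresses here are made up for illustration; the real exploit pointed the slots at the stack pivot gadget in libcrypto.

```python
import struct

def p64(value):
    # One 64-bit little-endian value, as laid out in process memory.
    return struct.pack("<Q", value)

# Made-up addresses for illustration only.
heap_addr    = 0x1c0_0000_2000   # where our sprayed data ended up
pivot_gadget = 0x7ff8_1a2b_1234  # stack pivot gadget in libcrypto (no CFG check)

# Fake vtable: every slot points at the pivot gadget, so whichever
# virtual method the victim code happens to call, we win.
fake_vtable = p64(pivot_gadget) * 4

# Fake object: the first 8 bytes are the vtable pointer, pointing back
# into our own data (where the fake vtable was placed).
fake_object = p64(heap_addr) + b"\x00" * 0x38

print(len(fake_vtable), len(fake_object))  # 32 64
```

Filling several slots with the same gadget is a common robustness trick when you are not sure which virtual method gets invoked first.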
The one and a half months was also because we had never done memory corruption on Windows. We've done a lot of it on macOS and Linux, but this was our first time on Windows, so we had to learn all the Windows internals as well. So yeah, that's part of it. Yeah, awesome. So I think we've got one here. Go ahead. Okay, so first of all, congratulations. The work is amazing. Thank you. My question is about the path traversal, actually. How did you know that the image that you uploaded on Zoom and the GIFs are in the same place? It was on the same host name. So, okay. Same domain name. Yeah, it was files.zoom.us. And because we were intercepting all traffic, we could easily see that it was very similar, but it was on a different path. I don't really remember the full path. But by sending something with some dot-dot-slashes, it would think it's a GIF, but then still download it and show it. Okay. And my second question is, how did you know that you would have enough, let's say, data after that null byte from the leak, if you remember? So that the right data would be behind the null byte? Yeah. So basically, because you could hit another null byte just after a little bit of data. So how did you know that would be enough? Yeah, it would only work if they were right next to each other. Yeah. Which is also why it took a long time. So we had to retry quite often. So yeah, it might be that there's something else in memory, and then it doesn't work. But if we're lucky, and basically there need to be three things in memory adjacent to each other. So if we get lucky and that's the case, then it worked and we got the information. But because we did all of the heap grooming, we could safely retry a couple of times until it would work. The heap grooming took anywhere between 30 seconds and five minutes, depending on whether we got lucky early on or had to retry multiple times. Yeah. And a funny question. Hold on, there's a question at the back.
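The null-byte issue from that question can be shown with a toy model: a string-based leak stops at the first zero byte, so it only yields everything you need when the interesting values happen to sit directly adjacent in memory with no zero bytes in between. This is a deliberately simplified simulation, not the real heap layout.

```python
def leak_c_string(memory, offset=0):
    # A strlen/strcpy-style read: returns bytes up to the first NUL byte.
    end = memory.index(b"\x00", offset)
    return memory[offset:end]

# Toy heap layouts. The leak only yields both values when they are
# directly adjacent -- which is why the exploit had to heap-groom and
# then retry until the layout happened to be right.
lucky   = b"VALUE_A!" + b"VALUE_B!" + b"\x00padding"
unlucky = b"VALUE_A!" + b"\x00\x00" + b"VALUE_B!" + b"\x00"

print(leak_c_string(lucky))    # b'VALUE_A!VALUE_B!'
print(leak_c_string(unlucky))  # b'VALUE_A!'
```

In the unlucky case the second value is still in memory, but the leak terminates before reaching it; the fix is not cleverness but grooming plus retries, exactly as described in the answer above.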
Can we ask the question at the back? Yeah. Will this exploit continue to work if you have a different language or a locale, let's say? I don't think that matters. I don't see anything that is locale-specific in the exploit. We haven't tried, but I think it will work. So we're not safe in Europe when we're not using English? No, no. And another question here as well. First of all, this exploit is amazing, so thank you for explaining it to us. Second of all, it took you guys two months to work this all out. For me, I'm a very novice hacker, I would say. Where would you say to start learning about this sort of exploitation? So, just come work as a pentester with us and we'll teach you everything. And in a couple of years, you can do this. I'm working as a pentester elsewhere, so. But there is some documentation online about memory corruption, so like the basics, and some challenge websites that offer a playground to do this kind of stuff. And one thing that I often suggest is to look at IoT devices, because often exploitation is simpler there, because you don't have all of these security mechanisms. Or maybe you just have a couple of them, making it easier to get started with something. Okay, thank you. Thanks for the question. One last thing, anything been zoomed in from the Internet? Okay, great. Thank you very much. So that was the talk. Let's thank our speakers again. Thank you.