So before we start the presentation, I want to set a little bit of context and background here. My colleague Vivek is going to give the main presentation. I am Praveen, the Chief Technology Officer of AirTight Networks. So the context of what we are going to talk about here: the whole community, I think, knows that WEP has been dead for many, many years. But recently, about three months ago, a very interesting, thought-provoking idea was proposed. The idea suggested that if you inject some chaff frames into the air, it is possible to confuse the current generation of WEP cracking tools. So in a sense, a new kind of war has started, in which a chaffer can win over a cracking tool. And the question we want to pose here is: in this war between the chaffer and the cracker, who is eventually going to win? Obviously there will be ups and downs; one time one party will win, another time the other. So we have done a thought experiment along this war game, and we are here to share our findings, the results we have arrived at. It's important to mention that we are presenting results not targeted at any specific implementation of the chaffing approach, but at the general approach of injecting chaff frames to confuse a WEP cracker. We are going to show you all the possible ways chaff can be generated, and for each of those possibilities, how a cracking tool can evolve to beat the chaffer every time. Also, during the presentation we'll be using terms like WEP cloaking, WEP chaffing, and WEP masking interchangeably; since this idea was proposed, these terms have been used interchangeably across the industry as a whole.
So our use of these terms should be treated as a reference to the generic chaffing approach, not to any specific implementation of it which may be out there. With that context and background, I want to invite Vivek, who is the youngest member of our R&D team. I would have been very honored if Deepak Gupta, who is one of the main brains behind what is going to be presented, could be here; unfortunately, he couldn't make it, which is why I am substituting for him. So please welcome Vivek, and we'll get started. Thanks. Thanks, Praveen. I am Vivek, and I work as a security researcher at AirTight Networks. This is my first time both in America and, of course, at DEF CON. We've tried to do a good job here, and I hope you guys are going to enjoy this presentation. Thanks. So with that, let me briefly run you through the talk outline. First, I'll talk about the evolution of WEP cracking and how WEP chaffing actually makes the current generation of WEP cracking tools fail. Then we'll discuss the techniques a chaffer can use to send out his frames, and the various countermeasures which can be used to separate out that chaff and once again allow Aircrack to crack. We'll also discuss some of the implementation problems with WEP chaffing, and then try to conclude what we think about the whole approach. And then the house opens up for Q&A. So, the evolution of WEP cracking. I'm sure almost everybody here has cracked WEP at least once, and I really don't want to run you through a whole refresher. Very simply put, WEP cracking is the process of using the various statistical attacks discovered against WEP over the last seven years or so to derive the WEP key. Let's quickly look at the historical evolution of the various cracks in WEP.
The first crack came as early as 2001: the first cryptographic vulnerability was discovered in the celebrated FMS paper, which was later followed by KoreK in 2004, which decreased the complexity of WEP cracking. And finally, recently, the PTW team extrapolated Andreas Klein's conditions and brought WEP cracking down to the point where you can crack WEP within five minutes. So what started with around 5 million packets in the FMS days has now come down to around 60,000 to 90,000 packets required for WEP cracking. But this hasn't really stopped people from trying to save WEP. People have used band-aid approaches: make the key 128-bit, that will make it tougher to crack; suppress weak IVs; and so on and so forth. What we are here to discuss is the latest such band-aid approach, and that is WEP chaff frame insertion. So the question we are going to try to answer in this presentation is: is the chaffing approach just yet another band-aid, or is it really going to hold water? Basically, it's WEP chaffing against WEP cracking, and we'll see who's going to win. Now, what is WEP chaffing? Very simply, WEP chaffing is a technique of mixing spoofed WEP-encrypted frames, called chaff, into the real traffic, making them as indistinguishable from the real frames as possible. What happens is that a current-generation WEP cracking tool, working on a chaffed trace which contains both the real authorized network's WEP data and the chaff data, gets confused and fails. So what are these chaff packets? Can any WEP-encrypted packet qualify as chaff? The answer is no. Only those WEP-encrypted packets which have a property called a weak IV actually influence the cryptographic process.
And only those can create a bias which forces Aircrack, or any other cracking tool, to converge to the wrong key or to diverge to no solution and fail. So let's quickly look at an example of chaffing. These traces were generated beforehand, and using these offline traces I'll show you how WEP chaffing works and then the countermeasures. First, let's look at how chaffing works. I'm using the latest BackTrack 2.0 Final, and pretty much the default Aircrack version which comes along with it. I'm also going to give Aircrack the hint that the key is a 40-bit key. This trace, called one-key-chaff, contains both WEP-encrypted packets of the authorized network and the chaff packets. So let's see what happens. If you notice, Aircrack actually exits without trying. Let's try to cajole Aircrack into working a little harder by increasing the fudge factor to the maximum of 32. It tries, but once again, within four seconds, it pretty much exits. Going back to the presentation: how did this really work? I can assure you, and we'll see later, that there are enough WEP-encrypted data packets in that chaffed trace to crack the WEP key. So what actually went wrong? The thing one has to realize is that the current generation of WEP cracking tools trust whatever is there over the air. And if you think about it, as a hacking and cracking tool, why should we trust what we see, right? That's where Aircrack and all the other crackers are going wrong, and WEP chaffing exploits exactly that trust: the trust that whatever Aircrack feeds into its statistical processes is genuine. We're going to use Aircrack for the rest of our discussion, pretty much as a benchmark tool, because from what we've observed it's the most reliable as well as the most popular, and it implements all these statistical attacks pretty much exhaustively.
And the good thing is that the creator of Aircrack, Thomas, is also here at DEF CON. Thomas, are you here? You can raise your hand. There he is. So for every WEP key you've cracked, you need to thank him. Moving on, let's try to understand how a WEP chaffing approach can actually fool Aircrack. This is actually a fairly complicated slide of how Aircrack works, but putting it very simply: when you give a trace file to Aircrack, it reads the trace, lists out all the unique IVs along with the first two encrypted bytes of each packet, and creates a list out of them, right? Then it goes on to the iterative approach of cracking WEP; I'll discuss this from the FMS and KoreK perspective. Basically, to crack the B-th byte of the key, we assume that the first B-1 bytes of the key have already been cracked, and of course we start with B equal to zero, that is, absolutely zero knowledge of the real WEP key. From there on, for every unique IV, along with the first two encrypted bytes in our list, we check whether it matches one or more of the KoreK and FMS conditions. If it matches, the match yields a possibility for the current key byte: there are equations which define what that possibility is, and depending on which of these statistical conditions got matched, we award a corresponding vote score. As most of you may know, these statistical conditions have success probabilities of roughly 5 to 13 percent with respect to what the real WEP key byte might be. Once this is done, we take the most probable value for that byte to be the one with the highest vote score. And once we've done that for the first four bytes of, say, a five-byte key, the fifth byte is recovered by brute force.
So by default, the last byte is cracked by brute force, and here something called the fudge factor comes in, which decides which other possibilities to include for every byte we've encountered so far. And finally, in the brute-force step, when you've arrived at a candidate key, it is verified against a list of 32 IVs to see whether it can actually decrypt them or not. So now, let's try to understand the attack points in this logic. The first attack point is the duplicate-IV elimination. The second is playing with the actual statistical calculations. The third is playing around with the fudge factor, and the final one is beating the verification process. So let's look at how this can be done. The very first one: ignoring duplicates, or rather eliminating legitimate IVs. See, the chaffer has the real network's WEP key and can exhaustively enumerate all the weak IVs for that key, right? He then sends out all these weak IVs, with the first two encrypted bytes randomized, over the air, again and again. Aircrack encounters all those weak IVs in one go, and from then on, any weak IV which comes from the legitimate network's traffic is ignored, right? There is this unique-IV check, for those who have looked into the code. So we pretty much end up masking the weak IVs coming in from the authorized network. The second is influencing the voting logic itself, and this is done by concocting a weak IV, along with two encrypted bytes, which matches one or more of the statistical conditions. Once that happens, you are directly influencing the voting logic. After that, beating the fudge factor.
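As a rough illustration of the voting and fudge-factor logic just described, here is a minimal sketch, not Aircrack's actual code, of how candidates for one key byte are selected for the brute-force phase. All vote numbers are invented for illustration:

```python
# Sketch of Aircrack-style candidate selection for one key byte.
# 'votes' maps a candidate byte value to its accumulated FMS/KoreK vote
# score; the numbers below are made up for illustration.
def candidates_for_byte(votes, fudge_factor):
    top = max(votes.values())
    # Every byte whose vote is at least top/fudge_factor is kept as a
    # possibility for the brute-force phase.
    threshold = top / fudge_factor
    return sorted(b for b, v in votes.items() if v >= threshold)

votes = {0x41: 1580, 0x1F: 96, 0x67: 88, 0x0A: 12}
print(candidates_for_byte(votes, 2))    # only the inflated chaff byte survives
print(candidates_for_byte(votes, 32))   # a larger fudge factor admits more bytes
```

Note how this also illustrates the fudge-factor attack point: if the chaff byte 0x41 is inflated to 1580 votes, then even at the maximum fudge factor of 32 the cutoff is 1580/32 ≈ 49, so a legitimate byte sitting at 12 votes is never admitted for brute forcing.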
So the way to beat the fudge factor is to create such a large bias for the first possibility that even if you divide it by the largest fudge factor, which is 32, the quotient is still greater than the legitimate key byte's vote. When this happens, the legitimate key byte, even though it is a possibility, will not be included for brute forcing. And finally, beating the verification step: if even two or more of the 32 IVs used for verification contain chaff, then Aircrack will exit even if it has arrived at the real key, because the real key cannot decrypt the chaff. So, can Aircrack be made smarter, or should we end this talk here? The answer is pretty much yes. See, the fundamental problem here is that Aircrack trusts what it sees. What we are going to do is give Aircrack a sort of microscopic view into what's happening, and we accomplish that using various filters. Using these filters, we'll try to separate the chaff from the grain, or the noise from the real signal, right? The WEP data from the chaff. So now we'll look at the various techniques by which chaff can be produced, and the countermeasures we can use against them to filter the chaff out. Okay, since this is Las Vegas, we decided to play a game of roulette. We're going to start with techniques from the very naive to the most sophisticated, and for every chaffing technique we're going to bet one piece of the emperor's clothing. If we can actually zero in on a filter which filters out that chaff, we remove that piece of clothing from the emperor. So let's see what happens: the chaffer versus the cracker. The very first approach is also the most naive, and that is injecting random frames. What happens here is that the chaffer simply goes ahead and sends out arbitrary WEP data packets. Is this really going to affect Aircrack's performance? Most probably not. Why?
As we've already discussed, only packets which have weak IVs influence the statistical analysis. And if you just generate packets randomly, given that the weak-IV distribution is very sparse and clustered, the probability that you'll actually send out a large number of weak-IV packets is very, very low. Aircrack-ng by default has a lot of noise tolerance; we've tried this technique out in the lab, and you'll see that Aircrack can still crack the key. And off goes the necklace. The next technique is weak-IV frames, but with a fixed size. We've already discussed that weak-IV frames are what influence all these statistical calculations, right? But why fixed size? We'll look at a later technique which also uses this property, called fingerprinting. The reason is that whenever a WIPS deploys any sort of prevention technique, there has to be some fingerprint, and a fixed frame size is a very easy fingerprint. Note that this is still one of the more naive techniques; we'll talk about more sophisticated ones in a little bit. So how can we beat this? Do you need to write your own tool? I think not. If you take the trace and actually observe a lot of weak IVs with a fixed size, just go ahead and write a simple Wireshark filter to weed them out by frame size. Any frame-size filter can eliminate this sort of trivial, elementary chaff. Now let's look at a technique which is slightly more advanced, and that is chaffing using a single key. What happens here is that the chaffer has his own key, for which he finds all possible weak IVs. Once he does that, he sends out all these weak IVs, along with first two encrypted bytes crafted to match one or more of the statistical conditions.
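As a toy illustration of such a frame-size filter (the 68-byte chaff size and packet contents are invented; in Wireshark this would just be a display filter like `!(frame.len == 68)`):

```python
# Toy frame-size filter: drop every packet whose length equals the
# suspected fixed chaff size. Packet bytes are invented for illustration.
def drop_fixed_size(packets, chaff_len):
    return [p for p in packets if len(p) != chaff_len]

trace = [b"\x08\x42" + b"A" * 66,     # 68 bytes: suspected chaff
         b"\x08\x42" + b"B" * 120,    # normal-looking data frame
         b"\x08\x42" + b"C" * 66]     # 68 bytes: suspected chaff
clean = drop_fixed_size(trace, 68)
print(len(clean))  # 1
```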
So Aircrack, and pretty much every other tool which currently exists, will blindly plug these weak IVs into their statistical calculations and either converge to the wrong key or diverge without a solution and just exit, as in the case we saw in our demo, right? Which of the two happens pretty much depends on the verification process. So how can we beat a chaffer who is using a single cryptographic WEP key to send out chaff? The beauty is that Aircrack can almost do it by default; it requires only a very trivial modification, which we call Aircrack visual inspection. What is visual inspection? Visual inspection is the process through which we weed out the anomalies. And when you say anomaly, you have to be very sure what the normal behavior is. What we observed over and over again, across hundreds of experiments, is that for a trace of around 300,000 weak IVs, the maximum vote for any byte rarely goes above 250. It's an empirical observation: it's almost always less than 250. So if you see a huge anomaly in the voting process, you can weed out some of those possibilities, and that is what the visual inspection technique is all about. See, because the chaffer has to force the cracker to converge to the wrong key as soon as possible, right, he sends out these weak-IV packets, and the distribution of votes caused by them is very visible when you look at Aircrack. The problem right now is that Aircrack just goes ahead and proceeds. What if we could Matrix-style freeze Aircrack while it is cracking, at every key byte, look at the voting distribution, and weed out any anomaly we identify there? So let's look at a demo. I'm going to use the same trace which the default Aircrack failed to crack.
So I modified Aircrack trivially to stop at every key byte, and I think this is included on the CD as well. Now we are about to crack the zeroth byte of the key, and if you noticed, Aircrack has frozen there, and you can see the voting distribution. If you look at it, you'll find that the highest possibility, which is 41, has a huge vote of 1580. This happened because the chaffer sent out weak IVs destined to make that byte vote for 41; the attacker's key was actually 41 42 43 44 45, right? You'll notice these patterns over and over again. So let's just go ahead with the cracking process and see how Aircrack fails. Aircrack asks me: should I go ahead with that value, or some other value? I'll say, let's continue with 41. And if you notice, for the second byte the highest vote now comes for 42, which is the second byte of the attacker's key. By carrying this whole calculation through, you'll actually end up either converging to the attacker's key or diverging to no solution. Why did it fail here? Because when Aircrack tried to verify against the chaffed trace, the IVs it picked up for verification included legitimate packets, more than two of them, which the attacker's key could not decrypt. So now let's look at how we can use this visual inspection technique to weed out chaff. Here, 41: absolute anomaly, absolute garbage. So let's go with the next value, which is 67. Once again, 1C: absolute anomaly. The reason 1C appears, and not 42, is that we are now simulating the KSA one step further, assuming the value 41 is the zeroth byte of the key. Once again, 28: absolute anomaly. Let's use the next value, 30. And if you notice, Aircrack is now able to crack that same trace and get you the key, right?
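The manual weeding in this demo can be sketched as a tiny vote-ceiling filter. The ceiling of 250 comes from the empirical observation above for a ~300,000-weak-IV trace; the vote table is loosely modeled on the demo, not taken from a real run:

```python
# Sketch of the "visual inspection" idea: discard vote counts wildly
# above the empirical ceiling (~250 votes for a ~300k-weak-IV trace),
# then pick the highest-voted byte among what remains.
def weed_anomalies(votes, ceiling=250):
    plausible = {b: v for b, v in votes.items() if v <= ceiling}
    # The remaining highest-voted byte is our candidate for this key byte.
    return max(plausible, key=plausible.get)

# 0x41 carries a chaff-inflated vote of 1580, as in the demo, while the
# real key byte 0x30 sits at a normal-looking level.
votes = {0x41: 1580, 0x67: 900, 0x28: 600, 0x30: 97, 0x55: 60}
print(hex(weed_anomalies(votes)))  # 0x30
```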
So against a chaffer using a single key, all he can really do is skew the bias to the point where Aircrack exits, but trivial modifications can defeat that. I'm doing the visual inspection by hand, breakpointing at every level, just to explain the concept, but this could be built in. Note that this is different from the fudge factor: the fudge factor divides 1580, the highest vote for 41, by 32, which still does not include 69 or some of these lower values, which is why Aircrack exits. So what are the strengths and weaknesses of this technique? It can crack the key in many cases, especially in the single-key and multiple-chaff-key cases; we'll discuss the multiple-key case on the next slide. The weakness is that it does not work for random keys, and we'll look at that as well. The point is, we are going to present a wide gamut of techniques, and all these techniques overlap: visual inspection can work for the next case as well as this one, and so on and so forth. In the end, we'll assimilate this together and show you, in the form of a chart, how all these techniques overlap. So the next type of chaff uses multiple keys. Now the chaffer, let's say, has four, five, six keys, generates all the weak IVs for each, and sends them out, right? So how can we beat this kind of chaffing? Using sequence number analysis. How does this work? The sequence number, as all of us know, is part of the MAC header and is present in all management and data packets. The point to note is that for every source there is a distinct sequence-number pattern you can see, and pretty much all the MAC-spoofing detection algorithms currently used in WIPS products rely on this form of detection to find an attacker spoofing a MAC. Using this, we can pretty much filter apart the chaff and the real data traffic.
Just a few hours before we submitted this presentation, we also came across Joshua Wright's blog, in which he advocates this very technique. I'm sure most of you know about Joshua; he's a well-known security researcher in the wireless community. So how does the MAC-spoofing detection algorithm using sequence numbers work? The pseudocode is pretty simple. Given a trace, for a device in the trace, find the first sequence number; from there on, for every sequence number you encounter for the same device, take the difference between the current and the previous. If it is less than a threshold, accept the packet; if not, discard it. The threshold pretty much depends on how lossy your trace file is. I'll actually demonstrate this technique together with the next one. So how can chaff insertion be made more sophisticated? By using random keys. Now the chaffer is taking things to a totally different level: he randomly generates a key, finds a couple of weak IVs for it, injects those packets, then generates another random key and continues the process. Is it possible to beat the random-key case? The technique we propose as a filter is initialization vector analysis. Now, the beautiful fact about the IV is that it also follows a trend, just like the sequence number. Most of you must have the objection that IVs might also be random; they need not be sequential. But in the context of WEP chaffing, because it is intended to protect legacy clients and legacy devices, you'll note that all these devices invariably have incrementally increasing IVs. That is why we can use an IV-based filter against WEP chaffing. Joshua Wright also advocates this technique on his blog, and the pseudocode is absolutely simple.
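The sequence-number pseudocode above, combined with the IV trend check, might look like the following sketch. The gap thresholds and the (source, sequence number, IV) tuple layout are assumptions for illustration, not the actual utility shown in the demo:

```python
# Sketch of a combined sequence-number / IV filter. For each source we
# track the last accepted (sequence number, IV); a packet is kept only
# if the sequence number advances by less than a small gap (modulo the
# 12-bit 802.11 sequence space) and the IV increases by a small amount.
# The gaps absorb packet loss in the trace; values here are assumed.
def seq_iv_filter(packets, seq_gap=16, iv_gap=64):
    last = {}                      # src -> (seq, iv)
    kept = []
    for src, seq, iv in packets:
        if src not in last:
            last[src] = (seq, iv)
            kept.append((src, seq, iv))
            continue
        pseq, piv = last[src]
        # modulo-4096 sequence-number delta, plus an increasing-IV check
        if (seq - pseq) % 4096 < seq_gap and 0 < iv - piv < iv_gap:
            last[src] = (seq, iv)
            kept.append((src, seq, iv))
    return kept

# A legitimate device counts up steadily; chaff spoofing the same MAC
# jumps around in both sequence number and IV, so it gets dropped.
trace = [("ap", 100, 5000), ("ap", 101, 5001),
         ("ap", 3000, 77),          # chaff: wild sequence number and IV
         ("ap", 102, 5003)]
print(seq_iv_filter(trace))
```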
The only point to remember is that the IV is present only in data packets, not in management packets. So let's look at a demonstration of this technique. To avoid meddling with the code of Aircrack or any other cracker, I've written a separate utility which runs through a trace and filters it based on sequence numbers and IVs; the two together are very robust, and that's what we're going to use. But before that, let's quickly look at how the chaffed trace looks for the four-key case. Let's once again run the default Aircrack. Actually, we need to run visual inspection, so that we can freeze it and look at the distribution. This trace contains all the legitimate network's data traffic along with chaff which was generated using four keys; it's a slightly larger trace, right? Now look at the voting pattern. If you notice, 63, 31, 41 and 45, the first four possibilities, have a huge anomaly in the votes they've received, right? So let's exit here, and now let's use our sequence number filter to go ahead and filter this trace. We'll write the trace after filtering into filter.cap. Actually, the -n 64 confused it; that was Aircrack's command line argument, not one for the sequence-number/IV filter, which is a separate utility. So the filters are being initialized, and there is a lot of debug output which you can take a look at later. Now what we have in the same directory is a filter.cap trace which was generated from the four-key chaffed trace. Just to play it even, let's fire up the default Aircrack-ng which comes with BackTrack, once again, on this new filtered trace. Now, if you notice, all the anomaly which was caused by those huge votes pretty much doesn't appear on your screen, right? And Aircrack goes ahead with its cracking process and cracks the WEP key, right? Same trace: we used the sequence number and IV filter, and after that Aircrack is able to crack it.
So this shows that sequence numbers and IVs together are pretty robust. Now let's look at the random-key case, right? Let's actually run Aircrack visual inspection and see what the random-key-generated chaff looks like. This trace contains both the real network's data and chaff which was generated using random keys, and if you notice, nothing is predictable, and there are also a lot of negative values coming in. Has anybody ever noticed negative values while cracking WEP? The reason this happens in the voting is that one of the KoreK conditions is an error-correcting condition which carries a negative vote. If you generate things randomly, you do not have control over what's going to happen. But the beauty is, if you notice, 67, which is 'g' and the very first byte of the WEP key, does not appear among any of the possibilities, right? So the chaffer does not have control over what actually comes up in the final distribution, but at the very least he ends up masking the real network's key byte by using random keys. Once again, we've conducted a lot of experiments, and the sequence number and IV filter also protects against this sort of chaffing. So let's store the filtered trace in filter2.cap, and once again, let's first run the default version of Aircrack on the random-key chaff, just to show once more that this is the chaff which Aircrack could not crack by default. I think it runs for around 15 to 20 seconds, and if you notice, the voting pattern has negative values, then suddenly very large values; it's pretty much unpredictable, because whatever you assume at the first step to be the zeroth byte of the key decides how you simulate the rest of the KSA. So Aircrack exits. Now let's give the same default Aircrack the filtered trace: key pretty much found, right?
So with a lot of experimentation, where we basically had two groups, one playing the chaffer and one the cracker, we saw that sequence numbers and IVs together are very, very robust. The strength is that this works against almost all the different kinds of chaff: single key, multiple keys, and random keys. At the same time, because it is a passive, offline technique, it is preferable, and an independent chaff separator, like the one we just demonstrated, can easily be built. The weakness, of course, is that IVs may not be sequential for all devices; they can be random. But the point remains that all the legacy devices which WEP chaffing is recommended to protect do have sequential IVs. Moving on: if you think about it, the ways in which weak IVs can be generated are with a single, multiple, or random key, right? And because we were beating those using sequence number and IV analysis, we decided to take it a step further. So now we'll talk about an approach which currently is not implemented anywhere, and most likely never will be, but which is important to discuss from a completeness perspective. This approach is to somehow make the chaff frames indistinguishable from the AP's, at least as far as the sequence number and IV are concerned. The only way you can do this is to have WEP chaffing as a feature in the AP itself, right? Only then can you avoid collisions while spoofing. So if this were somehow achieved, how could we beat it? The answer lies in a filter called active frame replay. Now, what is this? The problem with WEP is that it's not just the cryptographic vulnerabilities which caused all those cracks; there are other things too, such as WEP being vulnerable to replay attacks. So you bandage one crack; we can pretty much poke a hole through another one. The basic idea is simple: WEP has no replay protection.
Second, the 802.11 MAC header can be modified trivially and the frame actually retransmitted over the air. So what we'll do in this technique is take every packet in our chaffed trace, change the destination address to the broadcast address or a chosen multicast address, and send it back to the AP, right? Now, if the data packet we send belongs to the authorized network, the AP will replay it: first of all, the packet decrypts correctly, and then, because it decrypted, the AP looks at the destination and forwards it there. Since we are using a broadcast or multicast destination, it has to be sent over the air once again. Now say the AP encounters a chaff data packet which we injected using the same technique: the AP will fail to decrypt it and simply discard it. So of all the packets we replay, only the real network's data packets come back. The big question: how can we identify the packets we just replayed, given that the AP actually re-encrypts them with a different IV? The answer is simple. We can randomly generate a multicast address, use that as the destination, keep it in memory long enough, say two or three milliseconds, and watch for a data packet from the AP with the same destination multicast address. We can also use the frame size as another parameter. What we've noticed is that using both of these together is very robust and works very well, and 100% chaff separation is possible. The reason is very simple: this is how the protocol works, and you cannot change that. There are two reasons why I cannot do an active replay live here. One is that it would take around 30 to 40 minutes, and the second is that, this being DEF CON, I'm sure somebody is going to DoS me.
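The matching step of this replay filter can be simulated in a few lines. This does no actual injection: the packet IDs, sizes, and the random-multicast tagging scheme are illustrative assumptions standing in for real frames:

```python
# Simulation of the replay-matching step: we remember each (random
# multicast destination, frame size) pair we injected, and any frame the
# AP later transmits matching a remembered pair must be a re-encrypted
# copy of a legitimate packet, since the AP silently discards chaff it
# cannot decrypt. Packet IDs and sizes are invented for illustration.
import random

def random_multicast():
    # 33:33:... is the IPv6-multicast MAC prefix; any multicast
    # address would do for tagging.
    return "33:33:" + ":".join("%02x" % random.randrange(256) for _ in range(4))

sent = {}                              # (dst, size) -> original packet id
for pkt_id, size in [("legit-1", 120), ("chaff-1", 120), ("legit-2", 300)]:
    dst = random_multicast()
    sent[(dst, size)] = pkt_id

# The AP replays only what it could decrypt, i.e. the legitimate frames;
# here we model that by keeping only the "legit" entries.
replayed = [k for k, v in sent.items() if v.startswith("legit")]
recovered = sorted(sent[k] for k in replayed)
print(recovered)  # packets confirmed to belong to the real network
```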
So the technique works, and I think the idea has already been used in the tool called chopchop, which KoreK released. What we notice is that for any trace on which you've used the active replay technique, what you get back is purely the data traffic that originated from the authorized network. There's also a nice bonus to this technique: if you send a good data packet to the AP, it gives you back a new data packet with a new IV in it. Think about it, right? Send a good data packet, get back another data packet as a bonus. This technique works for any kind of chaff, and 100% chaff separation is possible, because we are totally oblivious to, and really don't care about, what the sequence number or the initialization vector of the transmitting device was. And this can be done in real time as well. A lot of you must have already used tools like pcap2air; that can be trivially modified to support this. The weakness, of course, is that this is not a passive approach, so you have to be quite close to the network to be able to replay these packets. The second thing: if the authentication on the AP is open, you can spoof an association and an authentication. If shared key authentication is in place, and you're unlucky enough not to have a shared key authentication transaction in your trace file, you'll have to wait for a client to associate. So now let's move on to the final technique, and if you notice, the emperor is looking very worried. The final sophistication could be generating chaff using some super-secret magic potion: something nobody can even think of, totally unrealistic, but still some other way in which chaff could be generated. Even for that, the replay technique will work, because that's how the protocol works.
Another important point to note here is fingerprinting. Why is fingerprinting so important? The implementation of chaffing dictates that you have to put an identifiable, recognizable fingerprint in the packets you send. Just think about it: there could be two WIPS deployments with overlapping coverage, and if they cannot identify the chaff packets they themselves send, they're going to chaff each other's packets. So a fingerprint is absolutely mandatory for this technique. How can we find a usable fingerprint? This is where it gets implementation specific. We can check the packet header for various abnormalities — the duration field, or some other field carrying an unusual value. The packets could be a fixed length; something could be prepended or appended to the packet. There are a lot of possibilities. The point to understand is that a fingerprint will be there, and it's very easy to recognize one. At most it could take a couple of hours, maybe a day or a week, but when that happens, somebody's going to release the fingerprint, and everyone who collected a trace from such a chaffing deployment can go ahead and decrypt the entire trace by weeding out the packets which exhibit that fingerprint. So fingerprinting is a really powerful technique. Okay, so we've pretty much covered the whole nine yards, from the very naive to the most sophisticated. So who'd like to see that? I'm sure not many. Okay, so now let's talk about the overlap between the various countermeasures. The point to understand here is that when I say sequence number and IV, that does not mean those only apply against the technique I used them as a filter for. There is a lot of overlap, and if you notice, for every possible way in which chaff can be generated, there are at least two countermeasures which are real and feasible — even when the chaffing technique itself is absolutely impractical. So the odds are stacked in the cracker's favor at this moment.
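As a sketch of the weeding step, assuming the frames have already been parsed into field dictionaries (the field names and the example fingerprint values are hypothetical — a real fingerprint depends on the specific chaffing implementation):

```python
def weed_chaff(frames, fingerprint):
    """Drop frames exhibiting a known chaffer fingerprint.

    `frames` are dicts of already-parsed header fields; `fingerprint`
    maps field name -> telltale value (e.g. an abnormal duration value,
    or a fixed frame length). Both representations are illustrative.
    """
    def matches(frame):
        # A frame is chaff only if it shows every fingerprint field.
        return all(frame.get(field) == value
                   for field, value in fingerprint.items())
    return [f for f in frames if not matches(f)]
```

Once a fingerprint leaks, running every previously captured trace through a filter like this restores an ordinary crackable WEP capture.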
So what are the other implementation problems with WEP chaffing? There are a lot of them. The first thing to understand is that WEP cracking is absolutely passive, so the chaffer has to run 24/7, 365 days a year. Even if it stops for five minutes, that's enough to crack the WEP key — the PTW attack demonstrates exactly that. Also, chaffing has to be done exhaustively for all devices across all channels, clients and APs included. Even if you miss one single client, the attacker can simply use that client's traffic to crack the WEP key. He might take slightly longer, but the point remains: he's going to crack it. At the same time, if you want truly reliable confusion, you need dedicated devices on all channels that do nothing but chaffing. There are other problems as well. From a game theory perspective, the chaffer has to win every time, while the attacker only has to win once. Why is that? First of all, the chaffer has no knowledge of when a trace is being collected, so he has to chaff over and over again. The attacker just has to collect a trace, and if he succeeds in cracking the WEP key from that one trace, from there on he can decrypt all the data packets and separate the chaff, because he has the legitimate key. How can he do that with the legitimate key? Decrypt each packet and check the ICV: if it matches, it's a real data packet; if not, it's chaff, and you weed it out. And once the attacker wins, the chaffing system is going to be absolutely oblivious to the fact that someone cracked the key, because the attack is passive. The chaffer just keeps on running, never understanding what has happened. So the odds are stacked heavily against the chaffer, and in the attacker's favor.
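The key-in-hand separation step is easy to sketch. WEP seeds RC4 with IV || secret key and appends a little-endian CRC-32 ICV to the plaintext, so a minimal check looks like the following (the textbook RC4 and the function names are ours; real frames carry additional 802.11 header fields omitted here):

```python
import zlib

def rc4(key, data):
    # Textbook RC4: key scheduling (KSA) followed by keystream
    # generation (PRGA), XORed into the data.
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    out, i, j = bytearray(), 0, 0
    for b in data:
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(b ^ S[(S[i] + S[j]) % 256])
    return bytes(out)

def is_real_frame(iv, body, wep_key):
    """Decrypt with the recovered key and verify the CRC-32 ICV.

    WEP's per-packet RC4 key is IV || secret key; the last 4 bytes of
    the decrypted body are the ICV over the preceding plaintext.
    Chaff, not built with the real key, fails this check.
    """
    plain = rc4(iv + wep_key, body)
    data, icv = plain[:-4], plain[-4:]
    return zlib.crc32(data).to_bytes(4, 'little') == icv
```

This is why a single successful crack is terminal for the chaffer: every frame in every past or future capture can be sorted into real or chaff with one decryption each.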
Any prevention technique deployed against an attack should be such that the odds favor the prevention technique — there should be a higher probability that the defense wins, not the attacker trying to penetrate the network. So even from a game theory perspective, WEP chaffing is at a great disadvantage. Also, let's talk about increasing sophistication. Say you have this technology deployed somewhere and somebody manages to crack it, using some of the techniques we've shown or maybe some other technique. From there on, the attacker can decrypt all his trace files, and until the chaffing vendor releases a new patch, all those networks are absolutely unprotected. Increasing the sophistication of the attack is very simple — the attacker can go offline and do everything there — but increasing the sophistication from the chaffer's perspective is very, very tough. So before we look at the final verdict on WEP chaffing — actually, let's go straight to the final verdict. The point is that chaffing is yet another band-aid approach. WEP has a lot of cracks, we've tried to bandage some of them, and here is a new technique which, unfortunately, also doesn't hold water. Even if chaff frames were made indistinguishable by an oracle — that's the magic-potion case we talked about — we could still crack WEP. WEP has so many other vulnerabilities that you can poke here and there and get the job done. So the final verdict on WEP chaffing: it was indeed too good to be true. Now think about it: does WEP chaffing actually target the cryptographic problems in WEP's use of RC4? Is it a patch for those? Absolutely not. And that is where the real problem is: the WEP key gets cracked because there is a problem with the way the cryptography is implemented.
All WEP chaffing is trying to do is obscure those problems — obscure those weak IVs — to make it very tough for a cracker to recognize them. So it is just another attempt at security through obscurity, and as we all know, security through obscurity is a really bad idea. The lesson of this whole demonstration is very simple: WEP is broken. WEP was broken back in 2001, when the whole thing started, and by 2007 it falls in under five minutes. Everybody knows that, and people want to migrate to WPA. WEP is broken and it will remain broken, period. If you really want to protect your networks, go ahead and migrate to WPA or WPA2. So we decided to spice this up with a little challenge of our own as well. If somebody believes they have a WEP chaffing implementation which works differently, we'll provide a very simple demo setup: an access point along with two clients. They can bring their chaffer and try to protect the network we deploy, and we'll crack the key within 72 hours. Well, that's it. I think we have some time for Q&A. Thank you. I'll also be in the Q&A room after this. Yeah — I think vendors are in the process of implementing chaffing; that's what the news says, according to Network World. But Praveen, would you like to take that? Actually, we are not here to comment on any specific vendor implementation. We are just here to say that this approach doesn't work, and you can do your own investigation and find that out. We don't want to target any specific vendor; it's just a logical argument that this approach is not going to work. Yes. So, for everybody, the point he's making is that we could also see a difference in the RSSI level between the real client and the chaffer. It's actually a very good point, and if you read Joshua's blog, he's made that point as well.
The set of filters we can deploy — what we showed just now — is not exhaustive. RSSI is one very good filter you can use, because the received power level from the client is definitely going to differ from that of the chaffer, since they are at different locations. And the attacker can always change his own location from time to time. Good question, thanks. Actually, just one comment I'll make here: the reason we didn't include that filter is that it is not absolutely deterministic — there are cases where it fails — but it's certainly one more direction in which you can start filtering the packets. Right, agreed. So, to summarize the point he's making here: suppose forwarding of broadcast and multicast frames back to the clients is prevented. Very simply put, in the PTW paper, the way he talks about generating that initial ARP request has an interesting property. Say there is a legitimate client communicating, and you send a de-auth to that client. The client is going to authenticate and associate again, and after that it sends out a couple of ARP requests — one of them is definitely for the default gateway. When that ARP request goes through, the reply that comes from the default gateway is unicast to the client. So take the very first ten packets the client sent after the de-auth and just replay them over and over again. You'll see a lot of unicast replies coming in from the default gateway, and because the replies are unicast, any multicast- or broadcast-based filtering cannot work against them. We've actually tried this out in the lab. That's an interesting point, though. Right? Okay. Yeah — initialization vector.
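The RSSI idea mentioned above can be sketched as a crude one-dimensional clustering. The 10 dBm gap, the (rssi, frame) tuples, and the function name are all assumptions, and a real capture would need smoothing, since RSSI jitters from frame to frame:

```python
def split_by_rssi(frames, gap=10):
    """Partition frames into signal-strength clusters.

    `frames` is a list of (rssi_dbm, frame) tuples. A chaffer sitting at
    a different location than the real client tends to show up as a
    separate RSSI cluster; any two readings within `gap` dBm of each
    other are assumed to come from the same transmitter.
    """
    clusters = []
    for rssi, frame in sorted(frames, key=lambda t: t[0]):
        if clusters and rssi - clusters[-1][-1][0] <= gap:
            clusters[-1].append((rssi, frame))   # extend current cluster
        else:
            clusters.append([(rssi, frame)])     # start a new cluster
    return clusters
```

As noted in the talk, this filter is not deterministic on its own — it is one more heuristic to combine with the sequence number, IV, and replay techniques.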
So, WEP is a per-packet encryption scheme, so you really do not want to lose track of where you are in the keystream. The IV is like a cryptographic nonce, or a salt, which is prepended to the secret key, and that combined key is what is used for the KSA and later the PRGA — the pseudo-random generation algorithm. It basically makes sure that every packet gets a different keystream, because if you used the same WEP key alone to generate the keystream, the keystream would always be the same. That opens you up to various known-plaintext attacks: you know the plaintext, you know the ciphertext, you XOR them together, you get the keystream, and from there on you can decrypt every packet you see. That's why we use the initialization vector, or IV. Okay, so we'll be in the Q&A room if anybody has any other questions. Thank you.
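The keystream-reuse problem the IV exists to prevent can be demonstrated directly. With a textbook RC4 (our own implementation below), two packets encrypted under the bare key leak the XOR of their plaintexts, while prepending a per-packet IV (the 3-byte values here are arbitrary) breaks that relationship:

```python
def rc4(key, data):
    # Textbook RC4: key scheduling (KSA) then keystream generation (PRGA).
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    out, i, j = bytearray(), 0, 0
    for b in data:
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(b ^ S[(S[i] + S[j]) % 256])
    return bytes(out)

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

key = b'secretkey'
p1, p2 = b'attack at dawn', b'defend at dusk'

# Same key, no IV: identical keystream, so ciphertext XOR leaks plaintext XOR.
c1, c2 = rc4(key, p1), rc4(key, p2)
assert xor(c1, c2) == xor(p1, p2)

# Per-packet IV prepended to the key: different keystreams, relation gone.
c1, c2 = rc4(b'\x00\x00\x01' + key, p1), rc4(b'\x00\x00\x02' + key, p2)
assert xor(c1, c2) != xor(p1, p2)
```

Of course, WEP's 24-bit IV is also the root of its downfall — IVs repeat and correlate with the key, which is exactly what the statistical attacks exploit.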