Thank you for the introduction. This is joint work with Robert Gillham, David Wong, Adi Shamir, Daniel Genkin, and Yuval Yarom, who are supposed to be here at least, and we're going to talk for about 50 minutes. The very long and elaborate title is "The 9 Lives of Bleichenbacher's CAT", and it's a new cache attack against TLS implementations using a very, very old, as we already heard today, padding oracle. Okay, so what we're basically going to talk about is a little bit of background, how we can attack TLS and do a downgrade attack, a new method that we have to parallelize the original Bleichenbacher attack, then a little bit about what the actual vulnerabilities for the cache attacks were, and hopefully some conclusions in the end. I don't feel that we really need to introduce TLS in this session, but I'll still do it. It's probably the most widely used cryptographic protocol in the world. It provides communication security for many purposes; maybe the most common one is HTTPS communication on the Internet. It has, we could say, two main parts: the TLS handshake, which is used to authenticate the different parties and to do a secure key exchange, and the TLS record layer, which actually encrypts the data. It supports cryptographic agility by using different cipher suites, which basically means that if we learn over the years that some cipher suite we're using is vulnerable, we can just deprecate it, stop using it, and move to newer ones. As we've already seen today, and as we'll see here, in theory it might work; in practice, the theory doesn't work, and cryptographic agility also "helps" us mainly by allowing support for every possible vulnerable cipher suite that has ever been invented.
These are the layers of TLS, and we're going to talk about the handshake protocol, which is circled, and specifically about RSA key exchange in TLS. RSA key exchange uses the infamous PKCS #1 v1.5 padding scheme, which we're going to hear a lot about in the rest of the talk. It was once maybe the most popular TLS key exchange option, used by almost 100% of TLS connections. However, it has a very, very, very long history of practical, implemented attacks — and by history we're talking up to a paper that was published last week, the latest one — and I feel that this history is going to continue into the future. There is already a large consensus that RSA key exchange is a bad thing: on top of all of the implementation attacks, it also doesn't provide forward secrecy, which is also a main issue. But still — the last time I checked was at the end of 2019 — it's still widely used: about 6% of the connections on the internet still use RSA key exchange. There are much better alternatives that we can use right now, but it's still supported because of maybe the most dangerous thing in cryptography, which is backward compatibility. We still need to support those old machines that don't know how to use anything else, and this, as we'll see, puts the whole internet ecosystem at risk. Okay, so to take the broad overview of what we did in the paper: we tested nine different implementations for vulnerability to cache attacks on RSA padding, and out of those nine there were two, BoringSSL and BearSSL, which were secure — we couldn't manage to find any vulnerabilities — and all of the rest had multiple types of vulnerabilities, and we'll talk a bit about the different types later on.
Okay, so for now you'll believe me that we're able to do those cache attacks, and then we claim, okay, we broke 6% of the internet. Then you can say, okay, maybe I'm in the other 94%, so why do I care about this paper? The main thing we want to claim is that we show it might be feasible to do a man-in-the-middle downgrade attack, using our parallelization technique. Assuming that we can run this cache attack against multiple servers — for example, large companies, Facebook, Amazon, whatever, have multiple servers to support a large bandwidth — if we can attack them in parallel, then we can do this downgrade attack; we use the BEAST technique to boost the success probability, and then we claim that we are able to break 100% of the connections that use a vulnerable implementation. Okay, so we'll try to explain what we did here. This is RSA encryption; I assume most of you have seen it before, but this is nice math: we have prime numbers, we have the RSA modulus, we have secret keys, and we can do encryption with that. But the question is how we can use this nice math to actually encrypt real-world data, and there are several real-world problems that we need to overcome in order to really use it. We'll give a few examples. For the first one, let's assume we use a small public exponent, e = 3 — basically something that was actually used — and we want to encrypt the number 1,000, and we want to be secure; today the standard is something around 2,048-bit RSA keys, so we have generated a very secure RSA modulus. Then we have this problem: if we take 1,000 to the power of 3, we're still much smaller than the modulus, so no modular reduction happens, and computing the cube root of this value over the integers is not very complicated. So we need to make sure that M, the number that we encrypt, is large enough that it actually wraps around the modulus.
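To make the small-exponent problem concrete, here is a toy sketch (illustrative only; the modulus below is just a 2048-bit stand-in, not a real RSA key): with e = 3 and a small message, m cubed never wraps around n, so an integer cube root of the ciphertext recovers the plaintext with no key at all.

```python
# Small-exponent problem: m^3 < n means "encryption" is invertible
# over the integers, without the private key.

def icbrt(x: int) -> int:
    """Integer cube root by binary search."""
    lo, hi = 0, 1 << (x.bit_length() // 3 + 2)
    while lo < hi:
        mid = (lo + hi) // 2
        if mid ** 3 < x:
            lo = mid + 1
        else:
            hi = mid
    return lo

e = 3
n = (1 << 2047) + 1    # stand-in for a 2048-bit RSA modulus
m = 1000
c = pow(m, e, n)       # "textbook" RSA encryption of a small message
assert c == m ** 3     # no modular reduction happened: m^3 < n
print(icbrt(c))        # 1000 -- plaintext recovered without the key
```

This is exactly why the padding scheme has to blow the plaintext up to (almost) the full key length before exponentiation.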
Okay, now another problem: let's say I want to encrypt the information for my application, and my application can answer yes or no to a specific question, so it encrypts either the value zero or one. This is vulnerable to a dictionary attack: if I encrypt the value zero multiple times, you get the same ciphertext each time, and it's easy to detect repetitions. So in order for our scheme to be secure, we need the value M that we encrypt to also be random. Okay, so here is PKCS #1 v1.5 to the rescue. What it does is basically use a padding scheme to pad and then encrypt the plaintext: it pads the plaintext to the RSA key length to make sure that the number is big enough, and it adds the randomization. This is what is done in TLS 1.2: we start with an encryption preamble, two bytes, zero and two, which states that this is an encryption operation; then we have at least eight random non-zero bytes, which make sure that the number is random and also large enough; then we have a zero delimiter, because in the general scheme we don't know the actual size of the plaintext we want to encrypt, although in TLS we do know it; and in TLS we have 48 bytes of the actual premaster secret that is generated by the client and sent to the server.
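As an illustration of the format just described, here is a minimal sketch of TLS-style PKCS #1 v1.5 encryption padding (my own illustration, not any library's real code):

```python
import os

def pkcs1_v15_pad(pms: bytes, k: int) -> bytes:
    """Illustrative sketch of TLS-style PKCS #1 v1.5 encryption padding:
    00 02 | at least 8 random NON-ZERO bytes | 00 | 48-byte premaster
    secret.  k is the RSA modulus size in bytes (256 for 2048-bit RSA)."""
    pad_len = k - 3 - len(pms)          # bytes of random padding needed
    assert pad_len >= 8, "modulus too small for this plaintext"
    padding = b""
    while len(padding) < pad_len:       # rejection-sample non-zero bytes
        b1 = os.urandom(1)
        if b1 != b"\x00":
            padding += b1
    return b"\x00\x02" + padding + b"\x00" + pms

block = pkcs1_v15_pad(os.urandom(48), 256)
assert len(block) == 256
assert block[:2] == b"\x00\x02"         # encryption preamble
assert 0 not in block[2:207]            # padding bytes are non-zero
assert block[207] == 0                  # zero delimiter before the PMS
```

The non-zero padding bytes are what makes the zero delimiter unambiguous, and the randomness is what defeats the dictionary attack from the yes/no example.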
Okay, so now we have what we call Bleichenbacher's attack. It's a very old attack from 1998, and it's an adaptive chosen-ciphertext attack. What it does is exploit the strict manner in which the RSA PKCS #1 v1.5 padding check is done. We have this innocent client who sends a ciphertext to the server, and we have the malicious attacker, who listens to the ciphertext and then uses the same server as an oracle: he changes the ciphertext, sends a modified version of it to the server, and then checks with the server — does the plaintext that corresponds to this ciphertext start with 00 02 or not? With this response he can continue the attack — this is an adaptive attack — and in the end he is able to decrypt the ciphertext. There is also a very similar attack from 2001 by Manger, which is aimed at a newer padding scheme. By the way, I want to thank people in the audience here for this wonderful slide. Okay, so we have this amazing Bleichenbacher attack, and we're not going to go into all of the details of the math, although this is really a beautiful paper in my opinion, and I'd recommend people to read it. It was an attack that used to be called the Million Message Attack, because you need about a million messages — a million queries to the server — in order to decrypt a message. This is quite a large number, but in general the exact performance depends on the very specific properties of the oracle that we have, so we can actually get it in much fewer than one million messages in some cases. But for this talk, the main fact that I want you to remember is that the attack is an adaptive chosen-ciphertext attack, which means that no matter how lucky we are and what kind of optimizations we are able to do, if we want to decrypt a message encrypted under a 2048-bit RSA key, we need at least 2048 sequential oracle queries. There's no way to go around it.
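The property that makes the oracle useful is RSA's malleability: multiplying a ciphertext by s^e mod n multiplies the hidden plaintext by s, which is how the attacker produces the "modified versions" he sends to the server. A toy sketch with tiny, completely insecure parameters:

```python
# RSA malleability, the property Bleichenbacher's attack exploits.
p, q, e = 61, 53, 17
n = p * q                                # 3233
d = pow(e, -1, (p - 1) * (q - 1))        # private exponent (2753)

m = 1234                                 # the victim's plaintext
c = pow(m, e, n)                         # ciphertext the attacker sniffed

s = 7                                    # attacker-chosen multiplier
c_mod = (c * pow(s, e, n)) % n           # modified ciphertext
assert pow(c_mod, d, n) == (m * s) % n   # server now decrypts m*s mod n

def oracle(ct: int) -> bool:
    """Stand-in padding oracle: in the real attack the server leaks
    whether the decrypted block starts with 00 02; on this toy modulus
    we just leak whether the top 'byte' is small."""
    return pow(ct, d, n) < n // 256
```

Each oracle answer about m·s mod n tells the attacker something about m itself, and by choosing the next s adaptively he narrows m down step by step.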
Okay, so what is the goal of our attack? As in most fields in life, we want to get cookies. The reason we want cookies is that this is a very efficient way to get the user's data: session cookies are what enable us to connect to a web server without re-entering the password on each connection, so it's very comfortable for the user, and if you've got a session cookie, then you can get access to the web server and download all of the victim's emails or information. You don't really need to try to decrypt each and every TLS connection that the client makes, and those cookies are sent at the beginning of each TLS connection. So we have this attack scenario for RSA key exchange: we're going to sniff one TLS handshake and the first message that is sent by the client, then we're going to use Bleichenbacher's attack to decrypt the premaster secret, and after we get the premaster secret, we can decrypt the first message, and then we get a cookie. All right, it's a very simple, very elegant attack, and this is how it's done in the case of a cache timing side channel. In our example, we have a bank that has a very secure HTTPS server, and it runs on a very, very secure cloud service provider, and we have Mr. Smiley here. Mr. Smiley wants to connect to the bank; the Cookie Monster listens to this communication, and afterwards the Cookie Monster starts to do the Bleichenbacher attack and gets the keys. For that, we have our cache attacker, the CAT: it runs on the same hardware inside the cloud, but in a different virtual machine or a different process, and it is able to measure the cache side channels. So we have the Cookie Monster, it cooperates with the CAT, and they're able to retrieve the cookies. Okay, so this is something that's actually simple, but we are very greedy people, and the Cookie Monster is also very greedy, and we want to have all of the cookies.
Okay, so 6% of the connections is not enough for us. What we want to do is use RSA key exchange for a downgrade attack. Basically, we want to attack a connection that uses ephemeral Diffie-Hellman or some other newer cipher suite and cause it to downgrade to RSA key exchange. The nice thing about it is that it only requires the server to support RSA key exchange: even if the client doesn't support it at all, we can still do this attack. I think it was mentioned before, but using the Bleichenbacher attack we are not only able to decrypt messages, we're also able to cause the server to sign messages, so we're able to sign the ephemeral Diffie-Hellman keys. The nice thing is that this also works on TLS 1.3, as was already shown before; in fact, I feel this is the only currently known possibility of a downgrade attack on TLS 1.3. It does require an active man-in-the-middle attack, which is a bit more complicated, but something we can live with. So the question is: do we have the cookies? Unfortunately for us, the answer is no, and the reason is that the TLS handshake has a timeout: once we start the TLS protocol, if it doesn't finish correctly within about 30 seconds, the client will probably abort. And unfortunately for us, as we said, this attack takes quite a large number of messages; we can't finish it in under 30 seconds. Okay, so the first nice thing is that we can still try to do this attack against Firefox, and the reason is that there's a way to prevent timeouts in Firefox's TLS handshake using something called TLS warning alerts, something that's been known since the Logjam paper. I think they recently decided to fix this, but it's something that's been known for several years. So basically we can do the man-in-the-middle downgrade attack.
We can keep the session alive for a relatively long time during the Bleichenbacher attack itself, and then we can finish the TLS handshake after we've decrypted the premaster secret. So do we get the cookies? There's still one caveat left, and this is the problem that a user who's trying to access his bank account might notice that it takes more than 10 minutes for the website to load, something that might look a little bit fishy to the user. Okay, so we still want to be able to do the attack, so we use the BEAST technique. The BEAST attack is a very nice idea, and basically it says that if we can run JavaScript code inside the user's browser, this JavaScript code can repeatedly open TLS connections to the bank, and it can do it in the background. The same session cookie will be sent to the server each time, so we can do this attack and the user doesn't need to know anything about it. It's something really, really nice. And again, at the start of each connection the same cookie is sent in the first packet, so even if we have multiple connections, we only need to break just one connection, and then we have the cookie. Okay, so let's see how this affects the attack scenario. Now we're just going to attack Firefox. We have the same bank with the same HTTPS server, and now we have Mr. Smiley who, like me, uses the Firefox browser, and he wants to access his bank account. So he accesses the bank account, and what he sees is that he doesn't have any money in the account. Mr. Smiley is very, very sad right now, and what people do when they are in this kind of despair is look on the Internet for big prizes. But as we all know, nothing comes for free, and those sites usually hide a very malicious attacker. In this case, this is the Cookie Monster, and what the Cookie Monster does is send this malicious JavaScript code to Mr.
Smiley's Firefox browser, which now starts to repeatedly reopen the connection, and the Cookie Monster can try to do the man-in-the-middle attack. Again, we use the CAT in the same cloud provider, and we manage to get the cookies. Okay, so now we are very, very happy, but we still have one problem, and the main problem is that I really like the Firefox browser — I think it has very good properties — and I don't want to attack only Firefox; I want to be able to attack all of the users. In most browsers, we couldn't find any way to delay this timeout, so we have only 30 seconds to finish the TLS handshake. And the problem is that the expected number of queries that we need in order to implement this Bleichenbacher attack is very high. However, a nice thing that we can see is that with a relatively low probability, we might get really, really lucky, and the attack might finish much sooner, using a much smaller number of queries. This is something that we can actually use: we can use the BEAST technique and re-run the attack multiple times, and again, since we want to get a session cookie, we need to be successful just once. Maybe one out of 1,000 is really good enough. Okay, so can we use this to get the cookies? Unfortunately, still not. The reason is that we need at least 2048 queries, as we said before, and we have time for about 600. So unfortunately for us, we need to do some more work. Okay, so let's try to parallelize this attack to make it go faster. Many companies reuse the same certificate across multiple servers around the world: they want to have a server that runs in Europe and one in the US and one in Asia, and usually they use the same certificate on all of the servers. I'm not sure there's a really good reason to do it, but it really helped us anyway, and we can actually try to parallelize the attack across those servers. So each server is a separate oracle.
There has been much previous work about how we can parallelize this type of Bleichenbacher attack. So is it enough to get a cookie? Again, the answer is unfortunately no, because as we said before, we need at least 2048 sequential adaptive queries — we need to hear the response to the previous query before we can continue — and we only have time for 600. So we can't do this attack; we need to find a way to parallelize it in a new way. Before we start, we'll give a little bit of background about the Manger attack. The Manger attack is again a padding oracle attack against RSA. It uses a more powerful oracle than the one used in Bleichenbacher's attack, and the Manger oracle looks something like this: the oracle receives a ciphertext encrypted with RSA, and it simply returns whether the topmost byte of the plaintext is zero or not. This is it. It's more powerful because the chance that we find such a ciphertext at random is 1 over 256, which is much better than the regular Bleichenbacher oracle, which requires two specific bytes in order to succeed. We start the attack with what we call the blinding phase: we simply generate random s-values, encrypt each s-value, and multiply it into the ciphertext. Due to the malleability — the homomorphic property — of RSA, the corresponding plaintext is simply m times s. If the oracle returns one, then what we know is that m times s is smaller than 2 to the power of (log2 of n minus 8): we know the top 8 bits are zero. So if this is the whole range of possible values of m times s, we now know that the topmost part is not possible, and we've narrowed our search range — which, of course, is still really huge. The way the attack continues is by iteratively reducing the possible interval where m times s lives.
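The blinding phase just described can be sketched on toy parameters (tiny, insecure numbers chosen only so the loop runs fast; the oracle is simulated with the private key, which a real attacker of course doesn't have):

```python
import secrets

# Toy sketch of the blinding phase against a Manger-style oracle.
p, q, e = 2003, 2333, 17
n = p * q
d = pow(e, -1, (p - 1) * (q - 1))
B = 1 << (n.bit_length() - 8)            # "topmost byte is zero" bound

def manger_oracle(ct: int) -> bool:
    """Returns True iff the decryption of ct is below B, i.e. its
    topmost byte is zero -- Manger's oracle."""
    return pow(ct, d, n) < B

m = 123456                               # the secret plaintext
c = pow(m, e, n)

# Blinding: try random s-values until the oracle answers 1.  Each try
# succeeds with probability about B/n (roughly 1/256 at real key
# sizes), so this phase costs a few hundred queries on average.
while True:
    s = secrets.randbelow(n - 2) + 2
    if manger_oracle((c * pow(s, e, n)) % n):
        break
assert (m * s) % n < B                   # search range narrowed to [0, B)
```

After blinding, every further query roughly halves the interval in which m·s can lie, which is the iterative reduction described next.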
Actually, after doing some number i of sequential queries, what we have is that we know m times s is in some smaller interval, which is denoted by [a_i, b_i]. Then we know that there is some value r_i = m times s minus a_i that is small: we don't know what the value r_i is, but we know that it's smaller than the size of the interval. The way this attack works is that we do a query, each query removes about half of the possible range, and we decrease the range so that we're somewhere in this much smaller range. In the full attack, we continue until this range becomes of size one, and we actually retrieve the plaintext. However, in our case, we don't have enough time to do it: we can do only about 600 queries out of the required 2000, so we still have a very, very large range, and the attack is not good enough. So what we're going to do is use what we call the cookie lattice. Basically, let's assume that we can run k attacks in parallel, and for each attack we have a different random s-value that was used for the blinding. So we have k formulas of this format, where we have this value that we don't know, which equals m times s minus the start of the interval. This is very similar to the very well-known hidden number problem, and this is something that we know how to solve using lattices. Basically, finding m is reduced to solving the closest vector problem in a lattice; we can embed this in the shortest vector problem of a lattice and then solve it efficiently using LLL. This is the lattice that we create for this attack, and in the end what we need is about five servers in order to decrypt 2048-bit RSA using this Manger oracle. So in the end, we get the cookie. Okay, but there is something that's a bit interesting about this, and the interesting thing is that this is not an optimization for the attack; this is a trade-off.
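The reduction to the hidden number problem can be written out as follows. This is the textbook formulation with a single bound B for all k equations; the paper's exact lattice differs in its details, so treat this as a sketch of the idea.

```latex
% From k parallel, partially-run attacks we know, for i = 1, \dots, k:
\[
  m s_i \equiv a_i + r_i \pmod{n}, \qquad 0 \le r_i < B,
\]
% where the $s_i$ and $a_i$ are known and the $r_i$ are small unknowns.
% Consider the lattice spanned by the rows of
\[
  \begin{pmatrix}
    n      & 0      & \cdots & 0      & 0      \\
    0      & n      & \cdots & 0      & 0      \\
    \vdots &        & \ddots &        & \vdots \\
    0      & 0      & \cdots & n      & 0      \\
    s_1    & s_2    & \cdots & s_k    & B/n
  \end{pmatrix}.
\]
% It contains the vector
%   v = (m s_1 \bmod n, \dots, m s_k \bmod n, \; mB/n),
% which lies within distance about $B\sqrt{k+1}$ of the known target
%   t = (a_1, \dots, a_k, 0).
% Solving the closest vector problem (or embedding it into a shortest
% vector problem and running LLL) recovers v, and hence
%   m = v_{k+1} \cdot n / B.
```

The smaller the intervals we managed to reach in the allotted queries, the smaller B is and the fewer parallel equations the lattice needs, which is where the "about five servers" figure comes from.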
Actually, the initial blinding phase that we mentioned — trying to find some value s on which the oracle returns one — is in some sense more expensive per bit than the other stages. So if we want to run this attack in parallel, we require more queries in total than the original attack needed. You can ask why we would want to do that; the reason is that it gives us a trade-off between the total number of queries and the number of sequential queries that we need, and this basically allows us to finish the attack in under 30 seconds. The attack scenario now looks something like this: we have the same attack with the same Mr. Smiley, but now we use, in this case, four servers in parallel to run the attack. We do the same attack as before, we get all of the results, we put them into the lattice, and then we have the cookies. Okay, so this is the full attack scenario. But so far I haven't mentioned what the actual vulnerabilities behind the Bleichenbacher attack are, and actually why it is so difficult to fix this problem — it's been known since '98, and we still see it in 2019. The reason is that it's very hard to reduce the time variability when you do RSA key exchange. The main goal we want to achieve is that no matter what the PKCS verification function returns, 1 or 0, the TLS session should continue in the same manner. The way the original Bleichenbacher attack was mitigated in TLS was that if the padding check fails, then we generate a random key, and this is what the server uses to continue the handshake. The handshake will fail in the end, but it will be very difficult for the attacker to know whether it failed because the padding check didn't succeed or because the random key is not the key that was sent by the client. So we need to do it in a way that makes it very difficult for the attacker to differentiate whether we used this random key or the original key that was sent by the client.
This is something that's very, very hard to do with very low time variability, but most of the implementations that we checked managed to do it relatively well. However, it is very, very, very hard to implement this in what we call full constant time. Full constant time means that there is not a small time variability — there is no time variability: the software actually behaves the same way no matter what the padding check returns. And as we've seen many times before, when we have pseudo-constant-time implementations, they are only pseudo-secure. This is the reason we have all of this nice table. In this table you can see the three different categories that we have — the data conversion, the PKCS verification, and the TLS mitigation — which we consider the three layers in the way we want to mitigate this type of attack, and each layer has its own types of vulnerabilities. So we start with the data conversion. What do we mean by data conversion? Basically, RSA decryption or encryption, as we said before, is math: it works with big integer numbers. However, the PKCS scheme works with bytes; this is also the way we handle the keys and the information we want to encrypt. So we need to convert from one to the other: after we decrypt the ciphertext, we get a plaintext number, and we want to convert it to bytes. Depending on the implementation, this might be a relatively hard thing to do in constant time. When we reviewed the different implementations, what we found to be most prominent was conditional padding with zeros: if the decrypted number was small, we need to pad it with zeros in the top bytes, and this is something that was actually very hard to do in constant time — there was conditional branching on the exact size of the padding. And again, the timing difference caused by this type of branch is actually negligible; it's very hard to measure it from outside.
However, using cache attacks, it is sometimes very easy to find. For example, in one of the implementations there was a conditional call to memset if one or more top bytes needed to be set to zero; this is something that is very, very easy to see when you do a cache attack. The vulnerabilities here are very problematic because they arise from very, very low-level serialization functions. The people who actually implement the TLS mitigation usually work at a very different level — we've seen cases in which the TLS Bleichenbacher mitigation and this conversion function were in different cryptographic libraries altogether — and it's very hard to understand that this is something that might hurt you in the upper layers. Another layer of attack is the PKCS verification itself. This is a relatively complex check; it requires multiple validity checks. We need to check that the two topmost bytes are zero and two; we need to check that we have a large enough number of random bytes before we get a zero; we need to check the length of the plaintext. All of these checks are done one after the other, and what we found was basically a lot of non-constant-time behavior. For example, conditional calls to memcpy: in some cases, the library only copied the plaintext to the destination buffer if the padding was valid, and otherwise simply left the buffer alone. There were conditional writes to an error log if the verification fails, and conditional branching on the different validity checks, which was nice because it gave us many types of more efficient Bleichenbacher oracles. Again, the timing differences are relatively negligible, but they're very easy to detect. The last layer is what we call the TLS mitigation, and as we said before, the goal is to keep the same behavior whether the verification succeeds or fails; if the verification fails, we use a random key.
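A branch-free version of these checks, combining the PKCS verification with the random-key mitigation just described, might be structured like this. This is my own sketch, not any library's code, and Python itself gives no constant-time guarantees; it only illustrates the code shape that constant-time C implementations aim for: fold every validity condition into one accumulator, then select the output by masking instead of branching on secret data.

```python
import os

def extract_pms(em: bytes, k: int = 256) -> bytes:
    """Branch-free sketch of the PKCS #1 v1.5 check plus the TLS
    random-key fallback.  em is the decrypted, serialized RSA block."""
    rnd = os.urandom(48)                 # generated unconditionally
    bad = em[0] | (em[1] ^ 0x02)         # preamble must be 00 02
    for i in range(2, k - 49):           # all padding bytes non-zero:
        bad |= ((em[i] - 1) >> 8) & 1    # this term is 1 iff em[i] == 0
    bad |= em[k - 49]                    # delimiter byte must be 0x00
    ok = ((bad - 1) >> 8) & 1            # 1 iff every check passed
    mask = -ok                           # all-ones if ok, else zero
    pms = em[k - 48:]
    # Constant-structure select: real PMS if valid, random PMS if not.
    return bytes(((p & mask) | (r & ~mask)) & 0xFF
                 for p, r in zip(pms, rnd))

good = b"\x00\x02" + b"\x01" * 205 + b"\x00" + bytes(range(48))
assert extract_pms(good) == bytes(range(48))
bad_em = b"\x00\x03" + good[2:]          # wrong preamble type byte
assert len(extract_pms(bad_em)) == 48    # same-shaped output either way
```

Note that the random bytes are drawn whether or not the padding is valid, the loop always touches every padding byte, and both the real and the random premaster secret are read in the final select, so the memory-access pattern does not depend on the verification result.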
On this layer, what we found was, again, conditional branching on the verification result, conditional memory accesses, and even, in some cases, a call to the random generator function that was only done if the verification failed. Apart from this random key generation, which actually takes a lot of time, most of the timing differences are negligible. Okay, so let's try to sum up our results. We show several techniques for microarchitectural padding oracle attacks. We found that seven out of the nine implementations we checked were vulnerable. We show a proof of concept for the Manger and Bleichenbacher attacks; we show how we can boost the attack efficiency using this type of attack; we show how we can parallelize this attack to do a downgrade attack; and we have a proof of concept for an attack using a Manger oracle and LLL. Okay. So basically the goal of this type of research is to help make the internet more secure, and the main goal is to try to fix this type of vulnerability. So we started a relatively long disclosure process. There were seven vendors that we disclosed to, and from the beginning we thought this was going to be a relatively hard thing to fix because of the wide range of layers that needed fixing in order to close these vulnerabilities. So we decided on a 120-day embargo period, to make sure that everybody would be able to fix it. This is something that's relatively difficult, because you have here some very large companies with paid employees, and some open-source implementations where there's one guy who does all of the coding in his own spare time, and you somehow have to coordinate all of these different parties. In the end, all of them patched their code with various levels of success; there are some libraries that we feel didn't patch it well enough, but most of it was okay.
And I think we learned some lessons from this type of large disclosure process, and I feel it's something that maybe we as a community need to think about: how we do this disclosure process and how we make sure that the people we disclose to cooperate in a good manner. When you disclose to a single vendor, everything is okay: you can coordinate the time of the embargo; if they need a little more time, you can extend it; if they fix it very early, you can disclose sooner. But what happens if there's one company that wants to disclose in about one week and another company that says it's going to take 90 days? My belief is that we should give at least a reasonable amount of time for all of the parties to fix it. Unfortunately, as we've seen, not all of the vendors we disclosed to thought the same. For one example, one of the vendors said that he's not willing to keep a long embargo period: he's going to give us at most two weeks after we disclose to him, and then he's going to publish a patch. This kind of public patch is basically releasing a zero-day on all of the other vendors that haven't patched yet, because these attacks are nice, and they're not very complicated to understand, especially after you diff the source code of the changes. With this vendor, it was relatively easy: we said, okay, we'll contact you again in about nine weeks, and then we'll disclose to you after all of the rest of the vendors have decided on a fix. What happened was that after two months he came back to us and said, okay, maybe I do want to hear it now, and then we disclosed it. And then what happened is that we discovered that one of the vulnerabilities wasn't in code that he's responsible for, but in some other open-source library.
And then we got a message from this poor guy who said: I'm doing this in my own free time, and you want me to do this entire large patch over Thanksgiving; I'd need to take at least one week of unpaid vacation to do it — is it possible to postpone the disclosure? So again, because we figured that the main goal here is to make the internet more secure, we asked all of the other vendors whether they were able to postpone, and we postponed. One main issue that we had is that one of the companies — I won't mention its name, although it has been mentioned before in previous talks — decided that they wanted to do a beta test of the patch. At some point they called and said: okay, two days ago we already patched it, and we sent it as a beta version to the beta channel for people to test. We claimed that if, for example, I were the kind of attacker that wants to find zero-days in a product, I would probably try to be on this beta tester list, try to get those early releases, and see if there's anything interesting in them. And the answer that we got was: yes, that is an interesting point, but we don't really care. In the end, I'm not sure that, especially for this type of attack, it's something that's very easy to exploit in practice, but I feel that I, for one, will be very careful about disclosing to this company again when we have a multiple-vendor problem. It's something that I feel the community should look at: how we can share this kind of information, so maybe we can find a way to cause these companies to behave more nicely in this type of situation. Okay. So, after having said all that, what are the recommendations that we have from this type of attack? In the paper we have many ad-hoc tactical recommendations for how you can try to prevent this type of attack at the different layers.
But the bottom line is simply: don't use RSA key exchange. It's something that has failed us too many times in the past, we have better alternatives, and we need to find a way to actually deprecate it. However, if you really, really, really, really, really must use RSA, the main thing to do is: please separate your certificates. Most companies that we checked reuse the same RSA certificate both for key decryption and for signing, and this is what enables all of the downgrade attacks. If there were separate certificates for decryption and for signing, and if it were possible to use different decryption certificates on different servers, this would make this type of attack not practical at all. But again — "please just stop": this is something we got as a Twitter response to the paper. Okay. And as another conclusion, what we will say is that mitigating padding attacks on RSA is not impossible: we've seen at least two libraries that were able to do it, and I feel that several libraries are now also secure. But it's something that's very, very close to impossible, and I think the responsibility mainly rests on the people who design these cryptographic protocols: try to do it in a way that makes it much easier for people to implement safely. We're going to see another talk today about a similar issue. It's not fair to just say all of these problems are implementation problems and the programmers did something wrong; this is the fault of the people who designed the protocol. The protocol should be easy enough to implement. The full paper and other information are on the website, and I will be happy to take questions. Thank you. So how did you actually find the cache side-channel attacks? Was it mainly code review, or is there something you can incorporate into some continuous-integration environment or something? I would really want to say we have really excellent tools to do it automatically, but the answer is very long manual code review.
Okay, so next year, CAT 10, yeah? Probably yes. Are there any other questions? Can you please repeat the question? They care about this — I think this is the main thing. They did a lot of work to try to fix all of the vulnerabilities. BoringSSL forked from OpenSSL code that was almost safe to begin with, and they just fixed it. I think the vulnerabilities in OpenSSL were something that was known to be a problem — they didn't think it was exploitable, but they knew — and OpenSSL decided to take their time to fix it; OpenSSL fixed it a little bit later. On the other hand, BearSSL was written from the start as something that's supposed to be constant time, and the code is much, much simpler, and that's one of the reasons why they were able to do it. Any other questions? Maybe I have a question: you have a lot of experience with different vulnerabilities and cache side channels — what is easier to prevent, CBC padding attacks like Lucky 13 and this stuff, or Bleichenbacher attacks? I think that Lucky 13, if you're willing to pay a not very large penalty in performance, is something that's relatively easy to fix: if you're willing to pay the price, either in performance or in breaking some of the abstraction layers in the software — which is something that many programmers really, really hate to do — it's relatively easy to fix. This type of attack, because it has many different levels of code where each one might affect the others, is I think a much more complex thing to fix. Okay, if there are no other questions, then thanks — let's thank the speaker once again.