So we're already at the last talk of the first part of the post-quantum session, moving back from SIDH to lattices. This talk is about differential fault attacks on deterministic lattice signatures. It is joint work by Leon Groot Bruinderink and Peter Pessl, and Peter will give the talk.

Thank you for the introduction. It's probably not a well-kept secret anymore that many digital signature schemes are susceptible to nonce reuse: if you sign two different messages using the same nonce, one can easily recover the secret key, and this has happened in practice. A solution to this problem is to make the whole signature scheme completely deterministic. Instead of using a random nonce, you derive it by hashing the message M together with some secret K, and that's what's already done in EdDSA and deterministic ECDSA. Now, this is all nice, but it also opens up the problem of differential fault attacks. Namely, you can sign the same message M twice, which means you will get the same nonce, since it's deterministic, and then you induce some sort of computational fault after the nonce has been computed. The result looks like a signature on a different message, so you have a nonce reuse across what are effectively different messages, and you can recover the key. This scenario was already explored for elliptic-curve-based schemes, but then we have to ask: is it specific to elliptic curves?
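To make the setup concrete, here is a minimal sketch of deterministic nonce derivation in the spirit of EdDSA and deterministic ECDSA; the hash choice and encoding are illustrative, not any scheme's exact construction.

```python
import hashlib

def deterministic_nonce(secret_k: bytes, message: bytes) -> int:
    # Derive the nonce by hashing secret key material together with the
    # message, so the same (K, M) pair always yields the same nonce and
    # no RNG is needed at signing time.
    digest = hashlib.sha512(secret_k + message).digest()
    return int.from_bytes(digest, "little")

# Signing the same message twice reuses the nonce by construction:
assert deterministic_nonce(b"K", b"msg") == deterministic_nonce(b"K", b"msg")
# ...while a different message yields an unrelated nonce:
assert deterministic_nonce(b"K", b"msg") != deterministic_nonce(b"K", b"other")
```

This determinism is exactly what the fault attack exploits: the attacker gets the same nonce for free simply by submitting the same message again.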
Can we do it on other schemes as well? That's of course where the lattices come in. What we do is extend differential fault attacks to deterministic lattice signatures, namely to Dilithium, which you should have heard of by now, and to qTESLA, both of which were submitted to the NIST call and are both deterministic. That such differential fault attacks are possible here isn't all that surprising, given that Dilithium and qTESLA share some design similarities with their elliptic-curve counterparts. But there are some design peculiarities that set them apart from ECC, like for instance rejection sampling and key compression, and it's also possible to derive some new and more efficient attack paths, like the efficient exploitation of partial nonce reuse.

Okay, in this work we focus mainly on Dilithium, which is why I'm only going to recap it very briefly; you've already heard some of it, so I'll just add the things that have not been said yet. All our attacks carry over to qTESLA as well in a quite straightforward manner, as the two signature schemes are somewhat similar. Now, Dilithium is based on the module-LWE assumption, so it works with polynomials over some base ring which, at least for lattice-based signatures, is somewhat small, and with vectors and matrices of these polynomials. In key generation you have two vectors s1 and s2 of polynomials with somewhat small coefficients, so they lie in a specific range, and you have a random, public A, a matrix of polynomials. The public key is t = A·s1 + s2. What hasn't been said yet in the previous presentation is that Dilithium uses determinism to protect against bad randomness, and not only that, but I'll come to that later.

Okay, this is the main framework of Dilithium. First you sample this nonce y in a deterministic fashion (you can also call it the noise, whatever you fancy). Then you multiply it with this A and hash the result together
with the message to obtain the challenge c, and then you compute z = y + c·s1. In other words, it's a Fiat-Shamir, Schnorr-like signature scheme. What's new in Dilithium, or rather what's common in many lattice-based signatures, is rejection sampling: you test whether your output z follows the required distribution, and if it doesn't, you restart the whole signing process. In Dilithium this rejection sampling is essentially just a coefficient-wise range check.

Now, since the sampling of the nonce y is deterministic, it's easy to see that if you sign the same message M, you get the same y, and this is a kind of nonce reuse. But of course a nonce reuse by itself is useless if nothing else changes: you get the same output, so you can't extract any useful information from it. You have to change something else, and you do this by injecting a computational fault. In the case of Dilithium this looks something like the following. You take one message M and sign it twice: once regularly, and the second time you compute the signature but inject a fault such that the nonce y is identical while the challenge c is different. Then you can set up the two equations z = y + c·s1, once without the fault and once with it, and subtract one from the other. Since y is identical in both, it cancels out, and what's left is a system in which the only unknown is the private key, so you can easily recover it. That's quite similar to what happens in the ECC case. Something particular about Dilithium is that it uses key compression.
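The key-recovery step just described can be illustrated with a toy scalar analogue, working mod a prime instead of in Dilithium's polynomial ring; all names and numbers here are illustrative only.

```python
q = 2**31 - 1                     # toy prime modulus

def sign_component(y, c, s1):
    # One coefficient of z = y + c*s1 (mod q).
    return (y + c * s1) % q

s1 = 42                           # secret key component
y = 123456789                     # deterministic nonce, identical in both runs
c, c_faulted = 1111, 2222         # correct and faulted challenges

z = sign_component(y, c, s1)
z_f = sign_component(y, c_faulted, s1)

# Subtracting the two signatures cancels y:
#   z - z_f = (c - c_faulted) * s1  (mod q),
# so the secret follows from a single modular inversion.
recovered = ((z - z_f) * pow(c - c_faulted, -1, q)) % q
assert recovered == s1
```

In the real scheme the arithmetic happens coefficient-wise on polynomials, but the cancellation of the shared nonce works in exactly the same way.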
Due to this compression, you can't easily recover the second key part s2, but in the paper we show that an attacker can still forge signatures even with access to s1 alone.

Now, there's one other thing we kind of skipped over, and that is that rejection doesn't only hurt in real life, but also hurts our attack. Unlike in ECC, you sample this nonce y using a hash of K, M and some counter kappa. Kappa is a rejection counter: it counts up each time a signature gets rejected, so that you get a fresh y in each iteration. To have a nonce reuse you of course have to hit the same kappa again, which means the key recovery is only successful if both the non-faulted execution and the faulted execution accept the signature in the same iteration. But as soon as we inject the fault, we also influence the intermediates used in the rejection conditions, so it might be that, due to the faulted values, the signature is not accepted anymore. Hence, unlike for the ECC-based counterparts, here the fault position, that is, which target you actually fault, determines the success probability. We look at five concrete fault scenarios, i.e., five concrete positions. The first one is probably the most straightforward: you want a different challenge vector c, so what do you do? You fault the computation of c.
That is, this hashing operation. What you can do is observe that in the equation z = y + c·s1, if you look at the distributions of the two terms, y is uniformly distributed in a range of roughly ±500,000, while c·s1 lies in a range of roughly ±50. This means that a different c won't change z all that much, and since the first rejection condition is a range check on z, if the signature was accepted originally, it will very likely also be accepted with a different c. So we have a success probability of over 90 percent.

But it's not only the direct computation of c that we can fault; we can also fault any result that feeds into it, like for instance the computation of w, i.e., the polynomial multiplication. This multiplication uses the NTT, which you've already heard of; the NTT is an FFT over a prime field, so it uses the same implementation techniques, butterflies and a butterfly network and so on. If you look at this butterfly network, it's easy to see that if you inject a fault on the left side, on an input, the fault spreads to all of the output coefficients, whereas if you fault on the right, on an output, only a couple of coefficients are affected. And the more coefficients you change, the more likely it is that your faulted signature gets rejected. So you have a success probability between 25 and 90 percent, depending on what exactly you fault; but overall this multiplication is of course a larger target than the hashing.

Okay, then we can also fault the public key, that is, the loading of A. There we have two sub-scenarios, because this A is not stored directly but generated from some seed rho, and you can either attack this rho directly or attack the extendable-output function. Depending on what you fault, you end up between 25 and 54 percent success probability. But
what's interesting here is that if you inject a fault into rho, you can imagine that it could also be a permanent fault.

Finally, a somewhat different scenario is that you fault the nonce y itself. If we fault the nonce, the output will be different, so it's not a nonce reuse anymore. True, but what we target is a partial nonce reuse: the nonce is somewhat similar to the original one, but not the same. There we exploit that Dilithium uses the SHAKE XOF quite extensively. If you look at the sponge structure, it's easy to see that if you inject a fault, for instance, into the second application of Keccak-f in the squeeze phase, then all the output coming before that, H0 and H1, will be the same, and everything after it will be different. As y is sampled from this output, we then have two nonce vectors y whose first coefficients are identical, so their difference is zero, and only the last coefficients differ. Still, this difference is far too large to simply brute-force. So what we do is turn this key-recovery problem into a lattice problem: we have some target t, computed from public output, i.e., from the public signature, and then we have a closest-vector problem, so we look for a point close to the target in the lattice generated by these terms. We know such a point is close, because the difference between the target and the lattice point is exactly our key, which has small coefficients. Of course we have to put some restrictions on the fault: we can fault two out of the five Keccak-f permutations that are needed in the squeeze phase. The runtime of the lattice reduction is then below one minute, and we have a success probability of 24 percent.

Okay, so we did simulations, but we also did experimental verification: we performed clock glitches on an ARM Cortex-M4, and for each of the fault scenarios a single random fault is
sufficient. What we also did is benchmark the runtime of all the different scenarios, i.e., how much of the signing time is susceptible to a fault. There you can see that faulting the hash directly has a high success probability, but it's quite a small target, whereas the expansion of the public key takes quite a lot of time; it doesn't have as high a success probability, but it appears to be a good target. For the fault on w we just ran many real experiments and got an average success probability of about 0.6 or so.

Okay, now that we have the attack, we can think about countermeasures. Among the generic countermeasures, the first thing that comes to mind is double computation, which of course doubles the runtime. To attack double computation you need to either inject the same fault twice, or have some permanent fault, for instance if you are able to manipulate the seed. Now, since verification is quite a lot faster than signing, what you can alternatively do is take the signature you just produced and verify it; this adds only about one quarter of the runtime. But what's interesting here is that if you fault the generation of the nonce, all you do is sign with a different nonce, and that's still a valid signature, so with this scenario you can bypass the countermeasure. A final countermeasure, which can protect against all our scenarios at hardly any overhead, is what we call additional randomness.
So what you do is you don't only feed K, M and kappa into this deterministic sampler, but also some random r, like a random bit string. This protects against fault attacks and also against bad randomness, because if a broken RNG effectively sets r to a constant, all that happens is that the fault countermeasure is switched off again; nothing worse. And if you do it correctly, it can also serve as a countermeasure against DPA recovery of the secret K. In fact, qTESLA already added this countermeasure: in an update that came after the initial publication of this work, they included it and made it mandatory, so the attacks don't apply anymore. That's why I had a star next to qTESLA at the beginning. With Dilithium there is a bit of a problem, because the actual security proof that Dilithium uses requires this determinism: the proof requires that the signature scheme is deterministic. There is an alternative version of the proof that doesn't require determinism, but it's not tight; it loses security in the number of signatures that you see per message. So that's a bit of a problem. Whether this really translates into an attack is of course an open question, but it still violates some security guarantees. Okay, that's the end of my presentation, thank you very much for your attention.

Any questions for Peter? Maybe one from me. On slide 13, I think it was, you showed these different attacks and how much time they take relative to the total signing time. So if I'm not targeting anything, I just let it run and shoot a fault at it, is that the total? Yeah, that's like about 30 percent of the runtime.
So that's if you really just blindly shoot at the thing. You can be more targeted: for instance, this expansion of the public key will always be somewhere at the beginning, so if you target that, you will probably have a higher success rate, and then with a chance of one in two, or slightly above, you will succeed. All right, and on the next slide you said that if you add randomness, and I agree that's a good countermeasure, why doesn't that add anything to the runtime? Getting good-quality randomness costs something. Yeah, of course; getting randomness is free only if you already have it somewhere. The 0 percent is for the case where it's ready and everything else is already set up, if you have a hardware RNG or whatever. It will add something, because if you also want to protect against DPA you will likely fill the first block of SHAKE with random material, so it will have a somewhat higher runtime, but negligible compared to the rest.

Okay, any more questions? If not, let's thank Peter and all the other speakers in this session.
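As a closing aside, here is a rough sketch, with toy parameters and an assumed input ordering (not Dilithium's or qTESLA's actual specification), of the deterministic nonce sampling with rejection counter kappa and of the additional-randomness countermeasure discussed in the talk:

```python
import hashlib, os

def sample_y(K: bytes, M: bytes, kappa: int, r: bytes = b"") -> list:
    # Expand SHAKE-256(K || r || M || kappa) into 8 signed coefficients in
    # [-523776, 523776]; kappa is the rejection counter, bumped on every
    # rejected signing iteration, and r is the optional extra randomness.
    stream = hashlib.shake_256(K + r + M + kappa.to_bytes(2, "little")).digest(32)
    bound = 523776
    return [int.from_bytes(stream[4*i:4*i + 4], "little") % (2*bound + 1) - bound
            for i in range(8)]

# Plain deterministic mode (r empty): same (K, M, kappa) -> same y,
# which is exactly the nonce reuse the fault attack needs.
assert sample_y(b"K", b"M", 0) == sample_y(b"K", b"M", 0)
# A rejected iteration bumps kappa and yields an unrelated y:
assert sample_y(b"K", b"M", 0) != sample_y(b"K", b"M", 1)

# Countermeasure: with fresh randomness r per signature, signing the
# same message twice no longer reuses the nonce, so the differential
# fault attack does not apply; a broken RNG (constant r) merely falls
# back to plain deterministic signing.
assert sample_y(b"K", b"M", 0, os.urandom(16)) != sample_y(b"K", b"M", 0, os.urandom(16))
```

The toy coefficient bound of 523776 mirrors the roughly ±500,000 range of y mentioned for Dilithium; everything else here is simplified.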