So, as Anja said, I'm going to be talking about post-quantum signature schemes today. In particular, I'm going to talk about a proposal that we have for a new signature scheme. This is joint work with a lot of people; it's a group of eight of us who've been working on this together.

So to begin with, what is post-quantum cryptography? The idea is, we've known for a while that a sufficiently powerful quantum computer can break a lot of the hard problems that our crypto relies on. It can factor numbers, it can compute discrete logs. That's going to break essentially all of our known public-key crypto: all of our encryption schemes, key exchange, signatures. In the setting of signatures, which is what I'll be talking about, that breaks RSA, DSA, and ECDSA. So the idea of post-quantum cryptography is that maybe we should try to design schemes that are going to be secure even once we have these quantum computers available. These are schemes that should run on a classical machine, because we're not all going to have quantum computers, but they should remain secure even against an adversary who does have a quantum computer.

So the question is, why now? Right now we only have quantum computers with a couple of qubits at a time, it's not clear how to scale this up, maybe we won't ever be able to scale it up, so should we even bother thinking about this problem? And I'm going to argue that yes, we definitely should, because designing and deploying cryptography is a long, slow process. First, we have to come up with assumptions that we think might hold against a quantum adversary. Second, we have to design schemes based on them. Then we have to take that from pure theory to a more applied point of view and come up with candidate parameters that we think might be secure against a reasonably sized quantum adversary. Then we have to try to analyze and attack these schemes to see if the assumptions we made actually hold. Then, once we've settled on some schemes that we think are secure, we're going to want to optimize them as much as possible, so that the overhead we take on from moving to this post-quantum world is as small as possible. And finally, once we've settled on schemes and gotten the crypto down pat, we still have to implement them and deploy them in the real world, which is another set of time-consuming tasks. So I'm going to argue that if we think there's some possibility of reasonably sized quantum computers in the next 20 or 30 years, we should start working on this process now. And really, right now, we're somewhere between steps one and two.

Okay, so hopefully I've convinced you we should come up with post-quantum schemes. How are we going to do this? We said all the assumptions we're used to working with, factoring and discrete logs, are broken. What kind of problems can we assume are still hard? There's been a series of different types of assumptions that people think may be hard even given a quantum computer, in the sense that we don't know good quantum attacks.
Several of the main classes of problems: there are lattice-based problems; there are problems based on supersingular isogenies, so hardcore number theory; there are code-based problems; multivariate polynomial solving problems; and then there are some of our standard symmetric-key building blocks, which, surprisingly, we don't know significantly better ways of attacking with a quantum computer than with a standard classical computer. And that last category is what we're going to be talking about today.

So we'd like to design a post-quantum signature scheme. Right now we often use ECDSA, which is great: it gives us small keys, small signatures, and relatively fast signing and verification, but we know it's insecure against a quantum adversary. So the question is, when we move to a post-quantum world, is there anything comparable that we can use in its place? We know we need to study the assumptions and attacks, but even just on an efficiency level, what's out there? And I'm going to argue that right now there isn't really anything that quite matches what we get from, say, ECDSA, and unfortunately that's still going to be true after this talk. The closest we get is maybe the stateful hash-based signatures, but they have the downside that they're stateful, which for some applications just doesn't work, and for other applications doesn't fit well into existing systems.

So what I'm going to present today is a signature scheme that's based only on symmetric primitives: we need a hash function, or really a hash function and a block cipher. Concretely, in the scheme we propose, we use SHAKE and a new block cipher called LowMC, although in general we could instantiate our signature scheme with any hash function and block cipher. In terms of efficiency, it'll be on the same order as everything else, so not perfect, but hopefully not terrible. What I think is nice about this is that it's a new approach, different even from previous hash-based signature schemes: it's not using the Merkle-tree-type approach that existing hash-based signatures use. That's nice because it gives us a diversity of approaches, which is always good in case something ends up getting broken, and it's also nice because a new approach may have a lot more room for optimization.

Okay, so now I've given you the high-level idea of what our scheme is and why it's going to be cool. Oh, I should say our scheme is called Picnic. I'm going to start by giving an overview of Picnic, how it's constructed, and the intuition behind it, and then I'll talk about performance benchmarks that we ran in some example settings.

Our basic approach is going to be similar to a DSA- or ECDSA-type approach. You can think of the public key as basically being a function of the secret key. For this slide and the next couple of slides you can think of that function as a hash function; it'll actually end up being something different. A signature is then going to be a way for the signer to prove that he knows the secret key that corresponds to this public key. And of course the signature has to involve the message somewhere, so you can think of the message as acting like a nonce: it's a way of randomizing or changing some of the random values in the signature.
Of course, if the signature is going to be a proof that the signer knows the secret key, it also has to hide the secret key; if we leak the secret key, there's no point. So it's going to be what we call a zero-knowledge proof: something that's convincing, but that doesn't reveal the secret. That's great, and as I said, that's the way a lot of signatures can be viewed, but what we're interested in here is a post-quantum signature scheme, so what we need is a hard-to-invert function F, and the signature scheme should be secure against a quantum adversary, which is of course the challenge, while still being moderately efficient. I'm going to talk about these two building blocks in turn.

The first building block is the zero-knowledge proof system. The proof system we start out with is called ZKBoo, for Zero-Knowledge for Boolean circuits, introduced by Giacomelli, Madsen, and Orlandi in 2016. Basically it's a zero-knowledge proof system that's tailored for proving statements about circuits. What I mean by statements about circuits: you have a public circuit, so a bunch of AND and XOR gates, a bunch of input wires on the left, output wires on the right, and in our setting you can think of this as the hard-to-invert function F, so maybe the circuit for your favorite hash function. In this proof system, the prover wants to prove that he knows some set of inputs such that when you evaluate the circuit on those inputs, you get a particular public set of outputs. In general you can't just look at a circuit and a set of outputs and say whether there even exists a set of inputs that maps to those outputs; that would be like finding a preimage of a hash function. So in the signature setting, the prover is the signer, the set of inputs is the secret key, the circuit is this hard-to-invert function F, which we're calling a hash function for now, and the output value is the public key. This is exactly what ZKBoo lets us do, and the nice thing about it is that it doesn't use any number-theoretic assumptions; it's just based on hash functions and a pseudorandom number generator. The cost depends on the number of AND gates in the circuit and the security level, of course, and a tiny bit on the number of XOR gates, but mainly the AND gates.

OK, so I'm going to try to give you a vague flavor of how this works. This is not actually how the scheme works; it's just to give some intuition so that it's not all complete magic. We'll start with a toy example with a very simple circuit, just a single XOR gate: we have two inputs, A and B, we have an output, C, and the prover wants to prove that he knows A and B such that A XOR B equals C. Obviously that's trivial, anybody could find such an A and B, but it at least gives an idea of how these schemes work. And we're going to insist this is hiding, in the sense that the verifier who's checking the proof shouldn't be able to tell which A and B were chosen: if C is 1, was it a (1, 0) or a (0, 1)?
OK, again, obviously this isn't the real scheme, just a toy example. What I'm going to describe right now is an interactive protocol, so two parties send messages back and forth; we'll talk later about how to squish it all into one message that the prover can send.

So how does this work? Step one, we take each of the inputs, the A and the B, and break them into two random shares. They're bits, so we pick random bits A1 and A2 that XOR to the bit A, and the same for B. That's step one, and the nice thing about this, I should say, is that if I show you just A1 and B1, that tells you nothing about A and B, because A2 and B2 could be anything, so the shares are consistent with any A and B. The next step, and this is a little hand-wavy, is that we essentially evaluate our circuit, but separately on each set of shares: we XOR together A1 and B1 to get C1, and we XOR together A2 and B2 to get C2. Finally, we do what I call committing to the shares: we pick some random strings, which act as extra randomness to hide things, and we hash A1, B1, and a random string R1, and A2, B2, and a random string R2. These hashes fix what A1, B1 and A2, B2 are, but because of the randomness they don't actually reveal the bits.

Then C1, C2, and the two hashes get sent over to the verifier. The verifier picks one set of shares, either one or two, at random; say he picks one. The prover then sends back the number-one shares, so A1 and B1, and the randomness he used in the commitment. Now the verifier verifies that the two shares C1 and C2 that were sent actually XOR to C, verifies that C1 was computed correctly, so A1 XOR B1 equals C1, and then, and this is missing from the slide, unfortunately, he also checks that the hash was correctly computed, so that H1 is actually the hash of A1, B1, and R1.

OK, so I'm claiming this is an interactive proof. Why is it convincing? Again, totally hand-wavy, you can think of two cases. One is that the prover actually computed the two hashes using A1, A2, B1, B2 that match up properly with C1 and C2, and C1 and C2 are correct. Then I claim we're done: define A = A1 ⊕ A2 and B = B1 ⊕ B2 (writing ⊕ for XOR); since C1 = A1 ⊕ B1 and C2 = A2 ⊕ B2, we get C = C1 ⊕ C2 = (A1 ⊕ A2) ⊕ (B1 ⊕ B2) = A ⊕ B, so that A and that B are values that XOR to C. So if he computed the hashes using values that satisfy the checks, the statement is true. If not, then either A1, B1 or A2, B2 don't satisfy the checks, and that means that with 50% probability the verifier picks whichever one is wrong and catches the prover. So the prover can cheat with probability at most one half. That's why it's vaguely convincing; the prover still has a one-half chance of cheating, but we'll talk about how to fix that later.

Next, why does this hide A and B? We can look at what the verifier gets to see; again, this is totally informal.
He gets to see A1 and B1, but we argued those are completely independent of A and B. He gets to see H1, which is a hash of A1 and B1, so that obviously leaks nothing more. He gets to see C1, which is just A1 XOR B1; if A1 and B1 don't leak anything, then their XOR doesn't leak anything. He gets to see C2, which is the public value C XORed with C1, so again that shouldn't leak anything. And the final thing he gets to see is H2, the hash that involves A2 and B2; because we include this extra randomness value, we're assuming something strong about the hash function, namely that if we hash together with a large enough random value the verifier doesn't know, the output looks random. So that's roughly why, just looking at this proof, we learn nothing about what A and B are.

Okay, so that's the very high level. And if that reminds you of secret sharing and computing on shares, something like multi-party computation, that's because it is entirely inspired by multi-party computation. There's actually a line of work going back to Ishai, Kushilevitz, Ostrovsky, and Sahai, called MPC in the head, on how to use MPC protocols for doing, say, zero knowledge, and ZKBoo is essentially a very optimized, specialized protocol that makes those ideas efficient. What that means is that we can use a lot of the tricks that come out of the MPC literature. I just talked about XORs, but we can also support AND gates. I'm not going to talk about how; it's a bit more complicated than XOR, which is why I didn't use it as the example, but we can do it. We can support multiple gates, because proving something about a one-gate circuit is not very interesting; that's essentially just chaining many of these gates together, feeding the outputs of one gate into the inputs of the next. We need to decrease the cheating probability, because catching the bad guy with probability only one half is not so good; we can do that by repeating the protocol many times with different random values each time, and the chance that a cheater gets through all those iterations without being caught is negligible.

And finally, we can make this non-interactive using what's called the Fiat-Shamir heuristic. Right now the verifier first gets a message from the prover that has the hashes and the output shares, and then he chooses at random which share, one or two, to request from the prover. Instead of having the verifier choose, we choose using a hash function applied to all the messages the prover has sent so far. If the prover's chance of not getting caught by the verifier, once he's sent his first message, is very, very tiny, then even if the prover sits there and comes up with a first message, tries the hash function on it, sees whether he can respond to the resulting challenge, and keeps doing this over and over again, the chance that he'll ever find a valid first message for which he can answer the hash challenge is still very small. And I'm not really going to talk about it, but that's also where we include the message being signed: we include it in the hash that's used to compute which share the prover is going to reveal. Okay, so that's all I'm going to say about the under-the-hood bits of ZKBoo.
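To make the toy example concrete, here is a minimal, insecure Python sketch of the whole pipeline just described: two-way XOR sharing, hash commitments, and a Fiat-Shamir-style challenge derived by hashing the first message. This is purely illustrative and is not the real ZKB++/Picnic construction (which uses three shares, a full LowMC circuit, and carefully chosen parameters); the repetition count of 128 is an arbitrary placeholder.

```python
import os
import hashlib

def commit(a, b, rand):
    # H(A_i, B_i, R_i): the random string R_i hides the bits inside the hash.
    return hashlib.sha256(bytes([a, b]) + rand).digest()

def prove(A, B, C, reps=128):
    """Prove knowledge of bits A, B with A ^ B == C (toy, insecure)."""
    assert A ^ B == C
    proof = []
    for _ in range(reps):
        a1, b1 = os.urandom(1)[0] & 1, os.urandom(1)[0] & 1  # random shares
        a2, b2 = A ^ a1, B ^ b1                              # so a1^a2 = A, b1^b2 = B
        c1, c2 = a1 ^ b1, a2 ^ b2                            # evaluate the gate per share set
        r1, r2 = os.urandom(16), os.urandom(16)
        h1, h2 = commit(a1, b1, r1), commit(a2, b2, r2)
        # Fiat-Shamir: the challenge bit is a hash of the first message.
        # In a signature, the message being signed would be hashed in here too.
        e = hashlib.sha256(bytes([c1, c2]) + h1 + h2).digest()[0] & 1
        opened = (a1, b1, r1) if e == 0 else (a2, b2, r2)
        proof.append((c1, c2, h1, h2, opened))
    return proof

def verify(C, proof):
    for c1, c2, h1, h2, (a, b, r) in proof:
        e = hashlib.sha256(bytes([c1, c2]) + h1 + h2).digest()[0] & 1
        c_e, h_e = (c1, h1) if e == 0 else (c2, h2)
        if c1 ^ c2 != C:                 # the two output shares must XOR to C
            return False
        if a ^ b != c_e:                 # the opened branch was computed honestly
            return False
        if commit(a, b, r) != h_e:       # the commitment opens correctly
            return False
    return True

A, B = 1, 0
C = A ^ B
assert verify(C, prove(A, B, C))
```

One way to see why the repetition count matters: in this toy version a cheating prover survives each round with probability 1/2, and in the real three-share protocol with probability 2/3, so reaching 128-bit soundness takes roughly 128 / log2(3/2), about 219, parallel repetitions, which is why the number of rounds shows up directly in the signature size.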
So if that lost you, you can come back now, because everything else is going to be at a higher level. I was describing ZKBoo; what we actually use is something we call ZKB++, an optimized version where we go through and look for values that the verifier can safely recompute himself, or things that we can represent with a small seed and just have the verifier expand as necessary. Doing all that, we manage to reduce the signature size by more than a factor of two, and signature size is actually one of our bigger costs, so that makes a difference. And we can analyze security in the random oracle model. There's also been work looking at not just the random oracle model but the variant of it we might want in the presence of quantum adversaries, the quantum random oracle model, where the random-looking hash function can be evaluated on quantum superpositions of inputs. So we also have a variant, using a proof technique proposed by Unruh, that can be analyzed in the quantum random oracle model. It adds a little overhead, but actually not as much as just going back to the original ZKBoo; it's still in the same order of magnitude.

Okay, so that's the proof system we're going to use, in which the prover proves that he knows the secret key corresponding to the public key. The next question is what F we should choose. We need F to be hard to invert; obviously it's bad if you can get back the secret key just by looking at the public key. And the signature size is going to depend on the number of AND gates in the circuit, so we want that to be as small as possible. I've been talking about F as a hash function, but we can also use a block cipher: the public key would just be a random message together with an encryption of that message under the secret key. We looked at a whole bunch of different block ciphers and hash functions, standard ones like AES, SHA-2, SHA-3, and a bunch of others, and what we found worked best for us was something called LowMC, which gives significantly smaller circuits than everything else. LowMC is a new block cipher introduced at Eurocrypt 2015; there's now an updated version on ePrint. It uses a substitution-permutation network design, but it's designed to be used in crypto protocols, and the nice thing about it is that it's extremely parameterizable. It lets you, say, minimize the number of AND gates needed to compute the cipher, or minimize the AND depth of the circuit; it also allows some trade-offs between the number of AND gates and the number of XOR gates, and you can feed in different parameters for different block sizes, key sizes, and security levels. For our application we chose a particular set of LowMC parameters that works really well. One nice thing is that only one plaintext/ciphertext pair is ever revealed, and LowMC lets us take advantage of that.
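As a concrete illustration of that key-generation idea, here's a minimal sketch: the key pair is a random cipher key sk and pk = (m, E_sk(m)) for a random block m. Picnic instantiates E with LowMC; the AES call below (via the pycryptodome package) is only a stand-in so that the sketch runs.

```python
import os
from Crypto.Cipher import AES  # pip install pycryptodome; AES stands in for LowMC here

def keygen():
    sk = os.urandom(16)                        # secret key: a random cipher key
    m = os.urandom(16)                         # public random plaintext block
    c = AES.new(sk, AES.MODE_ECB).encrypt(m)   # a single block-cipher call
    return sk, (m, c)                          # pk = (plaintext, ciphertext)

sk, pk = keygen()
```

Inverting F here means recovering the key from a single known plaintext/ciphertext pair, and since an adversary only ever sees that one pair, the cipher's parameters can be tuned aggressively toward few AND gates, which is exactly what keeps the proofs, and hence the signatures, small.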
Okay, so that's it for our scheme; now I'm going to talk about performance. We looked at three different parameter levels, each one targeting, say, 128 bits of classical security and then half that against a quantum attacker. From that, we can derive LowMC parameters and the number of repetitions we should use, and throughout, for ZKB++, we use SHAKE as both the hash function and the pseudorandom number generator. First you can see the signature and key sizes in bytes. Fiat-Shamir is the version I said you can analyze in the random oracle model; the second line, the Unruh version, is the quantum random oracle model version, so slightly stronger security claims, I guess. You can see the signatures aren't super small, but they're still on the order of kilobytes, so not that bad. We also have a number of implementations; these numbers are on a standard good PC, but we also looked at an AMD processor and a less good PC, and signing and verification times are small numbers of milliseconds.

To get a sense of what it would be like to use this in the real world, we wanted to look at some of the places where signatures get used a lot, so in particular we looked at authentication for TLS. We added Picnic to the Open Quantum Safe library and used it with OpenSSL in a web server. We ran experiments where we created X.509 certificates that certify Picnic public keys, and then we looked at using those in TLS 1.2 connections: we retrieved a bunch of HTML files and measured how long it took. We wanted a TLS handshake that's entirely post-quantum secure, so for key exchange we looked at an LWE-based scheme, Frodo, and an SIDH scheme, and we combined those with either RSA or Picnic as the authentication method. The overheads we saw from moving from RSA to Picnic were only factors of about 1.4 to 1.7, so not too bad. And I should say this is only looking at client-side latency; there's a lot more experimentation you could do. Our servers were only running one connection at a time, so you could look at what happens with multiple simultaneous connections, more realistic web traffic, a number of things. But at least it gives a rough idea that this is not going to be a deal-breaker in terms of performance.

The other experiment we looked at was what happens if a CA wants to store their Picnic signing keys in an HSM, a hardware security module; I think we heard a little bit about these yesterday. It's special hardware for storing keys and performing crypto operations that guarantees the keys never leave the hardware. We found one that allows you to customize the crypto in it and experimented with that. We implemented key generation and signing, basically to simulate the CA generating their key and issuing signatures, and we also implemented a protocol where the HSM receives a certificate signing request for an RSA key pair and issues a certificate. These are the numbers we got. It's a non-optimized implementation, because it would have to be optimized for this particular hardware, but this includes the network time: forming the signing request, sending it, and getting the certificate back. So it's milliseconds, and not too bad.

Okay, so just to wrap up: what we're proposing is a new post-quantum signature scheme that's based only on symmetric primitives, a hash function and a hard-to-invert function. Concretely, we're suggesting SHAKE and LowMC.
And these are both things where, as far as we know, a quantum computer can't attack them any better than a classical computer, or at most only a small amount better. The scheme gives small keys, which is nice; signatures are somewhat larger, but still not too big, and signing and verification times are pretty moderate. The other nice thing to emphasize about our approach is that it's new: it adds a diversity of design options, and it also leaves a lot of opportunities for optimization. In particular, there's already been work that appeared at CCS this year, called Ligero, that looks at optimizing the way this kind of proof system works when your circuits are bigger, so we might use that if we wanted to use SHA-3 or SHA-2 instead of LowMC. I'll stop there, but if you want any more information, you can ask me, or all of our documentation and everything is up on this website.

We have time for one or two questions. If there are any, can you go to the mic?

Hi, very nice, I have to say a very neat idea. I was actually surprised that the quantum random oracle conversion has such a small overhead compared to the regular one; I think that's pretty cool. Can you give us an intuition for how you select the parameters for the Unruh transformation?

Oh, sure. Yeah, we were actually surprised too at first. The issue is that the way the Unruh transformation works, it essentially asks you to commit to responses for all possible challenges, or for many possible challenges. But the nice thing here is that we have only, in my example two, but in the actual scheme three, possible challenges, so we can just commit to all three of them in advance. So that's part of it. The other thing is that the responses actually have a lot of overlap, given the way the construction works, so we can collapse the overlapping responses and only open the parts you need for each proof. So just because of the way this proof system works, it ends up being a relatively small overhead.

Can you also tell us how it compares to SPHINCS, for example? Because I've seen you did some comparison with other signature schemes in the TLS implementation.

Oh yeah, so here we actually just compared with a basic RSA scheme. The issue is that we don't know the latest version of SPHINCS after the NIST competition and everything, so I don't want to comment too much. I would guess it's probably still a little bit slower, but they've done a lot of optimization on theirs, and this is still brand new, so there's a lot of room for optimization. But yeah, we haven't actually run the comparison with the latest versions.

Thank you very much.

Any other questions? Anything from upstairs? Okay, then let's thank the speaker again.