So, good afternoon everyone. I'm very glad to tell you today about work on post-quantum key agreement from both ideal and generic lattices. My name is Valeria Nikolaenko; I'm a PhD student at Stanford University. For the most part I will be focusing on two papers: NewHope, which builds key agreement from ideal lattices, and Frodo, which builds it on generic lattices. The people who worked on NewHope are Erdem Alkim, Léo Ducas, Thomas Pöppelmann, and Peter Schwabe, and the Frodo authors are Joppe Bos, Craig Costello, Léo Ducas, Ilya Mironov, Michael Naehrig, myself, Ananth Raghunathan, and Douglas Stebila. We will focus here on protecting currently deployed and widely used cryptography against quantum computers. So let's look at the main crypto primitives that are used in TLS. There is public-key cryptography, including key agreement and signatures, and examples of that would be RSA, Diffie-Hellman, and their elliptic-curve-based analogs. Then there is symmetric encryption, like AES-128, and hash functions, like SHA-256 and SHA-3. Unfortunately, quantum computers can break currently used public-key crypto, meaning there are efficient quantum algorithms that can break RSA, Diffie-Hellman, and the family, so those will need to be replaced, and in fact NIST is calling for proposals. Symmetric-key cryptography will need larger keys and hash functions will need longer outputs, and that's because of better quantum brute-force attacks. But we know that quantum computers cannot provide an exponential speedup for such attacks, so there is no reason to believe that symmetric-key crypto and hash functions will need to be replaced. We're good with the existing primitives, and that's why all effort needs to be concentrated on developing and deploying new public-key cryptography. So let's see how these primitives are used in TLS. Let me very quickly go over the protocol; it will be a very high-level overview.
So the first part is the handshake, where two parties establish a shared symmetric key through a couple of rounds of communication, such that any eavesdropper who is listening on this communication will have no idea about the established shared key. It goes as follows: the client initiates the connection with the hello message, the server replies with a certificate chain and the server's key exchange message, then the client sends his portion of the key exchange, and they finish the handshake. At this point, from the two key exchange messages, both the client and the server compute the symmetric shared key K, and onward they will encrypt the rest of the communication using the symmetric shared key and some fast symmetric cipher, like AES for example. Roughly, the protocol can be subdivided into three parts. The first part is where the server authenticates itself to the client, the second part is where the client and the server agree on the key, and the third part is where the traffic is encrypted using symmetric encryption. We don't worry about protecting the first part against quantum attacks, and that's because, even if in the future, when quantum computers are around, they will be able to impersonate the server, this will not compromise the authenticity of today's connections. To protect the last part, the symmetric encryption, we just need to double the size of the key, so that's easy. But the protocols we use for key agreement can be broken by a quantum computer, meaning there are efficient quantum algorithms that can recover the symmetric shared key and decrypt the rest of the traffic. Here is an important point to understand: if someone records these connections today and gets a quantum computer in the future, then they'll be able to go back and break into the connections that we do today. And that's why we need to start moving towards quantum-resistant crypto protocols now, to protect today's communications, perhaps the most valuable of them, from being decrypted by a quantum
computer in the future. So this is the motivation, and in this talk I will discuss works that propose new algorithms for key agreement. But first you may wonder: should we really expect that a quantum computer will ever be built? You'll hear more about that on Friday morning, I guess. It's still a highly debated question, but certainly there has been tremendous progress over the last decade, and there are many ways to argue that you should indeed care. Let me show you a few of them. Perhaps the most successful group comes from the University of California, Santa Barbara. They're supported by Google, and they predict a quantum computer capable of breaking today's keys in 15 years from today, for a budget of about one billion dollars. The next fact: in 2014, from the revelations of Edward Snowden, we learned that the NSA, for example, is working on building a quantum computer, and a year ago they put a notice on their website urging us to move towards quantum-secure cryptography and encouraging research in this direction. These two facts coming from the NSA might be somewhat worrisome. Overall, there is an ever-growing effort and a lot of investment in this area, so probably we need to prepare quantum-secure alternatives for our crypto, keeping in mind that standardization will likely take a long time. So now I'm ready to explain to you the new proposals for quantum-secure key exchanges. The most studied cryptographic assumption that is considered so far to be secure against quantum attacks is a lattice-based assumption called learning with errors (LWE). This assumption is very easy to explain. Essentially, it states that it's hard to find solutions to a system of linear equations if we add some small noise to the equations. Let me explain it in pictures. If I give you a square matrix A and a vector A times x, then it's easy to find x; from Algebra 101 we know that you can find x, for example, by Gaussian elimination. But now, instead of giving you the vector A times x, beforehand I add some small vector e to it.
So the elements of this vector e will be very small. Then suddenly it becomes really hard to find x, and it's not only hard to find x, it's also hard to tell whether the vector that I gave you was constructed this way or whether I picked it completely uniformly at random. And this is the assumption, that's it: for a random matrix A and small vectors x and e, given the matrix A, the vector A times x plus e is indistinguishable from random. This assumption was introduced by Regev in 2005, and you may notice here that all operations are done modulo q; this will be important later on. It's easy to show that the assumption can be generalized to the case when x is a matrix, so we can blow up the dimension of x a little bit. The Frodo key exchange is based on this problem, meaning that if you want to break into the Frodo key exchange, you'll have to break this assumption. You might have heard a bit about a related assumption called Ring-LWE. It's essentially the same as general LWE, except the matrices there, instead of being uniformly random, will have additional structure: namely, each row of the matrix will be a cyclic shift of the row above, except that when you wrap a value around, you put a negative sign in front of it. The NTRU cryptosystem, if you have heard of it, also builds on lattices with additional structure of a similar kind. This modified assumption was specifically introduced to give more efficient protocols in terms of communication and computation. You can see, for example, that we don't need to transfer the whole matrix to the other party; we can just transfer the first row, and then the other party can reconstruct the matrix from the first row by applying this wrapping rule. In fact, the computations can be done more efficiently as well.
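The wrapping rule is easy to see in code. Below is a minimal sketch of how a party could rebuild the whole matrix from just the first row; the function name and the toy dimension are my own, and NewHope's modulus 12289 is used purely for illustration.

```python
import numpy as np

def negacyclic_matrix(first_row, q):
    """Rebuild the n x n matrix from its first row: each row is the row
    above shifted right by one, and the element that wraps around to the
    front picks up a negative sign (mod q)."""
    n = len(first_row)
    A = np.zeros((n, n), dtype=np.int64)
    A[0] = first_row
    for i in range(1, n):
        A[i] = np.roll(A[i - 1], 1)   # cyclic shift right by one
        A[i, 0] = (-A[i, 0]) % q      # negate the wrapped-around element
    return A

q = 12289                             # NewHope's modulus, for illustration
A = negacyclic_matrix(np.array([1, 2, 3, 4]), q)
# Row 1 is [-4, 1, 2, 3] mod q, row 2 is [-3, -4, 1, 2] mod q, and so on.
```

So each party only ships n numbers instead of n squared, which is exactly the bandwidth saving the ring structure buys.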
Essentially, you can use the number-theoretic transform to do the matrix products. So you can think of Ring-LWE as exactly the same assumption as general LWE, except we only consider cyclic matrices with this funny wrapping rule; if you remove the word cyclic from the slide, you get the previous assumption, general LWE. The NewHope key agreement is based on this problem, which means that if you want to break the NewHope key agreement, you have to break into this problem: you have to build a distinguisher. I will now describe to you the key agreement based on these assumptions. Before that, let me briefly remind you how the Diffie-Hellman handshake works, because LWE-based handshakes will be very similar, and I want to put them side by side. In Diffie-Hellman, the server picks a random x and sends g to the x over to the client. The client chooses a random y and sends g to the y back to the server. Now both the server and the client can compute g to the xy; for example, the server can take the client's message, raise it to the secret power x, and get g to the xy, and from this symmetric shared value they can derive a symmetric key. The Diffie-Hellman assumption in this case has the following form: if you're given g, g to the x, and g to the y, which is exactly what the eavesdropper sees when he observes the communication, you can't distinguish g to the xy from random. In other words, the adversary cannot guess the shared key any better than by a random guess. The LWE-based key agreement is very similar, except instead of raising g to powers, we will be multiplying a matrix A by other matrices. The server will pick random small matrices X and E and send A times X plus E to the client, and remember that the LWE assumption guarantees that this message will look uniformly random, independent of X. The client on his side will pick random small matrices Y and E prime and send Y times A plus E prime over to the server. And notice that the client and the server are doing
multiplications on different sides of the matrix. Then the server can take the client's message, multiply it by X on the right, and get something that's close to Y times A times X, because when you multiply the noise matrix E prime by X, you multiply two small matrices, so you get something small. So this is an approximate key agreement. At this point both parties can discard the least significant bits, which will be different between them, and take the most significant bits, and this will be the shared key. But since we're working modulo q, we will need to do something a bit more sophisticated than just taking the most significant bits: namely, the client will have to send a few more bits over to the server to make sure that they both do the rounding correctly, and you can look in the papers for the full explanation. The important thing to keep in mind here is that the protocol has a non-zero probability of failure, when the parties arrive at different keys, in which case they need to abort and start all over again. When tuning the parameters, we make sure that this probability is very low. In fact, it can even be made zero, but we can significantly shrink the communication if we allow it to just be small, which is what we recommend. The assumption in this case has the following form: given the matrix A and the two messages of the handshake, the eavesdropper can only get the shared key by a random guess. In the security proofs from the papers, we show that if one can break this, then it can be used to break the LWE assumption.
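The approximate agreement fits in a few lines of NumPy. This is a toy sketch with made-up sizes and {-1, 0, 1} noise standing in for the real distributions, and it omits the reconciliation step entirely; it only checks the key property that the two sides' values differ by something small relative to q.

```python
import numpy as np

rng = np.random.default_rng(0)
n, q = 64, 2 ** 15            # toy sizes; the real protocols use n around 800

def small(shape):
    # entries in {-1, 0, 1} stand in for a narrow noise distribution
    return rng.integers(-1, 2, size=shape)

A = rng.integers(0, q, size=(n, n))     # public uniformly random matrix

# Server: sends B = A*X + E, which looks uniform under the LWE assumption.
X, E = small((n, 4)), small((n, 4))
B = (A @ X + E) % q

# Client: sends B' = Y*A + E', multiplying on the other side of A.
Y, Ep = small((4, n)), small((4, n))
Bp = (Y @ A + Ep) % q

K_server = (Bp @ X) % q                 # Y*A*X + E'*X
K_client = (Y @ B) % q                  # Y*A*X + Y*E

# The two values differ only by Y*E - E'*X, a product of small matrices,
# so both sides can agree on a key from the most significant bits
# (with the reconciliation step described in the papers).
diff = (K_server - K_client) % q
diff = np.minimum(diff, q - diff)       # centered magnitude of the gap
assert diff.max() < q // 4              # tiny compared with the modulus
```

With real parameters the gap stays far below q over 4 with overwhelming probability, which is what makes extracting matching key bits possible.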
So if you're using uniformly random matrices here, breaking this lets you break learning with errors, and if you're using cyclic matrices, breaking this lets you break ring learning with errors. So this is the handshake that you've just seen; I just want to emphasize a few important points. In practice, instead of fixing a matrix A in a new standard or elsewhere and having the same matrix for all or multiple key exchanges, we propose generating a fresh matrix for every key exchange. This will defend us against precomputation attacks on this matrix and will ensure that the matrix is not backdoored. Since the matrix A is rather big, instead of sending it over the network we propose generating it from a seed: the server will sample a uniformly random seed, apply a pseudorandom generator to generate his matrix A, and send the seed over to the client, and the client will do the same thing. For security, we then rely on the learning with errors assumption plus the security of the pseudorandom generator. The x and y secrets and the noise will come from a distribution that is a parameter of the system and is an approximation to a discrete Gaussian. A bit of history. The first lattice-based cryptosystem was NTRU, introduced in 1996 with no reductions from hard problems. Then in '97, the Ajtai-Dwork work gave the first public-key encryption system based on the hardness of finding short vectors in lattices. In 2005, Regev introduced the LWE assumption that I've just shown you, and five years later Lyubashevsky, Peikert, and Regev introduced the Ring-LWE version. Ding et al. gave a key agreement protocol from both problems, and Peikert later improved the ring version of the protocol. Bos et al. implemented this protocol, selected concrete parameters, integrated the protocol into OpenSSL, and measured the performance. Alkim et al. recently improved the parameters and distributions and got better performance, and the Frodo paper gave the first LWE-based key agreement, breaking the
widely accepted presumption that only Ring-LWE systems can be practical; and further improvements were made. Google recently introduced their NewHope cipher suite as an option in Chrome Canary, and this was a proof of concept: Google confirmed that we can quickly move to lattice-based cipher suites, should the need arise. A few words about the assumptions. The LWE assumption has been studied for more than a decade and is considered to be one of the best candidates for being both efficient and quantumly secure. The LWE assumption has a very nice property: it has what's called a worst-case to average-case reduction. What it means is that if there is an adversary that can break the assumption for a random matrix A, or in other words can break a key agreement on average, then this adversary can be used to break the assumption for any matrix. Meaning you don't need to worry about hard or easy matrices; you can just pick a matrix for your key exchange at random and you will be good. And this is not the case for factoring or discrete logarithms, where you need to be really careful when picking your primes, your finite groups, or your elliptic curves, and that's why they're fixed in standards. Also, this assumption is something different from the two that we use today, which are the hardness of factoring and the hardness of solving discrete logarithms, so potentially this could be a third leg for our crypto to stand on. As a side note, you might have heard that lattice-based assumptions give a very rich set of other crypto primitives, like for example fully homomorphic encryption, attribute-based encryption, and even obfuscation. So if we make lattice-based key agreements efficient, we can start looking into other fun stuff. Currently we have two candidate schemes for key agreement: Frodo, based on the LWE problem, which uses uniformly random matrices with no additional structure, and NewHope, which is based on the Ring-LWE problem and uses cyclic matrices. Currently there is no reason to believe that Ring-LWE
is weaker than LWE; we only know exponential-time algorithms for both problems. So far I didn't really talk about lattices, so here they come. It was shown that the LWE problem is as hard as GapSVP, and GapSVP asks to approximate the length of the shortest vector in an n-dimensional lattice within some polynomial approximation factor gamma. People have been working on this GapSVP problem for about a century, starting with the work of Gauss, Hermite, and Minkowski, so we have strong confidence that the LWE problem is hard. Unfortunately, we don't have any strong results of this kind for Ring-LWE. By analogy, we knew that Ring-LWE is as hard as Ideal-SVP, which asks to find a short vector in an ideal lattice, but recently this problem became nearly broken: Cramer, Ducas, and Wesolowski discovered a polynomial-time algorithm for a subexponential gamma. Though this result does not affect the security of the Ring-LWE assumption, our confidence in its connection to lattice problems was somewhat shaken. So perhaps you should be careful when using rings, and Frodo gives an alternative protocol that gets rid of the ring structure. What this means, and that's my own opinion, is that perhaps the Frodo key agreement is the more secure one, or more conservative in terms of security, but you'll see later that NewHope is a lot more efficient. So the most important question when designing real-world cryptosystems, of course, is how do we choose parameters? Our key agreement is essentially parameterized by three things: the modulus q, so all operations will be done modulo q, and the dimension n.
So our matrix is n by n. And the third parameter is the distribution; we'll use approximations to Gaussians, because Gaussians are used to show the reductions to the lattice problems. We can search the space of parameters to find the best set that minimizes communication and computation and satisfies the following three requirements. Based on state-of-the-art cryptanalysis, both classical and quantum attacks should run in time more than two to the 128, to have at least 128 bits of quantum security. The probability of failure should be small. And we should have enough material at the end to be able to extract a 256-bit shared key to be used in the symmetric encryption of the payload. Here on the slide you can see the parameters that we recommend for Frodo and NewHope. Let's look at the Frodo parameters first. The modulus q is two to the 15, which means each number very conveniently fits into a two-byte integer, and we don't need to worry about mod operations: we essentially get them for free. And n is on the order of 800. The failure probability is less than one in a billion; in practice, connections get dropped at a significantly higher rate due to various reasons, so we consider this failure probability to be unnoticeable. The quantum security for this set, based on the state of the art, is 130 bits. The parameters for Frodo were chosen using search scripts, because the Frodo protocol essentially allows for arbitrary q and n. The NewHope parameters are very similar in size: q is a 14-bit prime and n is about a thousand. To utilize number-theoretic-transform-based operations, the parameters for NewHope have to be picked in a very specific way: namely, q should be a prime, n should be a power of two, and they should satisfy some relation. So the parameters for NewHope were rather handcrafted. The failure probability is also very small, and the security for NewHope is 255 bits. Note that the security levels are different, and NewHope is more
conservative in this regard, but in fact we only need 128 bits of quantum security. A few more words about the noise distributions: how exactly do we approximate Gaussians? Bai et al., referenced at the bottom, showed how to substitute a Gaussian with another distribution in lattice-based proofs, using the Rényi divergence as the measure of security loss. NewHope used this technique to substitute a centered binomial distribution for the Gaussian, which is a lot simpler and more efficient to sample from. The distribution recommended by NewHope requires 32 random bits: you take these bits, add up one half, and subtract the sum of the other half. That's it, and it's naturally constant-time. In Frodo, through search scripts we found an optimal discrete distribution that minimizes the Rényi divergence, and hence the security loss, and requires few random bits; we represent the distribution in Frodo with a lookup table. This is, on the slide, the recommended distribution for Frodo: it needs only 12 random bits to draw a sample, and one needs to scan a table of 14 bytes to be constant-time. So both protocols have constant-time implementations, written in pure C. The implementations are based on the Open Quantum Safe project that Douglas, one of the Frodo authors, has been working on. The Open Quantum Safe project aggregates the existing quantum-resistant implementations and enables their fast comparison and prototyping within OpenSSL, so check it out. The protocols were benchmarked against RSA, ECDHE, and all available implementations of protocols that are claimed to be quantum-resistant. Both suites are integrated into OpenSSL, where new cipher suites are introduced, such as ones that do a pure LWE handshake and ones that combine the LWE handshake with Diffie-Hellman, and I'll explain why this is a good idea in just a second.
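The NewHope-style sampler mentioned above is almost a one-liner. A sketch, assuming the centered binomial construction just described (half of the 32 bits added, the other half subtracted); Python's random module stands in for a proper cryptographic RNG here, and the function name is my own.

```python
import random

def sample_centered_binomial():
    """Draw 32 uniform bits; the sample is (sum of the first 16) minus
    (sum of the last 16). Values lie in [-16, 16], approximate a narrow
    discrete Gaussian, and need no table lookups or secret-dependent
    branches, so the sampler is naturally constant-time."""
    bits = [random.getrandbits(1) for _ in range(32)]
    return sum(bits[:16]) - sum(bits[16:])

samples = [sample_centered_binomial() for _ in range(1000)]
assert all(-16 <= s <= 16 for s in samples)
```

In production code the 32 bits would come from a seeded expandable hash or stream cipher rather than random, but the add-and-subtract structure is the whole trick.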
So the protocols i'm Focusing on the highlighted in red We can compare compare against rsa and ecdhe which are the most widely used protocols for handshakes on the internet today Then follows the largest based implementations, which are entro new hope and frodo and two other alternatives, which is Sidh based this is a quantum secure alternative to divi helman and mccleese, which is a code based key exchange And you see that sidh is too slow to be competitive on although maybe michael will Update us on that and mccleese generates too much traffic. So let's concentrate on what's important And you may notice somewhat different security levels Since ecdhe is taking lead today, we compare ourselves primarily against it So keep in mind that the numbers for ecdhe shown here are for the unoptimized p256 curve Lately the default in open ssl was changed to use the optimized lat krasnovs implementation, which is faster So just keep this in mind You may see that the frodo protocol is about two times slower and generates about eight times more traffic than ecdhe If we take into account the certificate, which is for example three kilobytes for google.com The new hope and entro are doing Better new hope is actually faster than unoptimized ecdhe and generates just two times more traffic Both works demonstrate quite surprisingly that key agreements from lattices Are very competitive and you see that they take just milliseconds and the traffic is just in the order of tens of kilobytes So everything is very small and fast It's important to recognize that for the next few years of deployment of new cipher suites They're likely to be used in hybrid modes So we should suggest using both post quantum key agreement protocols in conjunction with traditional ones For example pair up ecdhe and new hope or ecdhe with frodo Then in order to break the protocol the adversary will need to break both assumptions separately The new hope hybrid was recently introduced in the test version of chrome 
and for a few months you could do key agreements using lattice-based suites. Of course, for hybrid cipher suites the difference between the lattice-based protocols becomes less pronounced, so perhaps standalone numbers do not reflect the actual cost of switching cipher suites that well. The number that really matters is how many more servers you will need to buy to serve your traffic, and to understand that we need to look at the throughput, the number of connections per second. So we ran a server under heavy load and measured throughput with different protocols, varying the size of the payload data from one byte to 100 kilobytes, and you can see the result on this graph. We compare three lattice cipher suites, NewHope, NTRU, and Frodo, in hybrid modes, and ECDHE. You can see that when serving very small pages the difference between Frodo and NewHope is 1.5x, and not surprisingly it drops when we increase the size of the page; it's already 1.2x for 100-kilobyte payloads. To summarize: there already exist key agreements from LWE and Ring-LWE that are ready to be used, or ready to be attacked, whichever you prefer. Everything has constant-time implementations, and both algorithms are integrated into OpenSSL. Both papers give very nice alternatives for noise sampling which are constant-time. Communication remains the main bottleneck in all these protocols; both papers propose various interesting tricks that allow shrinking the communication, and if you come up with some more tricks, you would be very welcome. All the code is open source, including the scripts for finding parameters, scripts for estimating the cost of known attacks, and scripts for choosing error distributions. The OpenSSL with the integrated ciphers is also available open source, so feel free to check it out, and you can find many more benchmarking numbers obtained using the OQS framework in the Frodo paper. So that's all I have. Thank you.

Great, thanks, Valeria. So we have time for some questions. So actually I'll ask a question.
So Hugo brought up this issue that since we're building security for 30 years from now, 128-bit security might not be enough. Do you have a sense of, like, if you had to go to, say, you know, 150-bit security, how would Frodo behave in those settings?

Well, our security estimates are very, very conservative, so it's likely that the same parameters will withstand attacks 50 years from today. But that's a valid point. Also, the question is how fast will the quantum computers be, and that's kind of hard to predict. But that's a very good question.

So you say 128-bit security, but really it's probably more than 128 bits?

Yeah, it's probably more.

I have kind of a related question. So this presentation in particular, but the whole session on quantum cryptography, seems to have this assumption, or be based on the notion, that we should already be doing something post-quantum in the real world. And by that I kind of mean I should immediately, today, basically go start, you know, trying to create my new super-secret system using the stuff that's in OpenSSL, for example, already, and all the other implementations that are open source. And yet I'm seeing that, you know, you mentioned the math in the paper that shook your faith in Ring-LWE. What do you recommend here? I mean, you know, should I actually just increase my key sizes for doing RSA and assume that we won't build a big enough quantum computer in the near future, as an immediate real-world thing, or is it actually realistic that I should start going and doing something here with these new protocols? Just your opinion, I know. I see Dan shaking his head in the wings.

Well, my opinion is that you should definitely start thinking about it, at least. Maybe not immediately use it, but start researching this direction, and then we'll see what the progress is, the current progress from the physicists who are building quantum computers. So, just things to think about.

I have to add to that.
So, I mean, the NIST process is just starting. I would say the advice is: do nothing, just wait for the NIST process to end, and then do whatever is agreed upon.

Right, plan to do nothing now, plan to do something later, and wait for the NIST process to run its course.

Great, I can tell my manager that; that works.

People love this advice, do nothing. People love this. Any other questions? All right, thanks, Valeria. This was really good.