The next talk is called "The Year in Post-Quantum Crypto", by Tanja Lange, who does her research in Eindhoven, and Daniel Bernstein, who does his research at the University of Chicago. They held the first ever conference on post-quantum cryptography in 2006, and they both contribute to the NaCl crypto library. Their talk is going to be a summary of the NIST post-quantum crypto competition; they're going to give an overview of problems in post-quantum cryptography and, hopefully, also how to integrate post-quantum crypto into existing software. Please welcome them with a huge round of applause.

All right, well, thank you. First of all, this title word "post-quantum cryptography": what it means is that we're talking about cryptography where we're designing the systems under the assumption that our attacker — not the user, just the attacker — has a large quantum computer. Also, to remind you from three years ago, when we showed you the attacker: this is the attacker we're talking about.

So, we've seen over the last few years — 2015, say, middle of August — a big announcement by the NSA saying: yes, actually, the world should care about post-quantum cryptography, we need more research. They finally woke up to it, and it had a nice roll-out effect in that pretty much every agency — we just highlight three of them here, the UK, the Netherlands and the NSA themselves — made statements about the urgency of rolling out post-quantum cryptography, because the normal cryptography that we're currently using — elliptic curves, RSA, Diffie–Hellman in finite fields — will be broken as soon as there is a quantum computer. And NIST, the National Institute of Standards and Technology — an institute in the US which works on standards — said: okay, we're going to call for a standardization project for post-quantum cryptography. So yes, it's a US institution, but in cryptography it has a mostly good history: they ran the competition which gave us AES, they ran the competition which gave us SHA-3, and they also, without a competition, gave us Dual EC. So: competitions with NIST, good thing; everything else, well, judge for yourself.

This sounds like a great story. It's also a little bit disappointing when you look at where it started: back in 2003 Dan here coined the term "post-quantum cryptography", and then we spent ten years running around saying the end is near, please invest in post-quantum cryptography, do something — and only ten years later the NSA says: oh my god, we have to do something, quantum computers are coming. So it's a little bit of a "told you so"; not a great story. But all right.

So what happened with this competition? After NIST said "hey everybody, send us proposals for post-quantum crypto to standardize", a lot of people did. They said more than 80 submissions came in, and some of them were incomplete — one of the requirements was to include software, and for whatever reasons they said some of the submissions were not complete — but they posted 69 complete submissions. The submission teams include about 260 people from around the world: academia, industry, maybe even some government people. There are lots and lots of names, and we're not going to go through what all of these things are — we only have 60 minutes — so no, we're not explaining BIG QUAKE, a code-based system, and each of the others one by one; we're going to give you some idea of the big picture of how well these things have held up. These were all posted by NIST on the 21st of December last year.
Some of you who saw our Lattice Hacks talk last year will remember that there was already some damage to this list by then: by the time of our talk on the 28th of December 2017, by the end of the year, there were eight submissions that had attacks at different levels of severity. I should explain the color coding here. The stuff that's in brown is "less security than claimed", which could mean, for instance, that they claimed something would take 2^128 operations to break and somebody says no, it's 2^100, or 2^80 — well, does that really matter? Or, in a different direction, somebody says this is a system where you can reuse the key any number of times — which is what we expect from normal crypto systems: you can publish your key and people can keep sending you messages, or you can sign many times under the same key — but sometimes people claimed they had that feature and turned out to be wrong; those systems were attackable, like HILA5 in this list. That does not necessarily kick them out of the competition, because NIST said you can have a system with one-time keys, which can be useful for some applications. The things in red are those where everything they proposed, all the concrete parameters, is broken. The underlines are those where there are attack scripts — a Python script, a Sage script, whatever it takes to demonstrate that, look, here's your secret key, or here's the decryption of something.

So that was the end of 2017. How about now? Well, let's extrapolate three days ahead; probably the situation is at least the following. By the 28th of December 2018, 22 submissions have attacks — about a third of the 69 submissions — and you can see that most of them, 13 out of the 22, are red, mostly with underlines. Some of this is from us, a lot from other people, and I think we did well early on in establishing that people should put out scripts to demonstrate that yes, you really can break these things. So again, the underlines are demonstrated attacks. Some of the submitters have withdrawn the broken submissions; some of them have not.

All right. When you look at this — as Dan said, we're not going to go through explaining all of those — let me categorize them. When we look at the ones which are not completely smashed and broken, we can put them into boxes: what is the underlying mathematical problem, what do we hope the attacker has to do in order to break the crypto system? There's one category using error-correcting codes, which are used for building either an encryption system or a signature scheme. From hash functions you can only build signature schemes. Isogeny-based crypto: in this competition we're only seeing an encryption system, and honestly all the isogeny signatures we have seen so far are pretty inefficient. Then there are lattices — that is something we talked about last year, so if you want a full-blown lattice explanation, go and watch last year's talk. And finally there is something using systems of equations in many variables, and we get signatures and encryption from that.

So that's one way of saying: these are good mathematical problems for the post-quantum world. It does not mean that everything in these boxes is secure. You can still design a system which somehow relates to the math problem but where the attacker can find a way around it — like one I'll get back to later, which is a code-based system; code-based is on the list, the first category up there, so it should be totally secure, and yet it's one of the red, underlined systems. So just being in one of the categories does not mean a system is secure, but it's at least a helpful way to box things up.
There are other ways of describing the situation we have now. As an example of this kind of categorization, sometimes people say: oh, lattice-based cryptography is the safe thing to do; all that red was people who were not using lattices, everything lattice-based is good, everything else is scary. But if you look at the lattice-based systems, it's not all black: there's some red stuff here. Compact-LWE — that one is broken, with a script; we're quite sure it's broken — and there are others with some damage: DRS, HILA5. So it's not that the lattice-based submissions have held up perfectly. And it's also not just isolated mistakes that people have made; there's ongoing research which is making better and better lattice attacks. For instance, some papers from last month and this month, from the three authors listed there, are talking about lattice-based systems being broken through decryption failures. Most of the lattice submissions have occasional decryption failures: once every 2^64 ciphertexts, maybe, you won't be able to decrypt. That might sound like no big deal — occasionally some user has that happen, the browser reconnects, whatever it takes; it's not a significant failure rate — except that, if the attacker is trying to decrypt a particular ciphertext, or even attack somebody's secret key, they can usually get that information out of watching the pattern of decryption failures, provided the failures happen often enough. And these papers are saying that, for two different reasons, decryption failures are actually happening more often, maybe much more often, than people claimed. So that's kind of scary.

All right, another explanation, which I of course like very much: I've been running a European project, PQCRYPTO — so just use everything in our portfolio, it's all good, right? And actually that's looking better than the "use everything lattice-based" argument: we have one system which is slightly scratched, but everything else is good. Right? Yeah, except, well, there's another explanation besides "whoever these PQCRYPTO project people are, they're obviously the right people, putting together a great portfolio". There's another possibility — I'm not saying this is right — but you could imagine that the cryptanalysts who are deciding what to do with that huge pile of 69 submissions thought: the people in this project have been doing this stuff for some number of years, so they're maybe not the top targets; maybe you should look at the other 49 submissions; maybe you should look at the submissions with the specification written in Microsoft Word, probably more likely to be broken. Maybe there are other ways to decide where to look. Did the Microsoft Word heuristic work? Yeah, coincidentally, that did work.

So the thing to keep in mind is that there's a huge pile of submissions — more than a million words in these 69 submission documents — and a word of English is usually a kind of imprecise thing; reviewing this is, I would say, more effort than reviewing a million lines of code. This is a lot of stuff, a lot of junk to go through. There's a real flood; there's too much for us — for all the people who care about attacking systems — to actually go through everything. So people are making a selection, and maybe they just haven't bothered looking at these PQCRYPTO submissions.
So if you want to actually have security review, it's really important to keep this denial-of-service effect in mind: the flood of submissions has to be small enough that it can be handled by the number of people who are looking at these submissions and evaluating the security. So this is a call for help: please join, please break — just don't break ours.

Now, one thing which is a little funny in this audience: at a normal academic conference we're talking to maybe 40 or 100 people, but we actually have a lot of people interested in post-quantum cryptography now. This was this year's conference on the topic; when Dan and I started this in 2006 we were looking at an audience of 40, and this is 350 people. They would probably fit in here, but for academics this is big; there's a huge interest right now. This was the conference combined with the NIST workshop, so people are looking. That's the good news: there's more than a denial of service going on.

I mentioned RaCoSS already as one which was broken but not withdrawn — broken, actually, in three different ways, where our last message to them was basically "abandon all hope, this is not going to work". But they keep on hoping. If I zoom in there, they are on their way to publishing new parameters, and when he was reading them out at the conference he was saying the keys and signatures would be maybe several dozens of terabytes. Well, there is some effect that we're most likely not going to break that so easily, because when you try to download several terabytes of data, the connection might not survive.

So NIST is certainly aware of the fact that there's this denial of service on security analysis, and one of the ways they tried to simplify the picture is that, after the conference, they put out this call saying: all right, if you have similar submissions, see if you can merge them, and then hopefully you get a combined submission that's easier for us to analyze than the two separate submissions. And they gave an interesting little guarantee: NIST will accept a merged submission into the second round if either of the submissions being merged would have been accepted. You can imagine kind of attacking this: if there's a strong submission and you have a weak submission, you like the strong one — they surely have to accept that — so you merge your weak submission into the strong one; if you can somehow convince the other team to do that, then your weak submission is also going to get into the second round. But NIST said, well, all right, you should only merge submissions that are similar, and the merged submission should be a combination of the two original submissions. So that sounds kind of reasonable.

For example, the first announcement of a merger — I don't think NIST said you have to announce publicly, but — HILA5 merged with Round2. Of course this was after the HILA5 attack, and part of the merger was fixing the issue exploited in that attack. They formed Round5, and they said that Round5, the result of the merge, is "a leading lattice-based candidate in terms of security, bandwidth and CPU performance". Three weeks later the security turned out to be kind of a problem: Mike Hamburg showed that there is a very strong reason to believe that decryption failures are much, much more likely than what they claimed, and they accepted the argument and said, yeah, oops.
As a result of that — and like I mentioned before, decryption failures are something that attackers can use to break security — it's not that a full attack was implemented, but it's pretty clear that the attack would work. This is also an interesting attack because the mergers were supposed to just take the best features of the two submissions being merged, but this was a mistake: the vulnerability that Hamburg exploited was a mistake that was in neither of the submissions being put together. So there's some process of break and fix and merge, making more mistakes, which get broken, and then fixed again. And what was the fix? They said: oh, here's a proposed fix, we're looking at the security proof adjustments, there will be a real Round5 proposal, the actual merge will come in the future. I think now they have a Round5a, a Round5b and a Round5c, where a is broken, b is questionable and c is still not defined. And what does a security proof even mean, if you previously had a security proof that you were adjusting, and the proof was for something that is not actually secure? Very strange.

More merger announcements: post-quantum RSA encryption and post-quantum RSA signatures merged to form post-quantum RSA, saying that it is a leading candidate "in terms of depth of security analysis, amount of network traffic, and flexibility". For people not familiar with post-quantum RSA: this means using RSA with gigabyte or terabyte keys, which is indeed leading in the amount of network traffic. We want the internet to have as much cryptography as possible — use more bandwidth! Remember, if you're measuring the amount of encrypted data on your network, this increases that amount.

More mergers, more mergers, more mergers. Some of these glue submissions together in a way that does not simplify the security analysis, but this last one is a good example, I would say, of a merge: NTRU-HRSS and NTRUEncrypt, two of the NTRU-based submissions. They actually put some thought into what they wanted to keep and what they wanted to throw away, so analyzing the merge is easier than analyzing both of the initial submissions.

After the November deadline for mergers, NIST said they will announce the second-round candidates; it'll probably be fewer than 30, with some hints that it'll be 25 or even 20 candidates — and maybe that's starting to get down to a small enough flood that we can start seriously analyzing what's left. They're going to announce that, I think, on exactly January 10th; they're scheduled to do that. And then, a week after this announcement, they said: well, the government might be shutting down, and in that case we're not allowed to do any work, so we're going to be completely silent in case of a shutdown. It's important to know that in the US government, during shutdowns, there's a definition of essential personnel, like the NSA, and non-essential personnel, like the people protecting us against the NSA, and only the essential personnel are allowed to work. You know what else is not allowed to do work? The back-end database for the NIST web pages. Which might sound a little bit weird — maybe they're paying Oracle for the back-end database and they have to turn off the payment to Oracle, I don't know what's going on — but if you look for the competition information, you can't find it on their web pages any more. We're not quite sure how long the shutdown is going to last. Of course, there are some people who say that this is not a problem, because we can figure out how to protect ourselves against quantum computers without this competition.
All right — now that we have the aliens already — is a quantum computer actually coming? Big question; we don't know. What we can monitor is progress in quantum computing, and just in mid-December there was a news item from IonQ, a small start-up company, announcing their largest-ever quantum computer, based on ion traps. All the other quantum computers we've seen so far of this size — like 40 to 70 qubits — were using superconducting qubits. So it is again a race between different technologies, but both of them are advancing, and there are more that are growing; it looks like it's coming. Whenever I see a picture like this I'm reminded of a nice joke from a colleague of mine, Steven Galbraith: can you distinguish... yep.

So with all this news coming up, the National Academy of Sciences in the US has been interviewing people for about the last year and a half — people in physics and engineering building quantum computers, people doing quantum algorithms, people doing quantum error-correcting codes — and putting all of this together into a report, which just came out, where they look at the progress and the prospects. The first of the key findings is the good news, saying: don't panic — we do not expect that anything is going to happen in the next 10 years which will threaten RSA-2048 or similar, where I assume they also mean elliptic curves and discrete logs in finite fields. So that's the good news. But they don't have just one key finding; it goes on to two, three... ten, and by the time they reach ten, I think this is panic. They say it takes forever to roll these things out, the hazard of such a machine is high enough, and the development of post-quantum cryptography is critical for minimizing the chance of a potential security and privacy disaster. These are strong words from the National Academy of Sciences.

So, okay: can we deploy post-quantum cryptography? Is it deployable? Well, some people would say we've already deployed it, but maybe that doesn't include the NIST submissions, so let's look at the deployability of the NIST submissions. The main thing that matters for deployment in most applications — the main problem for post-quantum cryptography — is the sizes. So here's a picture of the night sky over Leipzig. On the horizontal axis is the size of your public key, for a bunch of signature systems. Not all of the signature systems: for instance WalnutDSA, which was broken with a script in its first five versions, is missing, and post-quantum RSA is also omitted from this graph. (I'm one of the designers of post-quantum RSA, by the way. — I'm not. — It's the future of cryptography. — That was good.)

So what you can see here is, for example, this Gui submission: you can get the vertical axis, the signature size, down to just 32 or 35 bytes or so, but you need something like 400,000 bytes for your public key. There are three different dots for Gui; those are three different security levels, and maybe the different submissions here are not exactly comparable in their security levels — it should really be a three-dimensional graph if we measured everything properly by exactly how secure it is, which of course we're not quite sure about until there's been enough security analysis. You can see that various trade-offs are possible, none of which are down where we want to be, with things like under a hundred bytes for your public key and under a hundred bytes for your signature, which is what we're used to right now.
That's what elliptic-curve crypto gives us: signature sizes and public-key sizes which are both below 100 bytes, and that's something you can fit into your applications much more easily than, say, 100,000-byte public keys or 10,000-byte signatures. There are various trade-offs, and maybe your application can handle some of that, but there's nothing that's just really small, which is what we're used to right now.

Another, more complicated graph: this one is for encryption, showing more candidates — well, there are more encryption submissions; this is still not all of them, but a representative sample — and you can see that there are still no really great sizes here. The best in terms of sizes is SIKE, supersingular isogeny key encapsulation, which is a little less than 400 bytes for the public key and for the ciphertext, and then it starts getting bigger from there; on this graph you get things up to a megabyte or more. You can get a little below three or four hundred bytes; you can get down to a hundred or so bytes for the ciphertext as long as you are willing to accept a public key that's much, much bigger, with some of these code-based systems. And then, just to zoom in on some of the smaller ones, you can start seeing where the different candidates are: this is everything with public key and ciphertext below 1280 bytes, and again you see SIKE down there, a little below 400 bytes, and then some other possibilities. But what are the security levels of these things? Could all of these be broken? There are not actually that many of them; how many of these have actually been studied? It's kind of scary. And again, much bigger sizes than we're used to in cryptography. So yes, size does matter.

Google and Cloudflare, this year in April, were saying: well, we don't really know what the outcome of this competition is going to be, but we have some categories of different crypto systems, so let's just send dummy packets of data of the respective sizes and see what happens when we do this on the internet. This is Google and Cloudflare, so they're doing this in the Chrome browser, for connections that go through Cloudflare, so they could actually see where a connection came from, where it ended, whether it came back, and where it dropped. I mentioned SIKE: one category is the supersingular isogenies, those are just 400 bytes, and that was pretty much fine — when you look at the first column, there's a small latency increase; you also see the inaccuracy, there's a minus 0.2 in there, so the numbers are mostly correct. Then there are the lattices — this was the zoom-in I was showing — mostly in the category of structured lattices; those are around the MTU, so a thousand, twelve hundred, something bytes, and those mostly worked: some small increases, roughly under 20 percent. So this is also something where we feel like, yes, you could actually deploy this on the internet. Then a different category, still within lattices, are the unstructured lattices; those would come with 10 kilobytes of data for the public key, and there they just noticed that too many pages — including top pages on the Alexa 500, such as LinkedIn — were simply dropping. They tried, funnily enough, 9,999 bytes, and fewer pages dropped, so 10 kB was worse than 9,999 bytes, but even then LinkedIn was still dropping. So they decreased it to a third as a placeholder, measured with 3,300 bytes, and then scaled the results up by a factor of three. Those increases in latency were what they said is not acceptable.
So for the next experiments they were only looking at isogenies and structured lattices. The isogenies are also special: not just the smallest, but also the slowest — okay, not absolutely the slowest, but it's the only system where the speed is much more of an issue than the size — and so, despite Google having quite a few computers, they were saying we can't actually use isogenies, for speed reasons. Size would be awesome; speed, not so sure. It's also a relatively recent system, just from 2012, so maybe it's also a security question. So just now, in December, they announced that they're building a new experiment, and they announced which candidate they have chosen: NTRU-HRSS, which, as Dan just mentioned, was also one of the recent mergers, so this is one of the structured-lattice systems. The designers are Andreas Hülsing, Joost Rijneveld, Peter Schwabe and John Schanck — a great score for the Eindhoven team, a current professor and a former student, plus some collaborators. So they're now building a system which is a combined elliptic-curve and post-quantum scheme (CECPQ2) — that's the "combined EC" part, combined elliptic curve, plus post-quantum — and this is the second one of these experiments running. So this will come to some internet browser near you soon.

Another nice result — NIST is not the only thing out there: the IETF this year finished standardizing XMSS, which is a hash-based signature system. It's been in the making — down there you see the timeline — for three years; they're not really fast, but it's also the first of its kind. This was the first time that the IETF has published a post-quantum RFC — request for comments, but that's basically their standards — and there's a lot of boilerplate text which was developed in the process of making this standard, dealing with post-quantum, with quantum attacks and so on, and with how one should handle them. XMSS is interesting; it's not one of the new submissions, because it doesn't satisfy the normal thing that you learn in primary school about what a signature should do. With a signature you expect: you have a public key, you use the secret key to sign something, and then you have a signature. With XMSS you have a public key, you have a state, you get a message, you sign it, and you update your state — and if you ever forget to update your state, you will lose security. So it's something which is not as cool as a normal signature scheme, but there are also many applications where you actually know how many signatures you've made: if you're doing operating-system updates, you'd better know how often you got your key out of the drawer and used it. So it is not impossible to use, but it might not be exactly what you want for your application. The good thing about XMSS is still that, if you can count your signatures, the sizes are much smaller than those of the signature systems we were looking at before.
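To make that state hazard concrete, here is a minimal sketch of the kind of wrapper an application would need around a stateful scheme. This is an assumption-laden illustration, not the real XMSS API: xmss_sign_with_index is a placeholder passed in by the caller. The one thing that matters is that the state is advanced and safely written to disk before the signature ever leaves the function.

    # Minimal sketch of handling a stateful signature scheme safely.
    # "xmss_sign_with_index" is a placeholder for a real stateful
    # hash-based scheme; only the state handling is the point here.
    import json, os

    STATE_FILE = "xmss_state.json"

    def load_state():
        try:
            with open(STATE_FILE) as f:
                return json.load(f)          # {"next_index": int}
        except FileNotFoundError:
            return {"next_index": 0}

    def store_state(state):
        tmp = STATE_FILE + ".tmp"
        with open(tmp, "w") as f:
            json.dump(state, f)
            f.flush()
            os.fsync(f.fileno())             # make sure it hit the disk
        os.replace(tmp, STATE_FILE)          # atomic update

    def sign(message, secret_key, xmss_sign_with_index):
        state = load_state()
        index = state["next_index"]
        # Advance and persist the state BEFORE releasing the signature:
        # crashing here only wastes one one-time key; releasing a
        # signature and then reusing the same index destroys security.
        store_state({"next_index": index + 1})
        return xmss_sign_with_index(secret_key, index, message)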
Another size advance is something called Glowstick. I should explain the name: this starts from a lattice submission called Saber, which is one of the unbroken ones. Saber has a big version called FireSaber — a high security level, scaled up — and it also has a small version called LightSaber, and this Glowstick is the even smaller version: let's scale it down as far as we can such that it's not quite broken. There are various technical details, and it hasn't been broken in the months since it was proposed — six months or so. And it is interesting; it's a good challenge. It's nice to have these scaled-down problems so we can try different ways of attacking these things, and for people who like breaking stuff it's good to have the simpler systems to practice attacks on; it gives you some insight into what could work against the larger systems.

All right, so since we're coming to funny names — oh no, since we're coming to sizes: why do we still care about big sizes? I mean, people are scaling things down, Google says "we don't like big sizes", so why do people say post-quantum systems are bigger, and why do we still care? One of the reasons is — highlighting Classic McEliece here, which is our submission in this competition — that these systems have had a lot of analysis. Classic McEliece is based on the system from 1978, basically unchanged except for some changes where we can really prove that this is the same as that. It has one of the shorter ciphertexts, just about 200 bytes, so that's actually kind of tolerable on the internet — but a megabyte of key. Key generation is also pretty slow, but then the — well, it's nowadays called encapsulation and decapsulation, because all you want is your AES key, you don't want to actually encrypt the message, but basically the encryption and decryption — those are fine. So: a nice system, very good history and security, pretty fast, pretty small, except for the public keys. It's like: grandma, why do you have such big keys? Why are these keys so big? Well, one thing is that it's like a two-dimensional key: we have this big matrix there. What you see on the left is an identity matrix. The whole thing has about 7,000 columns, so it's pretty long, and the height is only n minus k, which is about 1,500, so it's really long and stretched. All of this part on your right-hand side is random, and you have to send that part. Of course everybody remembers what an identity matrix looks like, so you can forget about that part, but this part you have to send, because the encryption works by the sender deciding which of those roughly 7,000 columns to pick and then just XORing them up — and to do that, you need this big matrix. If you calculate 1547 times 5413 bits — that's the right-hand part of the matrix — you get to this one-megabyte size.

Now, what are the issues with having big keys? There's bandwidth, but honestly, when you download pictures, that's also megabytes, so it might not be so bad; if you're on a German train you will hate it, but elsewhere in the world, or on your mobile, it's fine. Google was saying they actually excluded some of this, so they didn't run the experiment with Classic McEliece — the largest they looked at was 10 kilobytes, and even then some connections dropped — and they said some schemes are simply too large to be viable within TLS, so they just said: we don't do this. But then again, you have a secure system; you can also design a secure protocol to go with it — we don't need to stick with TLS. There is a real problem with having a megabyte of key, though: if your protocol assumes that the client generates this one megabyte and then just throws it at the server, and the server has to accept one megabyte from every single client that throws a megabyte at it and then has to do something with it, that is really an invitation for a denial-of-service attack, because you're allocating memory on the server for doing these operations. The operations themselves are pretty fast — it's just XORing zeros and ones — but you have to allocate one megabyte for each client, and that is a problem. No matter what protocol we design, we have to deal with the possibility of denial-of-service attacks, or avoid them.
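To make concrete what the sender actually computes with that megabyte, here is a toy sketch — not Classic McEliece itself, no code structure and no decryption, just the "systematic matrix plus XOR of chosen columns" shape described above, with a random stand-in matrix. The parameters N = 6960 and T = 119 are the standard mceliece6960119 numbers, consistent with the 1547 × 5413 mentioned in the talk; everything else is illustrative.

    # Toy sketch of the "XOR up selected columns" encryption described above.
    # NOT Classic McEliece (random stand-in matrix, no Goppa codes, no
    # decryption); it only shows why the public key is ~1 MB while the
    # ciphertext stays small.
    import secrets

    N, K = 6960, 5413            # mceliece6960119-style sizes
    ROWS = N - K                 # 1547 rows
    T = 119                      # number of columns the sender picks

    # Public key: only the right-hand part of [I | T] has to be sent.
    print("public key bits:", ROWS * K, "=", ROWS * K // 8, "bytes")   # ~1 MB
    pub_columns = [secrets.randbits(ROWS) for _ in range(K)]           # one 1547-bit int per column

    def encapsulate(column_choices):
        """XOR the chosen columns of [I | T] into an (N-K)-bit ciphertext."""
        ct = 0
        for j in column_choices:
            if j < ROWS:                  # identity part: column j is just bit j
                ct ^= 1 << j
            else:                         # random part: stored column
                ct ^= pub_columns[j - ROWS]
        return ct

    choices = secrets.SystemRandom().sample(range(N), T)
    ciphertext = encapsulate(choices)
    print("ciphertext bits:", ROWS, "=", ROWS // 8, "bytes")           # ~200 bytes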
So, can servers avoid storing these big keys? I want to XOR all these columns. One of the first ideas, coming from the world of small devices, was: I'm a very small device, but I can pick those positions, and then, outside world, please be nice to me and spoon-feed me one column at a time. The small device memorizes 1,500 bits, gets the next column, XORs it in if that column was selected, keeps the intermediate state if it wasn't, and this works: at the end you output the normal ciphertext. But what we have there is a friendly environment, where we do not expect the outside world to do something nasty to us, and we also have some memory. Now put this on the real internet, where we don't want to have any state. We cannot memorize those 1,500 bits, because we don't know when the next column is going to come, so we would have to output the intermediate result and send it back to the client. That's not going to work: if you tell the client "this is my current result", then the client sends the next column, the server XORs it in or maybe doesn't, and sends the result back — anybody watching this traffic can see whether there was a change or not. So that is not a way of dealing with it.

So, what Dan and I have been busy with — and I put 2018 with a question mark, we still have like three days, right — is a system called McTiny: tiny because it's made for tiny web servers, where we assume that the web server is not allocating any per-client storage, any per-client state. We again work with spoon-feeding things, but we make sure that everything the server receives and sends out is encrypted and authenticated, and there is some machinery to avoid replay attacks, so that somebody can't just say: what if I change the column here or there? All of these things are encrypted, and we use the properties of doing these sums in pieces, by chunking up this big matrix into chewable pieces that are small enough to fit in one MTU and still leave some space for cookies. This is similar to the normal use of cookies: a cookie, encrypted to the server, is sent to the client — client, you handle the storage — and then the client sends the next piece along with the old cookie. The cookie is encrypted, but the key it's encrypted under is the same for all clients, so there's no per-client storage of any keys; it's a symmetric key, it's pretty small, and that's the one thing the server remembers. Then it gets a packet, recovers from the cookie part all the state — which columns to pick, what the intermediate result is — does some computation, and sends it back. The result is that we need several round trips, but there is absolutely no per-client state on the server. Of course you could say, well, there's still all that bandwidth, and what if you do have bandwidth problems — but some people say we're familiar with sending a lot of data around, so that's really not a big deal.
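To illustrate just the statelessness idea — not the actual McTiny wire format — here is a conceptual sketch: the server's only long-term secret is one small symmetric key, and the running partial result travels back and forth inside a cookie the client stores. In this toy the cookie is only HMAC-authenticated and the "was this column selected" information is passed in explicitly; the real protocol encrypts and authenticates the cookies and chunks and keeps the column choices hidden inside them.

    # Conceptual sketch of "no per-client state": the server keeps one
    # symmetric key, and the running partial result lives in a cookie
    # the client stores. Real McTiny encrypts and authenticates cookies
    # and chunks and fits everything into single-MTU packets; this toy
    # only authenticates the cookie.
    import hmac, hashlib, json

    SERVER_KEY = b"one small symmetric key, same for all clients"

    def _tag(blob):
        return hmac.new(SERVER_KEY, blob, hashlib.sha256).hexdigest()

    def make_cookie(state):
        blob = json.dumps(state)
        return {"state": blob, "tag": _tag(blob.encode())}

    def open_cookie(cookie):
        if not hmac.compare_digest(_tag(cookie["state"].encode()), cookie["tag"]):
            raise ValueError("forged or corrupted cookie")
        return json.loads(cookie["state"])

    def server_step(cookie, chunk_index, chunk_column, selected):
        """Process one spoon-fed column; all state comes from the cookie."""
        state = open_cookie(cookie) if cookie else {"acc": 0, "next": 0}
        if chunk_index != state["next"]:
            raise ValueError("chunk does not match cookie position")
        if selected:                        # in the real protocol this choice is
            state["acc"] ^= chunk_column    # hidden inside the encrypted cookie
        state["next"] += 1
        return make_cookie(state)           # client stores this and sends it back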
Something else that could interfere with deployment is patents. Tanja mentioned before that Classic McEliece does not have patents, but what if somebody says: I just don't want to handle the megabyte — or for whatever reason people want something smaller, or there are signature questions? Well, we have a lot of information about some systems which are patented, the 18 systems shown here, because NIST had, as one of the rules of the competition, that you had to deliver statements signed by every member of the submission team saying either "we do not have patents or patent applications on our submission", or "here are the patents and patent applications, here are their numbers". As a result, NIST, after checking they had a complete pile of statements, put them online, so now we know that these are exactly the 18 submissions where the submission teams claim patents on their own submissions — including, for example, Compact-LWE and DME and WalnutDSA, which were rapidly broken by scripts that are online, and RLCE, where half of the parameter sets are broken and the other half are not. So it's not that the patented submissions are somehow better than the rest; but for some reason people think they can make money off of patents, and maybe they're not actually so wrong, because you can't just throw away these 18 submissions and say that's the end of it.

The problem is that there are some patents which cover more submissions. NIST does not require the submitters to say which other submissions are covered by their patents; the submitters are only required to say something about their own submissions, and NIST has no way to say anything about whatever random patent trolls are out there that have not submitted anything — it can't impose any rules on them. Of course you can try doing patent searches, but you won't necessarily find things. For instance, this patent: nobody noticed it until it was revealed by a member of some submission teams. It was issued in 2015 — at the top there — which might make you think: oh, if something was published before 2015 it would be okay, and some submissions were published earlier. But what's important is the date down at the bottom here, which is the priority date of February 18th, 2010. If you look on Google Patents, one good thing is they put the priority date pretty far up, where you can easily see it. What this means is that, in order to be prior art for this patent, you have to check what exactly they filed in 2010 — they might have made later changes, but assuming the 2010 filing has all the same stuff as the patent, which it is possible to find out — anything published after 2010 is not prior art for this.

Now, what's really scary about this patent — and I hope that really soon I'm going to have an analysis online of which submissions are covered by which patents — is that of all the patents I've seen, this one is by far the scariest, because it covers a whole bunch of submissions: basically every submission using what's called the LPR cryptosystem, the Ring-LWE lattice-based cryptosystems. This is a very popular type of lattice-based cryptosystem, which was published by LPR — Lyubashevsky, Peikert and Regev — in May 2010, which is after this patent application was filed. Now, there was a talk in April which had the same stuff from LPR, and it seems like there might even have been a talk in January from LPR, but they didn't put the slides online — and then it starts getting into interesting questions of patent law. This looks like a very strong patent covering a whole lot of submissions. And there are more cases: there's a whole company called ISARA that specializes in planting patent landmines around things that other people are doing, sometimes on things that other people have already published, and then you get a court fight about it. This is going to be a problem; it's something we really have to watch out for — what is patented — and again, I hope to be done sometime soon with some patent analysis. Of course, some people would say that we don't have to worry about patents as long as we find something we can deploy, somebody tries deploying it, and they don't get sued. Not sure that's going to be deployed anytime soon... I mean, 35 out of 3000... okay. All right.
Funny names, I said. So, what do you see here — can anybody read phonetics? Yeah: "seaside", CSIDH. Now, CSIDH is what you really always wanted: CSIDH is an efficient post-quantum commutative group action. Did you know that you wanted a commutative group action? Actually, you did. What people keep asking me is: I'm using Diffie-Hellman these days, what can you give me in the post-quantum world? And then it depends a lot on how you define Diffie-Hellman. Some features that we've come to expect from Diffie-Hellman are: you publish a public key, I publish a public key, other people publish public keys, and we can reuse them — kind of nice. Also, we don't have to talk to each other: we can just look up the other's public key in the phone book, have a shared key, and start using that one; and if I send you something encrypted under our shared key — the combination of these public keys — then you will be able to decrypt it. There are some other nice features: you can blind things — you can take your g^a and compute in the exponent, multiply by some r, so you put some blinding factors in there — and there is no difference between whether I'm the initiator or the responder. We don't have this anywhere else in post-quantum cryptography; all the systems you see in these submissions make a difference between whether you are the sender or the responder. So this is the first efficient post-quantum, well, Diffie-Hellman-like thing, which in fancy math terms is called a commutative group action.

If you're a user, you don't want to know all the details — and I'm not going to give an entire talk about this, unless maybe next year. What is exposed to you is just one single finite-field element: there's some fixed prime that all the people in the system know, and everybody's public key is just one single field element. So Alice computes her field element, Bob computes his field element, they post these somewhere, and then sometime later — years later, maybe after quantum computers get built — they find each other, they compute their shared key — they combine the public keys into the shared secret key, sorry — and then they have the shared secret.

Now, a little bit of the math behind this. The A actually appears in some form there: this is one of the elliptic curves I've been talking about in — gosh, when was this, 2013 or so? no, 2014 at least — so there's a y² = x³ + Ax² + x, with this public key A in it, and the computation to go from one key to another key uses an isogeny, the same kind of isogeny that you heard about in SIKE before; it's a math object which just means you move from one elliptic curve to another elliptic curve. If somebody tells you to implement this, what you need to get going is: take this prime p and compute modulo p — additions, multiplications and divisions; out of those you build, for instance, the curve operations, and then some more operations which compute an isogeny, but all of those are just combined from these basic pieces. So there's nothing particularly scary behind it — except that, well, we came up with this thing in January 2018, at this lovely beach, it was great there, but please don't use it yet. Experiment with it all you want, but it has not had enough analysis.
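From the user's side, the Diffie-Hellman-like usage pattern described above looks roughly like this. The names csidh.keypair and csidh.action are placeholders, not the API of any particular released implementation; the point is only that both sides run the same operation, in either order, and land on the same shared secret.

    # Sketch of the Diffie-Hellman-like usage pattern described above.
    # "csidh.keypair()" / "csidh.action(secret, public)" are placeholder
    # names; public keys and the shared secret are each a single field
    # element mod the fixed prime p.

    def demo(csidh):
        alice_sk, alice_pk = csidh.keypair()    # pk: one element of F_p
        bob_sk,   bob_pk   = csidh.keypair()    # can be posted long-term, reused

        # Either party can go first; there is no initiator/responder asymmetry.
        shared_a = csidh.action(alice_sk, bob_pk)
        shared_b = csidh.action(bob_sk, alice_pk)
        assert shared_a == shared_b             # commutativity of the group action
        return shared_a                         # hash this before using it as a key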
But another reason why you might want this is security — key sizes and so on. So where are we? First of all, how many keys are there; how big do you have to choose this p? When you have fixed your prime p, say n bits, then there are about the square root of p, so 2^(n/2), such curves — that's the number of public keys. Then, similar to how the meet-in-the-middle attacks on the elliptic-curve discrete log work — basically a smart brute-force search — you get the square root of the number of keys as your security: if you have √p many keys, it takes about the fourth root of p time to find out what Alice's key is. So if you want 128-bit security, you have to choose your prime p with four times as many bits: a 512-bit prime. But this is a talk on post-quantum cryptography, so where do we stand there? Elliptic curves would be totally broken; nicely enough, for isogenies we don't have any complete break. There are some subexponential attacks, so it doesn't have the fully exponential security we would perhaps like to have — but on the other hand, with RSA and finite-field Diffie-Hellman we have been dealing with the growth of keys under subexponential attacks for a long time, so this is something we're familiar with; it doesn't kill things. But if you look at the literature, it's mostly asymptotic, so we — and also some others — have been looking into the details. I think our analysis, which we put online in August, is the most detailed one, looking into what actual security you get against somebody with a really, really big quantum computer.

Now, with elliptic curves you have hopefully also learned that you must always validate: you get a point, somebody says "this is my public key", and the first thing you do is check: is this thing on the curve, does it have the right order? Same thing for isogeny-based crypto. For CSIDH you have a very quick check: you check that this curve has the number of points you know it should have — you don't even need to do full point counting, you just take a point, do some scalar multiplications, and check. This is another thing that we've gotten totally used to doing, and another thing that is really, really, really hard for most post-quantum systems: with most post-quantum systems you have to add another proof — typically, when you encrypt to somebody's key and you're sending something which looks like a key, you reveal all the secrets in there, which is why you can't reuse it, or you have to do a big zero-knowledge proof to show that you actually generated the thing properly. With CSIDH, all you do is check that it's a valid curve, and you're done.

The sizes are also pretty neat: 32 bytes for the secret key, 64 bytes for the public key, so just twice as large as normal elliptic curves. That is really the bottom-left corner of Dan's graphs, where there was nothing. So CSIDH does fill a big gap — a big gap at small key sizes — with something which pre-quantum has at least 128-bit security, and post-quantum, well: what NIST was asking for is comparisons with AES-128, and then you look at how big the sizes are, how big the quantum computers would have to be, and so on, and we think that CSIDH-512, to the best of our knowledge, based on the latest analysis, will be as secure as that.

There's some code, written by Lorenz — this is on a Skylake, your mileage may vary. It's not a super-quick hack, but it's not deployment code either: this is not yet constant time. There are some others who have been working on constant-time versions; that makes it about three times slower. It is similar to SIKE in that it has really nice small keys but is somewhat slow; on the other hand, this is still very new — it's just from January — so we're still figuring out ways to make it faster, whereas SIKE has had a lot of work put into getting to the speed it has now. So there's hope this will get faster, and there's some hope it will remain unbroken until next year.
I'm not sure yet where I'd put my money, but at this moment I think CSIDH actually has a better chance than SIKE of surviving — but who knows; don't use it for anything yet. Speaking of broke: there are a lot of people investing in cryptocurrencies, and I think it's Nick Mathewson's fault, this whole quantum cyber blockchain idea — sometime earlier than 2016, I think — well, anyway, there are variations of it since then, like quantum AI blockchain; apparently you can buy the t-shirt.

We have about 10 minutes left, so I'd like to finish things off with some comments on software. This is looking back at 40 years of public-key cryptography — RSA was from '77 or so, the McEliece cryptosystem from '78 — and here are some schematics of what the software quality has been in cryptography, on a scale of good, bad, terrible and horrifying. 1978: I don't actually know, I haven't seen software from back then. By 1988 it was clear that the software quality was horrifying. By 1998 it had moved up to terrible, by 2008 it had moved up to bad, and by 2018 it has jumped back down to horrifying. Of course, a major contributor to this is all of these NIST submissions, which have code written by mathematicians who can barely implement anything and certainly don't produce good code quality. There are occasional submission teams that have people who can write code, but in general — well, for a good time, pick a random... yeah. Yeah, the Classic McEliece code is fine; there are other submissions where the code is fine; but if you just take a random submission and look at the code, it's... interesting.

If you would like to find out where the software is and download it: well, NIST doesn't work very well right now. I did look: archive.org. You search for "NIST round one" on DuckDuckGo, the top link is the NIST page, you take that URL and put it into archive.org — I tried a few of the submissions, and the zip files that NIST prepared with the specifications and the code are available from archive.org; I guess they got most or all of them. You can also look upstream: for more than half of the submissions there are upstream websites with newer code; NIST has not updated the code, but lots of submission teams have. Lots of the fastest code, and even some improved code, is available in our SUPERCOP benchmarking framework — the System for Unified Performance Evaluation Related to Cryptographic Operations and Primitives, bench.cr.yp.to — and this has about 170 primitives from 40 of the 69 submissions. We might have accidentally left out all of the patented submissions... oh well. The SUPERCOP policy is: anybody who sends us code to put in, we'll benchmark it; it doesn't have to be unpatented, it doesn't have to be secure — we benchmark MD5, we benchmark RSA-512. Anyway, there are 40 submissions where code is in there, either from other people or from me painfully going through getting code to actually work. The primitives are, for instance, RSA-512 and RSA-1024 and RSA-2048: they're all RSA, but they're different primitives, different mathematical functions with different security levels; in these submissions there are typically three different security levels, sometimes more choices, sometimes fewer. And then a lot of the primitives have multiple implementations, like reference code and optimized code for different platforms. So, okay, a lot of those are collected in this benchmarking framework, all with the same API.
libpqcrypto — I think I have a few minutes to say a little more about this. libpqcrypto is focused on having an API which is suitable for cryptographic deployment in the future: if you imagine that the implementation quality of the underlying crypto is dramatically improved, at least that interface layer is supposed to be something we will be able to keep using. Some more examples of things out there: pqm4 is a library optimized for small ARM microcontrollers, the ARM Cortex-M4; pqhw is for FPGAs; and this last one, Open Quantum Safe — they don't have as many primitives, maybe, as libpqcrypto or SUPERCOP, but what's cool about that project is they've got working integrations of all of these into OpenSSL and OpenSSH. So if you're in, say, the TLS world, that's clearly the way to use these post-quantum proposals — at least quite a few of them — inside TLS.

Okay, let me look a little bit at libpqcrypto and then we'll finish this off. There are lots of cryptographic libraries which give you a nice simple API for hashing: they'll have some simple function like SHA256 which takes a message — okay, in C you have to pass a pointer to the beginning of the message plus the length of the message — and gives you back some hash, a 256-bit, 32-byte hash. In a higher-level language you of course say something like h = sha256(m), and m knows its own length, but in C it looks like you have h and m and the length of m as arguments. Why not do this for all cryptographic functions? Somehow it's really weird: lots of cryptographic libraries have a nice simple interface for hashing, and then if you want to do something like public-key signatures it's: well, okay, first we're going to find the factory which is producing the keys, and the generator method for the key, blah blah blah. What libpqcrypto does is simply say: you sign something — with whichever signature scheme; you have to tell it which — I'm going to put the signed message somewhere, and the length of the signed message is an output; the message you're signing and its length are inputs, and your secret key is an input. It takes everything in wire format and produces everything in wire format; you don't have to have conversion functions, input/output serializations, et cetera. This is actually an API that goes back a while: we've been doing SUPERCOP for many years, and SUPERCOP, the NaCl library, libsodium et cetera are all using the same API. And this is something where people have actually measured the impact on usability of cryptographic libraries depending on the API they provide, so we're pretty confident about the benefits of having a nice simple way to use crypto. NIST looked at this and said, okay, people should submit to the post-quantum competition using this API — but they didn't have test code that people could use to make sure they were following the rules, they didn't require that everybody pass any particular set of tests, and they accepted submissions which didn't work, for example, in SUPERCOP. So, well, that's why I had to do a bunch of work to integrate a bunch of submissions into SUPERCOP. But it's been sufficiently close to everybody using this API that there has been a lot of code shared between these different projects; Open Quantum Safe also starts from the same API and then provides higher-level integrations into OpenSSL and OpenSSH. Okay, so there are a bunch of different signature systems and a bunch of different encryption systems in libpqcrypto.
Here's an example of what the higher-level API looks like in Python, if you want to use libpqcrypto and sign a message. First somebody has to generate a public key and a secret key, using a signature system from the libpqcrypto library. Here's one of the signature systems: SPHINCS, a stateless hash-based signature system — you don't have to record anything when you sign a message — and then 128 is the security level, 2 to the 128, using the SHA-256 hash. You just have to know this name, and then you say: give me a key pair; sign a message using a secret key; open a message using a public key. Note that this is not "you get a signature and then you do verify of a message and a signature" — this is another little API detail designed to protect people against screwing up. There are lots of applications which verify signatures and then, when the verification fails — nobody has ever tested it, and the verification failure is ignored. What actually works to protect application programmers is an interface where the signed message is one bundle: it goes into the opening — opening a signed message and producing a message — and the cryptographic library does not produce a message as output if the signature was invalid. So the signed message is handled by the cryptographic library, which produces a message only if the signature is valid. An exception is also raised, but even if you ignore the exception in Python, or if you're using a lower-level language without exceptions, you simply aren't given back a message. This is the kind of little thought that goes into the API. Maybe a bigger example in Python: this is the whole thing of using the library — generating a key, signing some random message, and opening the message.
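Roughly, that bigger example looks like the sketch below. It assumes libpqcrypto's Python bindings are installed; the exact primitive name (sphincsf128sha256 here) is an assumption and may differ from what your copy of the library exports — check the names it actually provides.

    # Key generation, signing, and opening with libpqcrypto's Python
    # bindings, as described above. The primitive name is an assumption.
    import os
    import pqcrypto

    sig = pqcrypto.sign.sphincsf128sha256   # SPHINCS, 2^128 security, SHA-256

    pk, sk = sig.keypair()                  # everything is plain byte strings
    m = os.urandom(32)                      # some random message to sign

    sm = sig.sign(m, sk)                    # signed message: one bundle
    recovered = sig.open(sm, pk)            # raises on an invalid signature;
                                            # no message comes back on failure
    assert recovered == m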
Okay, what's coming up in libpqcrypto? First of all, one of the big problems with code quality is that there's lots of exposure to timing attacks — I saw a great talk earlier today about Spectre, and there are lots and lots of these attacks; part of fixing them is fixing software, along with the many hardware fixes we're going to have to do. There's been some work on some implementations to fix this, but much more is required. We also need a lot more work on correctness: lots and lots of the code doesn't even pass AddressSanitizer — and I don't want to tell you how much pain it was to get code working under AddressSanitizer. Anybody writing code professionally is going to be using these automatic tests as they write the code, and this is something that just doesn't happen when you ask a bunch of mathematicians to write code. Formal verification will do much more than testing, much more than, say, AddressSanitizer does, and much more than even an expert auditor will do: formal verification is going to guarantee that your code is doing what it's supposed to do for every possible input. I used to be very skeptical about this, because it seemed so painful to do for any realistic code, but I've started getting much more enthusiastic, because the tools are getting much, much better. One example of something I did was a sorting verification, where some really fast sorting code is completely verified to work correctly — the machine code is verified, so you compile it, and even if there are compiler bugs, the machine code is what's verified; the verification doesn't rely on some compiler being correct. This was using the angr toolkit; also — I don't know if there are any Trail of Bits people here — Manticore, I understand, has similar features. I use angr, but there are really cool tools out there for doing symbolic execution and, as part of that, formal verification. Speed is important, and we're trying to get the code volume down: there's lots of duplication, and we need more internal libraries to get post-quantum crypto onto a smaller, easier-to-review code base. And finally, hopefully, at the end of all this we will be able to throw away as many primitives as possible and focus on a small number of things where we can say: we've really seriously reviewed these, we've reviewed the designs, we've reviewed the implementations, and we're confident that these things are secure. That's it — thank you for your attention.

If you would like to leave at this point, please do that very quietly; we'll have a short round of Q&A. Signal angel, your first question.

Of all the submissions that are code-based, are there any other ones that use smaller keys?

Smaller keys, you said — yeah. Of all the code-based cryptography, there are two submissions, Classic McEliece — which I highlighted because it's ours — and NTS-KEM, which have these gigantic keys; both of those are using Goppa codes, which is what has received the most analysis so far. But on this gigantic list — yes, the one Dan is showing here — several of those are actually code-based. BIG QUAKE, for instance, down there, is a code-based system; then LAKE — BIKE is one, LEDA is one down there. So LAKE would fit what you're asking, with very small keys and ciphertexts; the downside is that it uses far less well-studied codes, so we need to see how that develops.

Thank you. For the people in the room: please try to limit your questions to a single sentence. Microphone number three, your question.

Okay, how exactly do you define post-quantum crypto? I mean, you have Shor's algorithm, you have the other algorithms, but do you just say it's secure against factoring and discrete logarithms, or do you also take into account optimization problems and such?

So, the definition is: we're trying to protect against any attacker who has a big quantum computer, and we have a rough understanding of what quantum computers can do, because they're limited by the laws of quantum physics. Those tell us that, okay, if you can build a computer that supports what are called Toffoli gates and Hadamard gates, then — well, it's not completely proven, but it's very plausible — you can simulate... the Matrix, at that point. Yes, that's the universal model; you have a universal quantum computer at that point. The problem is: even if we say that, by believing that quantum physics is everything we can do in the universe, we know computations are built out of Hadamard gates and Toffolis, that doesn't tell you what kinds of algorithms you can put together. And there's this big problem, which has always been a problem for cryptography, of trying to imagine what all possible algorithms are — and sometimes people miss something. So if somebody ever tells you a system is provably secure, that there can't possibly be an algorithm faster than this to break the system: no, there are no guarantees, and lots of people have been overconfident and gotten burned because there actually was a faster algorithm. There has been a lot of work on people trying to figure out good algorithms for quantum computers — for instance the subexponential attacks that Tanja was mentioning against CSIDH, and that's something where there's a long history to those attacks, starting with Kuperberg's algorithm.
beyond Shor's algorithm and Grover's algorithm, and it's really important to look more at what sort of quantum algorithms could attack cryptographic systems. There's been some initial work, but there definitely needs to be more. I mean, our attackers are allowed to do whatever they want; that's why I'm showing this attacker — the attacker is not playing by the rules, and the only thing we know is that our attacker has a quantum computer. Okay. All right, signal angel, your next question. Question from the internet: size does matter, but what about the performance of post-quantum cryptography compared to classical algorithms on embedded or FPGA devices, for example for firmware signing or communication encryption? Okay, so on the big list — I'm quickly firing it up — so pqm4, that's using an ARM Cortex-M4, so that's a rather small device. They did not implement all algorithms, and for some of them they said it is very cumbersome to do with the big keys, so yes, it's more of an issue. I mean, we're spoiled with elliptic curves, just having 256 bits there, and all of these systems are larger than that. CSIDH is the closest you get, but then it has a big computation. But there is effort, and the smaller and more fitting systems have been implemented; hopefully we'll get better. Thanks. Microphone number four, your question. You said when Google did some tests, they said it's just too slow, they cannot really use it. Would the solution be acceleration units like those used for AES in CPUs? So Google was excluding the use of the supersingular isogenies based on speed — I assume that's what you mean, rather than the big ones with the bandwidth. I don't know all the details of it; my assumption is it was factoring in also the security, like how much time people have spent analyzing it, which made them more comfortable with the structured lattices than the supersingular isogenies. You can speed things up if you have a big engine which would be manufactured to do the finite-field arithmetic, but that is much, much bigger than, say, an AES engine. Maybe just an extra comment: I think that the choice they made of NTRU-HRSS is really an excellent choice. It's something which is small enough to fit into most applications — I mean a thousand bytes or so; it's much bigger than elliptic-curve crypto, but compared to all the data we tend to send around it usually fits. Unless you've got some really small communication happening, you usually can fit a kilobyte or so, which is the NTRU-HRSS size, and it's something which has got some history of study. I would be the last person to say that lattices are definitely secure, and actually our NTRU Prime submission is worried about ways that something like NTRU-HRSS could maybe be broken, but there's no evidence of any problems, and NTRU has held up for about 20 years of study without being broken. It's also reasonably fast, so it's a reasonable compromise between the different constraints, trying to have something secure and not ridiculously big, and, well, if it gets broken then we're in trouble, but hopefully it's okay. Thanks. Signal angel, the final question please. The final question: can CSIDH run on a hardware accelerator made for regular elliptic curves, or is the handling of isogenies more problematic? All right, so it depends on what your hardware accelerator has. If it's one with fairly generic elliptic-curve arithmetic, you can probably use it. We're getting some speed from using elliptic curves not in Weierstrass form but in Montgomery form, so you probably would want to modify the accelerator they're currently using to fit this better.
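To make the Montgomery-form arithmetic in this answer a bit more concrete, here is a small sketch, not from the talk, of x-only Montgomery-ladder scalar multiplication in Python. It uses Curve25519's well-known prime and curve coefficient purely as a familiar example (CSIDH has its own prime and curve family), and being plain Python it is of course not constant-time, so it's only an illustration of the field arithmetic involved.

```python
# Toy x-only Montgomery ladder, illustrating the scalar multiplication and
# finite-field arithmetic discussed above. Parameters are Curve25519's,
# used purely as an example; not constant-time, not for production.

p   = 2**255 - 19          # field prime (Curve25519, for illustration only)
A   = 486662               # Montgomery coefficient: y^2 = x^3 + A*x^2 + x
a24 = (A + 2) // 4

def xdbl(P):
    """Double a point given in x-only projective coordinates (X:Z)."""
    X1, Z1 = P
    a = (X1 + Z1) % p; aa = a * a % p
    b = (X1 - Z1) % p; bb = b * b % p
    e = (aa - bb) % p
    return (aa * bb % p, e * (bb + a24 * e) % p)

def xadd(P, Q, xdiff):
    """Differential addition: x(P+Q) from x(P), x(Q) and affine x(P-Q)."""
    X1, Z1 = P
    X2, Z2 = Q
    t0 = (X1 - Z1) * (X2 + Z2) % p
    t1 = (X1 + Z1) * (X2 - Z2) % p
    return ((t0 + t1) ** 2 % p, xdiff * (t0 - t1) ** 2 % p)

def ladder(k, x):
    """Compute the x-coordinate of k*P from the x-coordinate of P."""
    R0, R1 = (1, 0), (x % p, 1)          # (1:0) is the point at infinity
    for i in reversed(range(k.bit_length())):
        if (k >> i) & 1:
            R0, R1 = xadd(R0, R1, x), xdbl(R1)
        else:
            R0, R1 = xdbl(R0), xadd(R0, R1, x)
    X, Z = R0
    return X * pow(Z, p - 2, p) % p      # back to affine; undefined if Z == 0

print(hex(ladder(7, 9)))                 # 7 * basepoint, just as a smoke test
```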
Also, most systems are optimized for 256-bit elliptic curves, or 384 or 512 bits; we're a little bit outside that, but most of the operations would look just the same. Most of the time is spent doing a big scalar multiplication, and then we have some operations for these isogenies, but they are fairly similar: if you have the field arithmetic built up, you can just put these together and have an isogeny computation as well. So yes, it can get faster. As I said, this is from January; we're still working on the security analysis, so don't build any hardware at this moment quite yet. Thank you so much; please give them another round of applause for the talk.