So welcome to the second session on symmetric cryptography. We have three talks. The first talk, entitled "Too Much Crypto", will be given by Jean-Philippe Aumasson. Thank you, Bart. Good morning, everyone. Good morning to the people on the live stream and the internet. Very happy to be here for the first time. I don't know if you've seen the paper; some people called this talk a bit controversial. To me, IGP is controversial. I'd like to reassure you, there's nothing controversial here. Don't worry, I think you're going to love it. So, very quickly, because I don't have much time: there will be three parts. First, trying to explain the problem I'm talking about; second, trying to understand the problem; and third, trying to fix the problem. In the first part, I'm talking about symmetric crypto, not public-key crypto. So only symmetric here: no RSA, no elliptic curves. You know AES, you probably know BLAKE2, you certainly know SHA, and you probably know SHA-3. So you know the term security margin, which is the difference between the number of rounds broken and the total number of rounds. Here I just try to visualize the difference between the total number of rounds, 100%, and the rounds broken, where the definition of broken is any attack that would, on paper, be faster than the generic attack, excluding things like related-key distinguishers, things that are not really attacks. I can already see that some people disagree about this, but that's fine. So now, what about practical attacks, where you can actually mount an attack that would meaningfully compromise the security of a system, where you can actually run the attack before the human species goes extinct? If you look at AES, it's a bit less: before it was around 70%, now it's around 50%. BLAKE2 is a bit lower. SHA-3 is here. But you see the bars are not at the same level; there's a big difference.
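The security margin defined above is just a ratio; a minimal sketch of the computation, with example round counts (the specific numbers here are illustrative assumptions, not the talk's exact figures):

```python
# Security margin as the fraction of rounds beyond the best known attack.
def security_margin(total_rounds: int, rounds_broken: int) -> float:
    """Fraction of rounds not reached by any (even paper-only) attack."""
    return (total_rounds - rounds_broken) / total_rounds

# Example: an academic attack reaching 7 of a 10-round cipher's rounds
# leaves a 30% margin.
print(f"{security_margin(10, 7):.0%}")  # prints "30%"
```

Comparing these fractions across primitives is exactly the bar chart the talk describes: equal total rounds (100%) but very different broken fractions.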
So the upshot here is that security margins are not consistent across primitives, which is quite obvious, but it's not how it ought to be. So we have to ask ourselves: why is it like this, and can we do something about it? Second point, let's look at the attacks. AES, five rounds out of 10 rounds at least in total. Here it looks amazing: we start from 2^45, then 2^35, then 2^25, then 2^16. So it's not a 10x speedup, it's a 1,000x speedup. Cryptographers are brilliant: every time they make an attack, it's 1,000 times faster on five rounds. Now you're wondering what happens next. What about six rounds? Completely different picture. Six rounds is just one more round, but it's much, much harder to attack, and presumably much stronger. We start in '98 with 2^72 operations, whatever an operation is, then 2^44 two years later. Then I don't know what people were doing; they cared about other stuff. And in 2018, right, the attack doesn't get better, it gets a bit worse. What did Bruce Schneier say? Attacks always get better, never worse, right? What got better here is the data complexity: instead of 2^35, it's only 2^26. But the attack is a little bit slower. Is it broken? Do you think the cipher is broken when you have this kind of attack? So here it's broken, here it's also broken; it depends on what broken means for you. So what about seven rounds? Remember, AES is 10 rounds; we look at seven rounds now. 2^155 time. Data is the amount of data you need to collect, the number of chosen ciphertexts or chosen plaintexts, or both. And memory is how much stuff you need to store. Today people try to pay attention to count memory in bytes; sometimes people count memory in, I don't know, plaintexts or something else, but that's just an order of magnitude. So 2^155, then in 2013, 2^99, but 2^97 here. So you're like, OK, 99 is obviously a lower number than 155.
We all agree on this. But is this attack more efficient than this one? Raise your hand if you think this one is more efficient. OK, so a few creative persons. In 2018, the exponent here is much higher, but the exponents there are lower. So is this attack better than that one? It's hard to say, because it's not just comparing numbers. It's easy to compare two numbers, but when you have several different numbers, how much weight do you give to each of them? Well, anyway, let's go to another cipher, ChaCha: 2^248 and 2^238. But this was just an approximation, not a really good analysis; I can say that because I did it myself. And here you see a 1,000x improvement in 2016. It's not really a practical attack, and it's not on the full 20 rounds. So the upshot here is that attacks do get better, but they don't get better spectacularly on these primitives. Of course, if I publish a cipher tomorrow without really knowing what I'm doing, someone will find an attack in two days, someone else will find a better attack one week from now, and maybe one month from now the attack will get better still. But here we're talking about established primitives like AES, SHA-3, and ChaCha. And more generally, my point in this paper and talk is not that we should take more risk and reduce security margins. My point is that in 2020 we have a very mature state of research in symmetric cryptanalysis, particularly regarding these four primitives I mentioned. All the attacks you see in the thousands of papers that are published are mostly variants of differential or linear cryptanalysis. You have thousands of papers, thousands of PhDs, a lot of conferences, and, I mean, look, even DES doesn't go away. It's not really broken; the blocks are too short, the keys are too short for today, there is some structure that might be exploited, but I would still be fine encrypting my data with triple DES. Now let's go back to AES.
You look at this number: what does it actually mean? Maybe for crypto people, we compare numbers, we play this game: we make a new attack, we see if it's faster or not. But concretely: 2^61 is, according to the next paper and the next talk, approximately the cost of the chosen-prefix collision attack on SHA-1. Maybe you heard about blockchain, Bitcoin, amazing technology: 2^76 is roughly the work done every time a Bitcoin block is mined, every 10 minutes if I'm not mistaken. Just orders of magnitude: 2^88 is about the number of nanoseconds since the Big Bang, 14 billion years ago. And there's a relatively accurate order of magnitude for the amount of information that can be stored in the volume of the Earth, taking into consideration the holographic principle and all the degrees of freedom. So if you have an attack that takes 2^300 memory, I'm sorry, you'll need to go somewhere in space and use a little bit more room. So at some point, when the numbers are big enough, it doesn't make sense at all; we're just too small to relate this to anything tangible or concrete. It's just impossible. John Kelsey from NIST said this during the SHA-3 competition: the difference between 80 and 128 bits of security is like the difference between a mission to Mars and a mission to Alpha Centauri. And he claimed that between 192 and 256 bits, there's no real difference; it's impossible either way, because we'll never perform such an attack. And I'm not talking just in terms of key size, I'm talking in terms of bit security, because you might think, oh yeah, but quantum attacks, Grover, and so on; that's something different. And Adam Langley wrote a blog post where he used a different terminology: he talked about "infinitely strong". He claimed that everything above 128 bits is infinitely strong, which means unbreakable. Like those companies that tell you "we have unbreakable encryption": that's what it is, infinitely strong.
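These orders of magnitude are easy to sanity-check with logarithms; a small sketch, where the age of the universe and the per-block Bitcoin work (2^76, from the talk) are rough assumptions:

```python
# Sanity-checking the talk's orders of magnitude with log2.
from math import log2

SECONDS_PER_YEAR = 365.25 * 24 * 3600
ns_since_big_bang = 14e9 * SECONDS_PER_YEAR * 1e9  # ~14 billion years, in ns

print(int(log2(ns_since_big_bang)))  # prints 88

# Bitcoin: ~2^76 hashes per mined block, one block every 10 minutes,
# so roughly 76 + log2(blocks per year) bits of work per year.
blocks_per_year = 6 * 24 * 365.25
print(int(76 + log2(blocks_per_year)))  # prints 91
```

So even the entire Bitcoin network does on the order of 2^91 hashes per year, far from the 2^128 the talk calls out of reach.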
So the theorem I would like to publish today, without any proof of course, is that if you have an attack that takes 2^n time, or memory, or data, where n is 128 or more, then it might be a very nice piece of work, but it will never be executed by any human; maybe in some other place in the universe, but not on this planet. There is of course the caveat of quantum speedups: if you have an exponential speedup, I mean, if you believe that quantum computers will exist, and if a quantum computer exists one day, this claim will be compromised. So, I'm running out of time. How do we choose round numbers? Very quickly, the conclusion here is that it's not a very scientific process. During crypto competitions, we play strategically: we don't want our submission to be eliminated, so we try to think, how can we pick a round number that is high enough that the cipher will not be broken? At the same time, we want it to be relatively fast, faster than the others. And we want to avoid the submission being rejected because someone found some crazy boomerang or zero-sum distinguisher. So it's not really scientific, it's not really rational; it depends on, let's say, the submitter's perspective. So the question is: do we do too many or too few rounds? Let's try to think about what an attack is. I'd like to make the point, which is maybe pretty obvious, that most attacks published actually speak in favor of the security of a cipher, because they're negative results: they mean that very smart people spent a lot of time trying to break a primitive and only broke a small variant of it. And a variant might be obtained by weakening the internals; maybe you remember SHA-1, the version of Chabaud and Joux, where they linearize SHA-1. Or you might consider weaker models like related-key attacks, which don't make much sense in practice. Or you might think, oh, but I've seen one case somewhere in one stupid product where we could do a related-key attack.
Yes, but that's a total exception. Nobody cares about related-key attacks, I'm sorry. And distinguishers, that's also a different goal; not everybody likes them. So negative results are very important: publish not only attacks, and there's a conference for it, CFAIL. I heard there will be an edition in 2020; maybe it will be in New York again, I don't know. So how do we interpret negative results? Let's say you have an attack in 2^238 on seven rounds of ChaCha. The first way to interpret it is: ChaCha is broken, because, you see, I'm smart, 238 is smaller than 256, so forget about it, eliminate the cipher. That's what happens in crypto competitions. But that might not be the most reasonable view. Another way to interpret it is: there's an attack, there's something there, there might be improvements, and those improvements might eventually lead to something practical, something we have to worry about, if you're more pessimistic. If you're more optimistic, your view might be: oh man, these guys spent, like, months trying to break it, and the only attack they could find is this 2^238 thing, so I'm pretty sure ChaCha will be safe. I don't know what your view on this is, but you see, it's not extremely rational. There's no simple solution: it's about risk assessment. And risk assessment is generally something really hard, and not really similar to the job of a cryptographer. That's why cryptographers are not really good at it: it's a different job, a different mindset. Because risk means that more things can happen than will happen. We can always think, OK, but what if this happened? You know, like when an insurance salesman comes to your place and tells you, oh, but what if this happened? You need to buy insurance, and they try to scare you. So you see, it's about emotions, not only rational stuff.
And choosing rounds really is risk assessment. So what are examples of bad risk assessment, bad risk thinking? These are things I've heard, except for the last one, from serious cryptographers, academic researchers, very respected, very respectable. "What if a practical attack is found on AES?" Well, what if P equals NP, you know? "There's no security proof of AES; it makes me a bit nervous." OK, so don't use AES, use a one-time pad, have a good day. "I don't believe that ARX designs are secure." You know, we're not talking religion here, we're talking science. "We need this number of rounds in case N rounds are broken." We need 20 rounds in case 19 are broken. Yeah, but what if 20 are broken, why not 21? You see, this is something stupid. I mean, we cannot keep saying what if, what if. What if we live in a simulation? What if we're in the Matrix? What if we're in a Shoney's with Rick and that alien bug? You see, it doesn't make sense. And Schneier's law, which sounds like a tautology, is smarter than it sounds. What Bruce meant is that we should anticipate the fact that attacks will get better. So Schneier's law is: attacks always get better, they never get worse. Yeah, of course. Another point is that the attack cost will always go down, even if there's no new attack, because look at the price and effectiveness of GPUs today compared to 20 years ago. So an attack is much cheaper in terms of whatever reasonable metric you might want to consider. And what does "better" mean? Take the example of SHA-1. There was this breakthrough in 2004, which was something really new. But since then, there were, and I mean no offense to Marc and to Gaëtan and Thomas, refinements, improvements of the attack that was found back then. But we didn't go from 2^69 to 2^19; it's essentially the same attack technique.
And a lot of people have been working on differential cryptanalysis, trying to understand SHA-1 and SHA-2 and SHA-3. So the attacks got better, yes: refinements, new applications. But we don't have a preimage attack on SHA-1. We don't have a preimage attack on MD5, and we'll never have a preimage attack on MD5, I believe. Also, it's good to be paranoid, it's good to anticipate future attacks, but we have to keep in mind that crypto is never used alone. You have software, hardware, people, processes, and all this stuff around it is way more likely to break. That doesn't mean we should take more risk than necessary, but most systems can be compromised not in 2^80, not in 2^40, but maybe in 2^20, whatever. So, yeah, it's hard. And a really good example: if you look at this book by site reliability engineers from Google, they make the following point, because as SREs they deal with systems that are really attacked every day. They say: in reality, anyone with time, knowledge, or money can undermine the security of a system. Anyone can purchase software that enables them to take over a computer or mobile phone to which they have physical access; governments can buy or build software to compromise the systems of their targets. Now you might say, ah, but just use an HSM. But did you see the attacks on tokens recently? Yeah, but HSMs... OK, just use SGX. Yeah, SGX, again, it's not perfect. It's very fine technology, but you always have ways to compromise systems. And you see, they don't even talk about crypto. OK. So, let's try to fix the problem. I don't have the solution; I don't even have a solution. I maybe have directions to do something a bit better: have a more scientific and more rational approach to choosing the number of rounds in the crypto we create and the crypto we use. Because, look, we spend hours optimizing code to make it faster.
But if you just do fewer rounds, magic, it becomes faster as well. Try to have consistent security margins across all the components of your system, not just crypto. And maybe try to use words: once you start naming stuff, the stuff exists, and it's easier to have a discussion about it. So, an attack taxonomy, for example; here's an example of names. You might say that a cipher has been analyzed if someone found an attack that is not more efficient than the generic attack, even on paper, because it takes 2^200 memory or 2^200 data; but still, it says something about how the cipher works, it might give you some insight to better understand the algorithm. That's still something green. Amber: we can talk about attacked when the attack is practically not more effective than generic brute force, but on paper the numbers are smaller than brute force. Then what I would call wounded, for example, is when you have an attack where incremental improvements would potentially put you in the danger zone, where things start getting a bit scary: if you have, say, 2^90 or 2^100 complexity now, maybe with a million-fold speedup you go down to 2^80, and then it starts being scary. And in that case you could call it broken at 2^80. That's still a bit more than what Bitcoin is doing per block, but with enough money you can relatively reasonably do it if you have a really high-value target. Then again, you might argue: who can do it? Every government can do it? Maybe the NSA, but most of the time they have other solutions which are more effective. So, to conclude about round correction: I'm not saying we should recklessly reduce the security margin; just correct it, try to do something a bit better than what we used to do. Some people have tried to do it; maybe the most open-minded were the Keccak team, who are in this room.
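The traffic-light taxonomy sketched above can be written down as a tiny classifier. This is a sketch under stated assumptions: the category names follow the talk (analyzed, attacked, wounded, broken), but the exact numeric cutoffs (128 and 80 bits) are illustrative, not the paper's definitive thresholds:

```python
# Toy classifier for the talk's attack taxonomy.
# Thresholds (128, 80) are illustrative assumptions.
def classify(attack_cost_log2: float, generic_cost_log2: float) -> str:
    """Classify an attack by its log2 time cost vs. the generic attack."""
    if attack_cost_log2 >= generic_cost_log2:
        return "analyzed"  # green: no faster than generic, but insightful
    if attack_cost_log2 >= 128:
        return "attacked"  # amber: beats brute force on paper only
    if attack_cost_log2 >= 80:
        return "wounded"   # incremental improvements could get scary
    return "broken"        # red: feasible for a well-funded adversary

print(classify(238, 256))  # the 7-round ChaCha attack -> "attacked"
```

The point of naming these levels is that "an attack exists" stops being a binary question: a 2^238 result and a 2^61 result land in very different buckets.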
And I'm a bit guilty for the move from 18 to 24 rounds during the SHA-3 competition, when I found something that was not really an attack. But the Keccak team felt, and they were probably right, that they had to increase the number of rounds to avoid this non-attack. But there was no danger; well, I don't know, I'm not NIST, I don't want to speak on behalf of anyone else. But when Keccak was chosen, it might have been difficult at the time for NIST and other people to say "we should do fewer rounds", because everybody was suspecting NIST of backdoors, so I don't know. So they left it at 24 rounds, but it's still too much. I mean, probably Joan agrees, and that's why they reduced it to 12 rounds in KangarooTwelve. And for Salsa, I think eSTREAM approved 12 rounds; I don't know what other people think about it. And in 2006, I think, Dan Bernstein expressed the view that he was comfortable with 20 rounds, but much less so with 8 or 12 rounds. But that was in 2006, so I don't know what he believes today. So, my proposal. OK, I don't see anyone screaming or leaving the room, good. Just trying to have a bit more consistent margins. It's not perfect, but what about AES with 9, 10, 11 rounds instead of 10, 12, 14? For BLAKE2 I would put 7 or 8, and there's actually a good reason to do it. ChaCha would be fine with 8, seriously. And SHA-3 with 10 instead of 24 is probably very reasonable as well. And, by the way, as a byproduct you get a major speedup: 2.5x, 2.4x, 1.45x. It makes things faster, uses less energy, saves the planet, and so on, so it's a really good move. But if you look at the curve, it's still not perfect, because you might wonder why I picked these numbers of rounds and not the rounds that give a completely flat curve; it's not only about these attack numbers, it's about other results, like distinguishers, that tell you there might at some point be something on this number of rounds. So, I'm running out of time. I will publish the slides.
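For round-based primitives, cost scales roughly linearly with the round count, so the speedups above are just ratios. A back-of-the-envelope sketch (the proposed round counts are the talk's; treating speedup as exactly proportional to rounds is a simplifying assumption, which is why BLAKE2's measured 1.45x differs slightly from the naive 1.5x):

```python
# Naive speedup estimate: current rounds / proposed rounds.
proposals = {
    "AES-128": (10, 9),
    "BLAKE2":  (12, 8),
    "ChaCha":  (20, 8),
    "SHA-3":   (24, 10),
}
for name, (current, proposed) in proposals.items():
    print(f"{name}: ~{current / proposed:.2f}x faster")
```

ChaCha (20/8 = 2.5) and SHA-3 (24/10 = 2.4) match the 2.5x and 2.4x figures quoted in the talk.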
I tried to anticipate some objections people might have, the what-if objections. But maybe the more interesting ones are here. On SHA-1, one might argue: look, it took, I don't know, 15 years to go from one attack to the attacks of Marc, Gaëtan, and Thomas. So it took all this time to make this small improvement, and if you assume that a much stronger attack exists, it will take exponentially more time to find it. But the attack might exist. My point is quite the opposite; I might be wrong, but my point is not that the attack is hard to find, it's that the attack does not exist. Another objection is that my round correction is stupid. It might be, and I'm happy to hear what you have to say about it. So, the conclusions of this talk. Would it be great to have new revised standards with fewer rounds? Sure, but I'm not holding my breath; that's maybe for the medium term. Crypto competitions: if NIST is in the room, I strongly recommend that you pay attention to this problem, and when you select a cipher, don't eliminate, don't punish the submitters because they were conservative. And implementations: if you write a library, you might want to support versions with fewer rounds. But then again, how many rounds do you do? That's a good question. So I recommend that you read the full paper; I only had 25 minutes. I would like to thank Samuel, who was very helpful in writing this paper and gave a lot of ideas, and all the other reviewers. And yes, as I said, I will put the slides online; the paper is already online. Thank you very much. It's time for a couple of questions. Oh. Bravo. I deal with constrained system environments, and I watch the vendors walk away from any discussion of this, saying "I can't afford this in cost and timing, I can't afford this because of the heat, because of my battery; go away and leave me alone." So thank you for getting some reality into this. Thank you, sir.
Very good update on the state of the art of cryptanalysis. The unknown unknowns that may come in the future are up to the designer, because the real risk management is by the people who design ciphers: they don't want to risk their name and their prestige, so they choose conservatively. My real comment is about the bullet on implementations supporting faster versions: it should not be an open parameter, because that will lead us to two rounds, and it will even lead to attacks on related round counts. There should be a fixed fast version and a fixed slow version, and people should be forced to choose between them. Otherwise this will lead us to the known 128-bit RSA implementations. Thank you, Mati. By the way, legal disclaimer: I'm not responsible for any damage related to the points in this talk. Thank you. Yeah, Jean-Philippe, thanks for a really interesting talk. So yesterday, Roberto was asking, is NIST in the room? We didn't want to participate in this lame joke, but actually there are quite a lot of people here from NIST, and this is one of the things that's now on my plate. NIST is going to do, as was announced some time ago, reviews of all of the NIST standards. The first to be reviewed is AES. There's a lot to be said about how to choose the number of rounds. It's something where I agree with you that it is not so much technical, but I wouldn't say it's non-technical, actually. It's standardization, it's community consensus, it's finding out what people feel comfortable with and don't feel comfortable with. Of course, you need to make a good trade-off between security and efficiency. But a lot of the things that go on behind it have not really been explained anywhere in the documents; they're actually the result of a lot of conversations where people explained their views and seemed to be more or less on the same line.
So I hope that when the first AES review comes out, people will have an understanding of what the number of rounds means, what the security margin means, and how those parameters were chosen. And then, of course, every NIST document has a public-comment period; people can comment and say whether they agree or disagree with the calculations in there. You say you don't exclude the possibility of NIST standardizing a variant of AES with fewer rounds? So, that's an interesting question. I've actually had people tell me yesterday, don't go to your talk, because NIST should not be listening to any of these ideas of reducing the number of rounds. I'd like my position to be to focus more on the technical stuff and to give advice that makes sense from a technical point of view. In the end, I'm not the one person who decides. Can you please cut this short, because we're running out of time? So, if there is a question... No problem. Thank you. Thank you very much. The second talk is entitled "The First Chosen-Prefix Collision on SHA-1". It is joint work between Gaëtan Leurent and Thomas Peyrin, and Gaëtan will give the talk. Thank you for the introduction. So I'm going to talk about SHA-1. You probably know SHA-1 is a hash function. It was designed in 1995, and it was broken about 15 years ago. This was actually a breakthrough result at the time; so attacks really do get better: we went from no attack to a significant attack. And this attack remained theoretical for a long time, because the complexity is relatively high, around 2^69, and it's also a very technical attack. So it took a lot of effort, a lot of theory and practice, before it was actually implemented. This was done about three years ago by Marc Stevens and co-authors, and this whole line of work earned Wang and Stevens the Levchin Prize yesterday. So SHA-1 is a dead hash function, a broken hash function. Why am I talking about it?
The first reason is science, and because it's fun to do cryptanalysis. But the second reason is that SHA-1 is actually still used in the real world, in many different places. If you look at X.509 certificates, for instance, it's still possible to buy a SHA-1 certificate, for legacy reasons, and it will actually be accepted by many modern clients if you go outside of web browsers. In fact, if you look at the ICSI certificate notary, they see around 1% SHA-1 certificates, so apparently they still exist. If you look at PGP, you can also use SHA-1 signatures; in fact, if you take the legacy branch of GnuPG, the default hash function to sign a key is SHA-1. And if you look at public signatures in the web of trust, there's about 1% SHA-1 usage there as well. Another important place is TLS and SSH: after the handshake phase, you usually sign a transcript of the handshake, and there SHA-1 is very widely supported, and actually used by a few percent of the traffic. So it's really used in a lot of different places, and probably in a lot of more obscure and hidden protocols; it's used in credit cards, for instance, where there are some kinds of SHA-1 signatures. So why is a broken function still used? One of the reasons is that a collision attack is relatively hard to exploit to really break something in practice. Because what is a collision attack? Basically, you run a very big computation, and at the end you get two messages that collide but that are basically garbage. So it's hard to do something useful with it. You can play some tricks: you can put a prefix before your garbage, you can put a suffix after. This can be helpful if you have some freedom in your document format, if you can do an if-switch, for instance; this is what was used to create colliding PDFs that show different documents. But you really need this freedom.
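The prefix-and-suffix trick works because of how Merkle–Damgård hashes chain their state: once two messages collide internally, any common suffix preserves the collision. A toy demonstration with a deliberately weak 16-bit hash (the construction and its parameters are illustrative; real SHA-1 obviously cannot be collided this way, and padding/length encoding is omitted for brevity):

```python
# Toy Merkle-Damgard hash: truncated SHA-256 as the compression function,
# 16-bit chaining state so collisions are easy to find by birthday search.
import hashlib

def toy_md(message: bytes, block: int = 2) -> bytes:
    state = b"\x00" * block
    for i in range(0, len(message), block):
        state = hashlib.sha256(state + message[i:i + block]).digest()[:block]
    return state

# Birthday-search two distinct 4-byte messages with the same digest.
seen = {}
m1 = m2 = None
for i in range(1 << 17):  # pigeonhole: 2^17 messages, 2^16 digests
    m = i.to_bytes(4, "big")
    h = toy_md(m)
    if h in seen:
        m1, m2 = seen[h], m
        break
    seen[h] = m

assert m1 != m2 and toy_md(m1) == toy_md(m2)

# The colliding-PDF trick: after the colliding blocks, any shared
# suffix keeps the hashes equal.
suffix = b" ... rest of the document, identical on both sides"
assert toy_md(m1 + suffix) == toy_md(m2 + suffix)
print("collision survives a common suffix")
```

This is exactly why an identical-prefix collision plus an if-switch in the file format yields two documents that display differently yet hash the same.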
If you have a more constrained format, this identical-prefix collision attack is really not sufficient. So there's a stronger attack we can consider, first proposed in 2007 by Stevens, Lenstra, and de Weger. They called it a chosen-prefix collision, and it was demonstrated on MD5 at the time. The idea is to start from two arbitrary documents and then make them collide by appending some garbage behind each of them. This is much more powerful, because now you can put useful differences in your prefixes, so it's much easier to work with, and you can actually break certificates with this, and you can break TLS and SSH; this has been shown in a series of works. But of course it's much harder to do this type of attack, because now you have to start from an arbitrary difference in the state at this point here, instead of starting from a zero difference back here. So this is the type of attack I'm going to talk about. At Eurocrypt earlier this year, with Thomas, we proposed a theoretical attack of this type on SHA-1. And today I want to announce that we have made this attack practical: we actually ran the computation. We have three contributions since the Eurocrypt talk. The first is that we have improved the complexity of attacks on SHA-1 by a factor of 8 to 10. This applies both to the identical-prefix collision, the Shattered one, and to our new result. In terms of cost estimates, an identical-prefix collision should now cost around $11,000, and a chosen-prefix collision around $45,000. This is a significant amount of money, but it's clearly within range of a state adversary, and it's even within range of academic researchers like us. The second thing is that we actually ran this computation, and everything worked as expected, which also validates the whole attack strategy.
And finally, we applied this to attack the PGP web of trust, and we did an impersonation attack by creating two keys with different IDs but a colliding certificate. I will not talk about the first part, because I don't have time, but I'll try to explain how we did the computation and what we did with it. So how does this attack work? The basic idea of what we presented at Eurocrypt is that we start by finding a set of nice chaining-value differences. We build a big set, and those differences must be nice in the sense that we know how to erase them and eventually reach a collision. Then we have two phases in the attack: a first phase where we start from an arbitrary difference and want to reach the set of nice differences, and a second phase where we go from such a difference to a collision at the end. So this is the basic framework. Now, how do you run such a big computation? There are basically two options. The first is to go to Amazon or Google and rent some GPUs; we estimated this would cost around $160,000 to run our attack. That's still a bit too expensive for us. The second option is to look for cheaper GPUs, and this is actually possible, because a few years ago there was a big cryptocurrency bubble, and some people bought a huge number of GPUs in order to mine cryptocurrency. Then the price went down, and now they have this huge pile of GPUs, and it's not so profitable anymore to mine cryptocurrency, so they are quite happy to rent their GPUs out. You can get prices three or four times cheaper than on Amazon or Google. The drawback, of course, is that you get something not quite as reliable: you don't get those very fancy GPUs with very fast communication between them. But for our purposes, we didn't need any of that, so it was quite a good solution.
In terms of pricing, the price actually moves significantly depending on cryptocurrency prices. This is a graph showing the price of Bitcoin and Ethereum; you can see a big bubble in 2017, then a big crash. We actually negotiated our price around this time here, and you can see it was not a very good time, because prices were going up significantly; we would probably get much better prices today than what we got. So the first part of the computation is the birthday phase. We have to start from some arbitrary difference in the state and reach one of the nice differences. We have a set of 2^38 nice differences, and this means, by the birthday paradox, that if we take 2^61 random messages and look at all the pairs, one of them will be in the set. Of course, in practice, we don't want to store 2^61 messages, and we don't want to look at all the pairs. Instead, we use techniques that go back to van Oorschot and Wiener for parallel collision search; this is quite efficient to parallelize and relatively easy to do on GPUs. This part worked quite well, and it took us about one month of computation to finish this first phase. It was actually about twice as much as what we expected, so we were quite unlucky in this step of the attack. Then we moved to the second phase, the near-collision phase. Here, the goal is, starting from this nice difference, to erase the difference until we reach a collision. This part is extremely technical, because each of the near-collision blocks is basically the same type of work as a full collision attack, like the Shattered attack. And that attack was of course very technical; that's why it took more than 10 years from the theoretical attack to the practical one. And for each block, we have to build a new differential trail with specific conditions at the beginning and at the end.
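The parallel collision search used in the birthday phase can be sketched in miniature. This is a toy under stated assumptions: a 24-bit stand-in function instead of SHA-1 steps, and tiny parameters instead of the 2^61 work of the real attack; the distinguished-points idea (store only chain endpoints whose low bits are zero, detect when two chains merge) is the van Oorschot–Wiener technique the speaker names:

```python
# Miniature van Oorschot-Wiener parallel collision search.
import hashlib

D_MASK = 0xFF  # a point is "distinguished" if its low 8 bits are zero

def f(x: int) -> int:
    """Toy 24-bit random-looking step function (stand-in for SHA-1)."""
    return int.from_bytes(
        hashlib.sha256(x.to_bytes(3, "big")).digest()[:3], "big")

def trail(start: int, limit: int = 1 << 16):
    """Iterate f from start until a distinguished point (or give up)."""
    x = start
    for _ in range(limit):
        x = f(x)
        if x & D_MASK == 0:
            return x
    return None  # rare: chain cycled without hitting a distinguished point

endpoints = {}  # distinguished endpoint -> trail start
merged = None
for start in range(4096):  # "parallel" trails, run sequentially here
    end = trail(start)
    if end is None:
        continue
    if end in endpoints:
        # Two trails reached the same endpoint, so they merged somewhere:
        # a collision of f lies on their paths and can be located by
        # re-walking both trails.
        merged = (endpoints[end], start)
        break
    endpoints[end] = start

print("merged trails:", merged)
```

Storing only distinguished points is what makes the memory tiny compared to storing all 2^61 messages, and independent trails are what make the search embarrassingly parallel across GPUs.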
And then we have to come up with some GPU code to find messages following the trail, and then we have to run it. To simplify this a little bit, what we did is use basically the same differential trail for all the blocks. Well, the middle part of the trail is the same for all blocks, so we only have to construct the nonlinear part at the beginning and then make some small adjustments to the GPU code. And we started from the attack that was implemented in the SHAttered paper, and we started from that GPU code. In terms of implementation, this step also succeeded after roughly one month. We actually got quite lucky, because our estimate was that we would need two to the 62.8 computations, and it only took two to the 62 in our case. So in the end, over both phases, it kind of averaged out. And finally, we got our chosen prefix collision last September. So the collision looks like this. You have a prefix at the beginning in green; this is something we fixed before running the computation. The yellow part holds the birthday bits, and then you have nine blocks of near collision. So what can you do with this kind of chosen prefix collision? As I said, we want to attack certificates. So what is a certificate? Well, it's basically a document where Alice is putting a public key, and the document says the public key of Alice is such and such. Then you go to a certificate authority and you ask for a signature on this document, and then you can use it to prove your identity. If you want to attack this type of system, you can do an impersonation attack. The way you do this is that the attacker, Bob here, will create two different certificates, one with his real name and one with the name of Alice. And if those two certificates collide under SHA-1, when you go to a CA and ask for a signature of the certificate with your own name, you should get this signature because, well, it's really your name in it. 
So you can get this signature, and then you can just copy the signature to the other certificate, because there is a collision. That's how you do an impersonation attack. In the case of X.509 certificates, the identifier goes at the beginning, so you can use it as a prefix: you have two different prefixes, one saying Bob, one saying Alice. Then you compute your chosen prefix collision, you get a bunch of garbage, you put this garbage inside the public key, and then you have your colliding certificates. That's how it was done in 2007 and in 2009, and this was also actually exploited in the wild in the Flame malware. So this is really a practical type of attack. In the case of PGP, it's slightly different, because the structure of the certificate is different: the fields are not the same, and the public key goes before the identifier. This means we cannot use this nice trick where we put the identifier in the prefix, so we need a slightly different kind of idea. What we did was to use two keys of different lengths, because of course all fields have their length at the beginning, as a prefix, so that you can parse everything nicely. And if we have public keys of different lengths, then the remaining fields will be misaligned, and we can use this to stuff things into the right positions. The second trick we used is to create a key with a picture in it, because the PGP format allows you to have a picture in your key, and this picture can be signed together with the key. The picture is a JPEG picture, and the good thing about JPEG is that you can put random garbage at the end and it will be mostly ignored. So this is what we did, and the structure of the certificates looks like this. We start with the length of the public key, which is different on both sides, and this will be the prefix that we use to run the chosen prefix attack. 
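The signature-copying step works because a hash-then-sign CA signs only the hash of the certificate. Here is a hedged toy illustration in Python: `toy_hash` (a 24-bit truncation of SHA-256) stands in for the broken hash, the brute-force `chosen_prefix_collision` stands in for the real chosen prefix attack, and HMAC stands in for the CA's signature. All names and certificate formats here are invented for the example.

```python
import hashlib, hmac

def toy_hash(data: bytes) -> bytes:
    # Stand-in for a broken hash: just 3 bytes of SHA-256, so weak
    # that collisions can be found by brute force in seconds.
    return hashlib.sha256(data).digest()[:3]

def chosen_prefix_collision(prefix1: bytes, prefix2: bytes):
    # Find suffixes ("garbage blocks") that make two certificates
    # collide under toy_hash, each keeping its own chosen prefix.
    table = {}
    for i in range(1 << 15):
        m = prefix1 + str(i).encode()
        table[toy_hash(m)] = m
    i = 0
    while True:
        cand = prefix2 + str(i).encode()
        h = toy_hash(cand)
        if h in table:
            return table[h], cand
        i += 1

def ca_sign(ca_key: bytes, cert: bytes) -> bytes:
    # Hash-then-sign: the signature depends only on toy_hash(cert).
    return hmac.new(ca_key, toy_hash(cert), hashlib.sha256).digest()

def ca_verify(ca_key: bytes, cert: bytes, sig: bytes) -> bool:
    return hmac.compare_digest(ca_sign(ca_key, cert), sig)

# Bob builds two colliding certificates, asks the CA to sign his own,
# and copies the signature onto Alice's.
cert_bob, cert_alice = chosen_prefix_collision(b"name=Bob;key=", b"name=Alice;key=")
sig = ca_sign(b"ca-secret-key", cert_bob)
assert ca_verify(b"ca-secret-key", cert_alice, sig)  # impersonation succeeds
```

The defense mentioned later in the Q&A, randomizing the serial number, works precisely because it denies the attacker control of the prefix before the collision is computed.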
Then we run the chosen prefix attack, we get a bunch of garbage here, we put it inside the public key on both sides, and then we have a colliding state. And then, whatever we do after this collision, we need to have the same thing on both sides in order to preserve the collision. So what we do is we put the picture on the side of Bob, and then we just copy this inside the modulus of the other key, and then we put the identifier here, after the key, on the side of Alice, and this just goes as garbage after the picture. And so everything is correctly aligned, the fields are correct on both sides, and you really get two keys with colliding certificates. A nice thing with this attack is also that the first thing you do is compute this chosen prefix collision, before you even select your target. So you can actually reuse your collision to attack many different targets, and you can amortize the $45,000 cost to break as many keys as you want. So we reported this to GnuPG in May, and they actually fixed it; the modern branch of GnuPG doesn't accept SHA-1 signatures anymore. To conclude, the main message here is that SHA-1 signatures can be abused in practice. This is really a practical exploit on SHA-1, so SHA-1 must be deprecated. It should be removed basically everywhere. The situation we are in now is similar to the MD5 situation in 2007, and it was clear at that time that MD5 had to be removed. In particular, something important is that if you support SHA-1, even if you don't use it, because there's a negotiation in many protocols, an attacker could downgrade you, force you to use SHA-1, and then you would be vulnerable to an attack. So we should really deprecate SHA-1. I mean, as a cryptographer, when I connect to MSN.com and I see that there's a SHA-1 signature being used, it really makes me sad. I think we can do better as a community. SHA-1 has been broken for 15 years. We have much better alternatives. 
So we need to get rid of SHA-1. I'm sure there are people in this room who are involved in security projects that still use SHA-1. So please help us, please deprecate SHA-1. Thank you. Thank you. Kenny. Thank you for the great talk and the amazing result. I just wanted to ask a technical clarification. So in your construction with the RSA, you don't completely control the public key. So do you actually know the private key corresponding to these RSA public keys? Yeah, we know the private key. We can control the low bits of the modulus here, and this is enough to build a key that we know the factorization of. Okay. So another quick clarification if there's time. You... Oh, Leslie, I'll take it offline and ask you offline. Okay, sure. I apologize for a very naive question, but in view of the previous talk, do you think SHA-1 would have been saved by having just a few more rounds? So SHA-1 is clearly a case of not enough crypto. And yeah, if you increase the number of rounds, maybe 20 more rounds, it would be much harder to attack. In general, I think we should have more rounds rather than fewer rounds. Yeah, thank you. So on your last slide, you mentioned that you believe that a rogue CA with SHA-1 may be possible. But I believe that after the MD5 one, there was a new requirement to have random bytes at the beginning of the serial number. So do you think that some CAs don't respect that, and that you can find one that would let you do the attack? Exactly. If the serial number is properly randomized, then it will not be possible to attack it. So I don't know. I haven't looked at the CAs, but who knows. It wouldn't be completely surprising to find one that doesn't do it properly, right? I believe this collision attack, or even stronger collision attacks, do not apply to HMAC-SHA-1. Why do you recommend that we deprecate that? Absolutely. So far, there's no known attack on HMAC-SHA-1. But SHA-1 is still a broken hash function in general. 
It doesn't provide the security we expect from a good hash function. It's been broken for 15 years. We have much better alternatives. So I think it just doesn't make sense to keep using SHA-1. I mean, it's the same with MD5. HMAC-MD5 is also not broken, but I certainly wouldn't recommend using HMAC-MD5. Okay, let's thank Gaëtan again for a great talk. Thank you. The last talk of this session is entitled Adept Secret Sharing. Joint work between Mihir Bellare, Wei Dai and Phil Rogaway, and Phil will give the talk. Good morning. So I think, oh, it's resonant. I think everybody knows the classical secret sharing problem in this audience. A dealer has a message M that it wants to share out. It splits it into a collection of shares. At some later point in time, some sub-collection of these shares is gathered up, shares that are absent are marked as absent, and you can reconstruct the original secret that had been shared out. So this is a classical problem of cryptography, dating back now more than 40 years, going back to Adi Shamir and the late Bob Blakley. Formalizing this problem is easy and well known. There's only one security property that we're after. It's this seemingly very strong form of information-theoretic privacy: that when we share out a message and project down to any non-qualified subset of shareholders, the distribution on shares that they see should be information-theoretically independent of the message that was shared out. Very simple, very classical, and you would assume therefore that there's very little that could possibly be missing, wrong or unsaid by now about classical secret sharing. And I'm going to make the claim in today's talk that that's really not true at all. That even for a primitive as well known and fundamental as this one, there are some really basic things that are not right when you want to employ this primitive for a bunch of people to do exactly what the primitive says to do: share out a secret and later reconstruct it. 
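The classical scheme just described can be sketched in a few lines of Python. This is a hedged illustration of Shamir's scheme over a prime field; the field size and parameters are arbitrary choices for the example, not anything from the talk.

```python
import secrets

P = 2**127 - 1  # prime modulus; the secret is an integer mod P

def share(secret: int, t: int, n: int):
    # Random polynomial of degree t-1 whose constant term is the secret.
    coeffs = [secret] + [secrets.randbelow(P) for _ in range(t - 1)]
    def f(x):
        return sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def recover(shares):
    # Lagrange interpolation at x = 0 recovers the constant term.
    secret = 0
    for x_i, y_i in shares:
        num, den = 1, 1
        for x_j, _ in shares:
            if x_j != x_i:
                num = num * (-x_j) % P
                den = den * (x_i - x_j) % P
        secret = (secret + y_i * num * pow(den, P - 2, P)) % P
    return secret
```

Any t of the n points determines the polynomial and hence the secret, while any t-1 points are consistent with every possible secret, which is exactly the information-theoretic privacy property just described.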
And I think this claim is in some sense all the stranger and stronger because there are by now numerous variants of classical secret sharing. And the claim I'm really making is that none of them do the job that I think you would routinely want done when a human being is sharing out a secret to a bunch of shareholders. I didn't intend to work on secret sharing. I had done one paper on this with Mihir Bellare back in 2007. It was a lot of work and didn't seem to be much appreciated. But I kind of got sucked into it from an unexpected meeting that Alex Halderman had arranged at the University of Michigan. He had a visiting journalist at the time; Laurent Richard was his name. And Richard explained, well, the quite extraordinary number of journalists that were being killed as a result of carrying out their work. And he wanted to set up some kind of NGO in which journalists who were doing dangerous investigative journalism would turn over their key material or their primary materials to this organization. And if the journalist should later be killed or imprisoned, then the organization's journalists could step in and carry on the work that the imprisoned or dead journalist had been carrying out. And the hope was that this would disincentivize the killing or imprisonment of journalists by exploiting what's called the Streisand effect. This was the well-known case in which singer Barbra Streisand complained vociferously about some pictures of her Malibu estate being put online, and in doing so actually attracted much more attention to them. So the idea is, if you kill a journalist, then maybe it will get worse for you. As one small part of this, Richard had imagined that maybe the journalist needs to share out their secret or primary materials. And I think Alex and I were quick to say, oh, we have a solution for this. It's a classical problem of cryptography, secret sharing. 
But as time went on, I recognized that there were actually lots of mismatches between the way that we as a community had formalized this classical secret sharing problem and what would actually be needed. And I'll enumerate some of those. Here are a few examples. Suppose the dealer wants to be giving out shares, but this happens over an extended period of time. Maybe I give one share to a colleague today at this conference, and then I fly out to New York and I want to give another share of the secret to another colleague later on, and perhaps more. Well, how do I accomplish this? Secret sharing, as it is invariably formalized, is a probabilistic process. I need to take my secret, flip some coins, and now I get this vector of shares. But I don't want to retain this vector of shares over the next several days as I travel around and give them out. That would be an extraordinary vulnerability. If the thing that's being shared out is, let's say, a passphrase that's in my head, then I really want that at any moment the passphrase, or information related to it, is either in my head or a share is with some shareholder, and that's it. We can't maintain the randomness on my device over an extended period of time, and we certainly can't maintain the shares or the original secret either. In a similar spirit, if I've given out shares of my secret to a bunch of shareholders and one of them should lose their share, it might be nice to be able to regenerate that share without attempting to reconvene the set of shareholders, who might well be inaccessible. In short, there seems to be a need for at least the option of regenerating shares, which implies a kind of deterministic secret sharing, which is the exact opposite of the traditional formulation. Here's a different scenario. You are reconstructing shares, and perhaps by accident or perhaps by adversarial means, one of the shares has gotten corrupted. Well, with classical secret sharing, when you apply recover, you get a secret, always. 
And in fact, with something like Shamir secret sharing, if you get to see the other shares, then you can arrange your share, if you're adversarial, to reconstruct any secret that you desire. This is the exact opposite of the kind of authenticity guarantee that one might hope for in a secret sharing scheme. Finally, here's a kind of difficulty in which one of the shares again gets accidentally or maliciously corrupted, but the other shares have more than enough information to reconstruct the original secret. This is the problem of robust computational secret sharing, first put forward by Krawczyk. But robust computational secret sharing comes with lots of caveats. For one thing, it's only achievable in situations like an honest majority. If you share out four shares such that you want at least two of them to recover the secret, then robust computational secret sharing will say nothing about what recover does when you corrupt a single share, as odd as this may seem. Similarly, if you have, let's say, a two-out-of-three secret sharing, even with a robust computational secret sharing scheme, if we have corrupted two of the three shares, again, no guarantee is issued. In short, the formulation of this primitive doesn't seem to actually capture the style of robustness that we want: that when it's possible to recover something, we do our very best to recover it. So in light of scenarios like this, my colleagues and I suggest a new syntax, a new API, for secret sharing. Instead of just having this probabilistic share algorithm and a recover algorithm that recovers a message, we add quite a few additional parameters. First of all, the sharing algorithm now takes in the coins, rather than generating them internally within the algorithm. 
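The share-forging observation above is easy to demonstrate concretely. The following hedged Python sketch (the field size, threshold, and example values are all illustrative) uses plain Shamir sharing over a prime field: given the other two shares of a 3-out-of-3 sharing, an adversary solves one linear equation to craft a share that makes recovery output any secret they like.

```python
import secrets

P = 2**127 - 1  # prime field for a plain (unauthenticated) Shamir scheme

def lagrange0(x_i: int, xs) -> int:
    # Lagrange basis polynomial for point x_i, evaluated at x = 0.
    num, den = 1, 1
    for x_j in xs:
        if x_j != x_i:
            num = num * (-x_j) % P
            den = den * (x_i - x_j) % P
    return num * pow(den, P - 2, P) % P

def recover(shares):
    xs = [x for x, _ in shares]
    return sum(y * lagrange0(x, xs) for x, y in shares) % P

def share(secret: int, t: int, n: int):
    coeffs = [secret] + [secrets.randbelow(P) for _ in range(t - 1)]
    return [(x, sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def forge_share(seen_shares, x_evil: int, target: int):
    # Choose y_evil so that interpolating the seen shares together
    # with (x_evil, y_evil) at x = 0 yields exactly `target`.
    xs = [x for x, _ in seen_shares] + [x_evil]
    acc = sum(y * lagrange0(x, xs) for x, y in seen_shares) % P
    y_evil = (target - acc) * pow(lagrange0(x_evil, xs), P - 2, P) % P
    return (x_evil, y_evil)

# Dealer shares the secret 42 among three parties, all three needed.
honest = share(42, 3, 3)
# Party 3 sees the other two shares and substitutes a forged share...
evil = forge_share(honest[:2], 3, 1337)
# ...so recovery now yields the adversary's chosen secret instead.
assert recover(honest[:2] + [evil]) == 1337
```

Nothing in the plain scheme lets the honest parties notice the substitution, which is exactly the missing authenticity guarantee being argued for.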
If you wish, you can still provide perfectly random coins to the sharing function, but we'll make sure that we achieve strong security guarantees even if those coins are fixed, or if there's something like a random value followed by a counter, or even just a constant, when the message itself has adequate entropy. We also add something like the associated data of authenticated encryption, so that when you recover a secret, you are also sure that this side piece of information is as it was at the time of share distribution. Finally, we want that when you share out a message, you're able to specify the access structure. This seems completely natural and standard, except that the standard formulations of secret sharing are specific to an access structure. They don't, in fact, envision a world in which the access structure is an input to share. That's actually in contrast to tools like Sunder, which I had up a few slides ago, in which the user at the time of sharing specifies what access structure is wanted. Specifying the access structure also has security implications. You want it to effectively be authenticated, so that when you recover a message, you're sure that the message is being recovered with respect to the exact access structure that had previously been used to share it. The recover algorithm no longer operates on a vector of shares because, again, this doesn't match the human scenario. If a bunch of us get together and try to reconstruct our secret, there's no natural sense in which I am player number two and you are player number three. We're a set of people, and the underlying syntax needs to reflect that difference. Given this set of shares, we want you to recover not only the underlying message, but also to identify which shares are bad and which shares are good. This will be important in making the other security properties meaningful. 
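To make the proposed syntax concrete, here is a hedged Python sketch of what such an API could look like. This is not the paper's construction: it is a much-simplified n-out-of-n toy, XOR-based, with SHA-256 standing in for a random oracle and with no bad-share identification. Its only purpose is to show the new inputs: explicit coins, associated data, and an access structure (here just n) that is bound into the shares.

```python
import hashlib

def H(tag: bytes, *parts: bytes) -> bytes:
    # Domain-separated hash over length-prefixed inputs.
    h = hashlib.sha256(tag)
    for p in parts:
        h.update(len(p).to_bytes(4, "big") + p)
    return h.digest()

def share(coins: bytes, ad: bytes, n: int, msg: bytes):
    # Deterministic in (coins, ad, n, msg): a lost share can simply be
    # regenerated by rerunning share with the same inputs.
    assert len(msg) == 32
    frags = [H(b"frag", coins, ad, msg, bytes([j])) for j in range(n - 1)]
    acc = msg
    for f in frags:
        acc = bytes(a ^ b for a, b in zip(acc, f))
    frags.append(acc)  # XOR of all fragments equals msg
    tag = H(b"tag", ad, bytes([n]), msg)  # binds ad and the n-of-n structure
    return [(j, n, ad, frags[j], tag) for j in range(n)]

def recover(shares):
    # Operates on a *set* of shares; returns the message or None.
    shares = sorted(set(shares))
    if not shares:
        return None
    _, n, ad, _, tag = shares[0]
    if len(shares) != n:
        return None
    msg = bytes(32)
    for j, n_j, ad_j, frag, tag_j in shares:
        if (n_j, ad_j, tag_j) != (n, ad, tag):
            return None  # incompatible shares mixed together
        msg = bytes(a ^ b for a, b in zip(msg, frag))
    if H(b"tag", ad, bytes([n]), msg) != tag:
        return None  # corrupted share: refuse to output a bogus secret
    return msg
```

Privacy in this toy would rest on the unpredictability of coins and message together, echoing the hedged privacy notion described next; a real scheme would of course also need threshold access structures and identification of which shares are bad.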
Finally, it's important that when you apply recover, there's the possibility of the algorithm saying no, there is not a valid secret underlying this set of shares, because that's a much better thing to have happen than for the recovery algorithm to return some kind of bogus message. On top of this new syntax, we describe three different security notions. One is called privacy, which roughly says that unauthorized sets of shares have no computationally extractable information about the underlying secret. An authenticity, or binding, condition says that once you're in possession of a share, the underlying message associated to that share is fixed. Someone can't give you a share and later on make it one thing or another thing depending on their whim. Finally, we want an error correction condition that says that you will do your very best to recover the underlying secret that is present in a collection of shares, even if some of those shares are messed up. If there's not an underlying secret that you can point to, you need to identify that and say so. There's a huge universe of possible properties that you could try to define for a secret sharing scheme, and so I should mumble a few words about why we came up with these particular ones. It was a very long process. The definitional enterprise often involves lots of competing considerations and inputs, and this particular exercise was no exception. So we started with these real-world problems and scenarios, thinking about what a bunch of human beings, particularly journalists or whistleblowers, would want in a tool that achieved secret sharing. Maybe a tool that was a user-facing tool, some kind of program you interact with, or maybe a general-purpose library that is going to be used in a wide variety of different contexts. We also reverse engineered an existing artifact. I'd mentioned Sunder. 
This is a now-discontinued product from the Freedom of the Press Foundation, and we looked at some of the capabilities that its designers had put in. They had come to many of the same conclusions that we had: things like the need for this associated data, for specifying access structures as part of the sharing process, and for treating the recovery process as operating on sets. They also wanted some kind of authenticity or binding property. I wanted a definition that would correspond to strong intuition. I guess you'll have to decide whether or not these notions correspond to intuition you yourself possess. And quite strongly, we reasoned by analogy, because secret sharing is really very much like symmetric encryption. In the symmetric encryption setting, you have a message M that you want to encrypt, and classically you would flip a bunch of coins, take your ciphertext, transmit it across the channel, and recover the underlying message from that ciphertext. Well, in classical secret sharing, you take a probabilistic sharing algorithm to produce what corresponds to a ciphertext and is actually just a vector of shares. You transmit or retain that vector of information, and then you recover, perhaps from some sub-collection of that information, hopefully the underlying message. And because of the similarity between symmetric encryption and secret sharing, all of the things that we've learned about making symmetric encryption more useful really apply to secret sharing as well, but for some reason never got applied in that domain. So what has happened in the transformation from classical symmetric encryption to authenticated encryption? Well, first of all, the coins have been removed. We've taken the probabilism and replaced it by a nonce. We'll do the exact same thing in the secret sharing context. We surface the random value, and we do that so that we can say stronger things about what happens when something that's not truly random gets dropped in there. 
The utility of associated data got quickly recognized for authenticated encryption, and I claim that it's useful in this domain as well. In the Sunder implementation, for example, the designers found that sometimes a user would accidentally intermix a share from one secret sharing with a share from another. By ensuring that each sharing was tagged with some identifier saying which sharing it was, playing the exact role of this tag or associated data, we could make sure that we were not trying to recombine incompatible shares. The access structure that's provided to share, I guess you could say even that is in some way analogous to what you see in real-world encryption schemes, where some kind of parameter specifier is actually specified as part of the configuration of the symmetric encryption scheme. All right, I'm certainly not going to talk through all of this, but let me try to state what privacy is about. We want that unauthorized sets of shares reveal almost nothing about the shared secret. Whereas classical secret sharing would achieve and formalize this assuming that the underlying coins are random, we replace this by the assumption that the underlying coins and the message, taken together, have adequate entropy. This allows you to omit the coins completely if you're sharing a highly unpredictable document. Alternatively, it allows you to hedge: if your random number generator should for some reason fail, you still fall back to a quite strong security notion. It also allows stateful kinds of sharing operations. That's privacy, a substantial strengthening of the classical privacy notion, except that we are in the complexity-theoretic setting now, not the information-theoretic one. Authenticity, alternatively called binding for us, is the property that a share can be used to recover at most one secret. The moment you get a share, in fact even one produced by an adversary, there's only one thing that it might recover. 
A share is a commitment. Finally, error correction here captures the idea that recovery of a secret should succeed if there's a unique explanation for the set of shares that have been provided. When I say there's a unique explanation, what I effectively mean is that there's some subset of shares that implicates a message M, and there's no subset of shares that implicates any other message M′. In fact, the subset of shares that implicates M is also unique, up to taking the maximal such set. So if you have that situation, recovery should make this best-effort recovery of the underlying message, and if you don't, the algorithm should say so. Here is a scheme that does it, or rather, it does privacy plus authenticity; you have to add a little bit more to achieve the error correction condition. I won't talk through it all, but the idea is to start with a standard secret sharing scheme and to embellish it appropriately with symmetric encryption and with moderate use of a hash function, modeled as a random oracle for purposes of the privacy proof. About half an hour ago, Jean-Philippe told us to beware of too much crypto. I'm going to tell you to beware of crypto with too few guarantees and too little functionality. I think we've lived with a notion of secret sharing for 40 years now that doesn't actually accomplish what ordinary people would need if they sat down and decided to actually do secret sharing. And I suspect this has strong implications for the extent to which this primitive is used. Thanks very much for a very thought-provoking talk. Unfortunately, we are out of time, and so we have to take questions offline. I now hand over to Nigel. Hello. Oh, that works. Brilliant. You stand there a minute. You'll see in a minute why you're important. Right, so this is the Lightning Talk session, and the way it will work is that people line up there. They're going to come up. They're going to come up the stairs. I'm going to hand them the light. 
They're going to talk until I tell them to get off, and then they will get off, right? This is very important. Now, to explain how this works and to give you a small demonstration, I have a glamorous assistant, Brian, who is going to show you, right? So this is how it works. Brian, would you like to give a Lightning Talk? You have to come over here and hand it over here. Okay, so this is a sample Lightning Talk. Hi, I'm Brian LaMacchia from Microsoft Research in Redmond, Washington. I need an assistant for this talk, and so I'm going to call up Marina Sinusi. Would you come on up, from Cornell University? Okay, how many people were here at RWC last year in San Jose? And you all remember that last year we set a record at RWC for the single largest attendance at an IACR event ever. That was 642 people. Marina, you are registrant number 643 for RWC 2020, setting a new record for RWC and for an IACR event. We're actually at about 645, 646 now. In honor of your record-breaking registration, I have for you a prize sponsored by Microsoft Research Security and Cryptography: a complete set of Cards Against Cryptography, with the special Eurocrypt 2019 expansion pack. So congratulations on your registration, and thank you. There is another set here, which is going to Nigel, so stay around for the end of the lightning talks and maybe that one will be handed out. And if you would like your own set and you're a master's student or in a PhD program, you could come intern for me at MSR Security and Cryptography, because I make enough decks that I hand them out to everyone who interns for me, and we pay you really well, and you can work on really cool stuff. And I put out on Twitter this morning a link to the description of our summer 2020 intern program and an application, so please apply or come find us. And now we're done, and I hand it over. Okay, so that's how it works. 
Now, if any of these people here have a link that they would like to advertise, then what I suggest they do is they come up, they give their talk, and then they put the link on Twitter. So you don't have to read out a www address, you just put it on Twitter. So with no further ado, we start with the lightning talks. And go. Hi, I'm Riad Wahby from Stanford. This is just a public service announcement. The Gallant-Lambert-Vanstone patent on using endomorphisms to speed up elliptic curve point multiplication has expired everywhere except in the US. It will expire in the US on December 25th of this year. So get your endomorphisms ready and enjoy. Ready? And go. Hi, I'm Nick Sullivan from Cloudflare. I am also one of the co-chairs, along with Alexey Melnikov and Kenny Paterson, of the Crypto Forum Research Group. The CFRG is a volunteer organization that is part of the Internet Research Task Force, which is a sister organization of the Internet Engineering Task Force. You may know the IETF as the standards body that defines internet standards from TCP to TLS to ACME, and as the authors of the RFC series. Now, internet protocols need cryptography, and the CFRG is the group that the IETF goes to for cryptographic expertise. The reason Curve25519 is the default curve in TLS is because of the CFRG's work on RFC 7748, for example. The CFRG is currently working on a lot of new projects, including pairing-friendly curves, OPRFs, VRFs, hybrid public key encryption, BLS signatures, PAKEs, and a lot more, all of these for use in internet protocols. And this is a great place to bridge the gap between theoretical cryptography and the real world. There are a lot of participants of the CFRG here in the audience. And if you're interested in getting involved, come see me, let me know, and I'll post a link on Twitter. Thank you. Thank you. Get up on the stage, yeah, nice round of applause, and go. So I'm gonna be redundant and talk about standards. 
A quick survey: who here has ever used a standard? Hands up, hands up. Okay, that's great, it seems very useful. Okay, who here has ever built a standard? Not that many hands. Okay, maybe academia doesn't have the right incentives. Who here would like to see a standard built only by companies or government? Oh, okay, that's the answer. So I'm gonna give you a zero-knowledge proof about the fact that a standard is being built in a community-driven way, together with academics and industry, in a very succinct way: zkproof.org. You know, it's succinct, it's zero knowledge if you're hearing this, and you can extract the witness if you go to the link, so please do that. We have a workshop coming up in April, and I recommend everyone interested in zero knowledge to come and participate. Discussions are very important around this topic, especially now that these proofs are being deployed in many, many places. We have a call for papers; if you have work, do submit it. And we just published, together with the other editors, Eran Tromer and Luís Brandão, the second version of the reference document. It's a 90-page document written in a collaborative way. Go ahead, give us feedback and suggest improvements. Thank you. Thank you. Go that way, go that way. Yep, there we go. It's safer. Hi, I'm Gabriel Kaptchuk. I'm a PhD student at Johns Hopkins. Our group has put together a nice website where you can go and find all the recent publications in applied crypto, because going to eprint and reading 1,500 papers to find the 100 that are practical kind of sucks. If you would like to join our project, use our project, look at it. It's called Acrab, as in "a crab", like the crustacean: acrab.org. You can also find the link on Matt Green's Twitter, because I'm assuming most of you are following him on Twitter. And please join us. It'll be a helpful way for our community to condense the research that we care about the most, which is the stuff that can be applied in the real world. Thank you. 
And thank you, thank you. Go. Hello, my name's Frank Wiener. I work with Sepior, and we provide threshold cryptography solutions where we're using MPC to do multi-party signing and multi-party key management. But the reason I'm here today is to talk to you about the MPC Alliance. Quick show of hands: how many of you have heard of the MPC Alliance before? Great. We launched the MPC Alliance in November of last year, and basically Unbound, Sepior and ZenGo got together and said, why don't we pull together a community of companies developing MPC and people applying MPC in their business? Let's get everybody together, create an open industry forum where we can share information, share best practices, and also increase industry awareness about the adoption and use of MPC. So if you visit MPCAlliance.org, you can come online and gather information. It's an open registration. We've got our first kickoff meeting scheduled for next Tuesday. We're gonna be kicking off marketing committees and technical committees. I'll also be here till about 2:30 this afternoon. I wore this easy-to-find red sweater so you can find me. But please feel free to tag me if you've got any questions, or visit the website. Again, MPCAlliance.org. Thank you. Cool, very good. Everyone, join up. We all want you to be in the MPC Alliance. Right, go. Hi, this is Antoine Delignat-Lavaud from MSR. I just want to say a few words about QUIC. So many of you may believe, incorrectly, that secure channels on the internet are a solved problem with TLS 1.3. There was a lot of effort going into formal proofs and analysis for TLS, and many people believe that QUIC is just essentially a networking thing, nothing interesting going on. I would really like to point out that that is not the case. In particular, during the course of development, QUIC has actually introduced many changes to TLS, including changes to its packet encryption. It has completely replaced the TLS record protocol. It has completely changed some of... 
It introduced its own key derivation and modified some of the handshake messages, such as end-of-early-data. And none of these changes have received much analysis. So this is really a call for people to start worrying, because many people in the IETF QUIC working group are pushing hard to standardize the protocol, even though there are clearly some big gaps in the security analysis and things that are clearly missing, like version downgrades. Which is why we proposed a workshop at NDSS on the security of QUIC. I encourage everyone not only to attend the workshop (it's not too late to submit), but also to look at the security of QUIC. Thank you.

Brilliant. Exactly on time. And the next one.

Hi, my name is Andrew Knox. I'm here representing Facebook, and I am really happy to announce that we're going to release a request for proposals for applied cryptography in a privacy-focused advertising ecosystem. The internet is largely supported by advertising, and privacy-preserving advertising is a very large space, barely explored. There are big real-world impacts and really motivated implementers; this is a huge opportunity. There are challenges in recommendation, machine learning, and causal inference, and opportunities for multi-party computation, zero-knowledge, and homomorphic encryption. It's a very big opportunity; there's lots of stuff here. We're offering about 10 grants of about $60,000 per award, as an unrestricted gift, for people interested in researching advertising with cryptography. This is a follow-up to our recently closed RFP that we offered at CCS, so if you applied for that and didn't end up getting an award, please reapply. We also have a lot of positions available for people interested in coming on board. We have these cute little info cards that have little privacy screens on them.
For more information, you can check out research.fb.com or talk to an FB employee. We'd be really excited to talk to you about it. And we're going to host a happy hour tonight if people want to chat about it at the event. Thank you.

Happy hour. What more can you want out of crypto? There we go.

Hello, everyone. I'm Luís Brandão. I'm with the crypto group at NIST. We are currently interested in standardizing threshold schemes for cryptographic primitives. That essentially means enabling the distribution of trust over several parties when you want to perform signatures, public-key decryption, symmetric encryption, or even random-number generation. We currently have a preliminary roadmap online; it's open for public comments until February 10th. We would really like to have your feedback, and to engage with whoever is willing to collaborate with us. Thank you.

Brilliant. And the next one.

Hello, I'm Jalan Romaye, working at Turkish Security. I don't know if you have heard of functional encryption, but if you have, maybe you think it's super slow and not practical. Well, there is a European research project called FENTEC which is trying to change this. We just released two different libraries on GitHub, at github.com/fentec-project. And I'm here basically to tell you: please try to use functional encryption in practice. It might be more practical than you think.

Okay. Hello again. A small announcement: this is to announce a new end-to-end encryption protocol and software that we released this morning. You can look at the Twitter of my company, Teserakt, a sponsor up there somewhere. We're not competing with Signal, Wire, OTR, or whatever: it's end-to-end encryption in the IoT, machine-to-machine context, which is a slightly different problem than mobile-to-mobile. So check it out, find bugs, find attacks, and let us know. Thank you. Brilliant.
And talking of the sponsors up on this screen: if you're a company and you're not up there, you're missing out. So please talk to me at some point and we'll put your name up there next year.

Hi, I'm Nadim from Symbolic Software. Protocols are a big deal: we use Signal, we use TLS all the time. And sometimes designing protocols can get really complicated. That's why, for the past 20 years or more, people have been coming up with tools for formally verifying protocol designs. However, these tools can also get pretty complicated: a lot of the tools that let you illustrate, reason about, formally verify, and prove security properties of protocols are really hard to use and require specialized knowledge. That's why I'm working on a new free, open-source software project called Verifpal, which is meant to be an easy, friendly way to think about complex protocols like Signal, TLS, Keybase, 5G, whatever, and to actually write them down, model them, reason about them, and verify security properties about them. So please check it out: V-E-R-I-F-P-A-L, verifpal.com. I also have stickers in this pocket, which contains other things. Okay, here's a sticker, so come get a sticker. There's also an instruction manual with a full Japanese manga where Verifpal goes on adventures, which is really interesting to a lot of people. Thanks.

Cool, thank you. And the next one, go.

Hi, I'm Pavel. I'm a software engineer with Google working on Certificate Transparency, Key Transparency, and a bunch of other transparency projects. Over the years of working on these kinds of transparency systems, we've developed a system called Trillian, which you can find on GitHub. It's basically a generalization and extension of the ideas behind Certificate Transparency. One of the primitives it provides is a tamper-evident, cryptographically verifiable log, or ledger.
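The core property of such a log can be sketched in a few lines. This is a toy hash chain, not Trillian's actual design (Trillian uses a Merkle tree with inclusion and consistency proofs); all names here are illustrative:

```python
import hashlib

def leaf_hash(entry: bytes) -> bytes:
    # Domain-separate leaf hashes from interior hashes, as RFC 6962 does.
    return hashlib.sha256(b"\x00" + entry).digest()

class AppendOnlyLog:
    """Toy tamper-evident log: each head commits to the previous head and
    the newest entry, so rewriting any past entry changes the head."""

    def __init__(self):
        self.entries = []
        self.head = b"\x00" * 32  # head of the empty log

    def append(self, entry: bytes) -> bytes:
        self.entries.append(entry)
        self.head = hashlib.sha256(self.head + leaf_hash(entry)).digest()
        return self.head

    def recompute_head(self) -> bytes:
        # What an auditor does: replay all entries and check the head.
        h = b"\x00" * 32
        for e in self.entries:
            h = hashlib.sha256(h + leaf_hash(e)).digest()
        return h

log = AppendOnlyLog()
log.append(b"example.com cert #1")
head = log.append(b"example.com cert #2")
assert log.recompute_head() == head   # auditor agrees with the published head
log.entries[0] = b"evil.com cert"     # tamper with history...
assert log.recompute_head() != head   # ...and the head no longer matches
```

The point of the real system is the same: once a head is published, history behind it cannot be silently rewritten; a Merkle tree additionally makes the auditor's checks logarithmic instead of linear.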
So if you're interested in technologies like this, talk to me later today, or reach out to the team via the links on GitHub. Thank you very much.

Cool, thank you very much. Okay, let's go.

Hello, everybody. My name is Xingqiu, and I'm representing CommScope. Probably not many of you have heard of CommScope, but my organization is a PKI center. We started as General Instrument; then we became Motorola; then Google bought Motorola Mobility, and Arris later acquired the PKI center along with the home division. Arris then also acquired Pace and Ruckus, and last year we became CommScope. One thing that held through all these organizational changes: we worked with many supply chains, OEMs, and factories, so we know the supply-chain problems in handling PKI. It really is different from handling transistors and batteries, because those components can only go into one device, whereas with keys and certificates you see cloning problems and other security problems on the supply-chain side. So we developed a system to secure the supply chain. If anybody has a cloning problem at the factory, we really know how to handle it. Another thing I want everybody to know: we are hiring applied security engineers on the systems side. If anybody is interested, please let me know. Thank you.

Cool, thank you. And Dan.

All right, a shorter replay of my lightning talk from last year. We once again have some United Airlines drink vouchers. These expire at the end of January 2020, so if you have the misfortune of flying United Airlines this month, just come see me and get your drink vouchers. That's an offer you can't refuse.

JP again.

Hi, I'm Jack O'Connor. Seven years ago, NIST selected SHA-3 as a hash function, and back then I was a bit disappointed because BLAKE was not chosen.
So we designed BLAKE2, which was much faster; many people used it and loved it, and it's now in many places. But BLAKE2 might still be a bit too slow. So we're happy to announce BLAKE3 today. That's the intro.

Hi, we're announcing BLAKE3. It's a new hash function. We literally just pushed the repos public on GitHub and pushed some crates to crates.io. It's fast: on my laptop, it's about three times faster than BLAKE2b; on my AWS web server, with those nice AVX-512 vector units, it's about four and a half times faster than BLAKE2b. The command-line utility we've created, b3sum, which is available right now on crates.io (cargo install b3sum, try it), does multi-threading by default, because BLAKE3 is a multi-threaded hash. On my laptop, when I compare it to sha256sum, it's 20 times faster at hashing a large file. The reason it's fast is that it's parallel, and the reason it's parallel is that it's a Merkle tree on the inside. Professor Merkle was here yesterday; I don't know if you saw him in the audience. That's exciting. It supports verified streaming, because it's a Merkle tree: you can verify the hash of a video while you stream it over the network. And the other big difference from BLAKE2 is that it's one function, not a family of functions, so you don't have to make any hard choices. It's BLAKE3. It's joint work with Samuel Neves and Zooko. Thank you.

Brilliant. And the next one.

Hello, everyone. My name is Philipp Jovanovic, and I recently joined University College London's Information Security Research Group. Yeah, that's quite a mouthful. I wanted to announce that we are looking for PhD students interested in doing research in security, cryptography, and privacy. We have funding through a new Centre for Doctoral Training in cyber security, regardless of residence or nationality. The next application deadlines are January 31st and May 15th. If you want more information, come find me afterwards.
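The "Merkle tree on the inside" idea from the BLAKE3 announcement can be sketched roughly as follows. This is only an illustration of why tree hashing is parallelizable, using the standard library's BLAKE2b as a stand-in; it does not reproduce BLAKE3's actual chunk counters, flags, keying, or output format:

```python
import hashlib

CHUNK = 1024  # toy chunk size; purely illustrative

def chunk_hash(data: bytes) -> bytes:
    # Leaf hashing: each chunk is hashed independently, so all leaves
    # can be computed in parallel (threads or SIMD lanes).
    return hashlib.blake2b(data, digest_size=32, person=b"leaf").digest()

def parent_hash(left: bytes, right: bytes) -> bytes:
    # Interior node: combine two child hashes (domain-separated from leaves).
    return hashlib.blake2b(left + right, digest_size=32, person=b"node").digest()

def tree_hash(data: bytes) -> bytes:
    nodes = [chunk_hash(data[i:i + CHUNK])
             for i in range(0, len(data), CHUNK)] or [chunk_hash(b"")]
    # Merge pairs level by level until a single root remains; an odd
    # node at the end of a level is carried up unchanged.
    while len(nodes) > 1:
        nodes = [parent_hash(nodes[i], nodes[i + 1]) if i + 1 < len(nodes)
                 else nodes[i]
                 for i in range(0, len(nodes), 2)]
    return nodes[0]

root = tree_hash(b"some streamed data " * 400)
```

This structure is also what enables verified streaming: to check one chunk against the root, you only need that chunk plus the sibling hashes on its path to the root, not the whole file.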
I'm still around today and tomorrow. Thanks a lot. Thank you very much. And go.

Hello, everyone. I'm Yuval Yarom, from the University of Adelaide in Australia. I just want to let you know that I have an open position for a post-doctoral researcher, working officially on software solutions to side-channel attacks, and unofficially on anything that I find interesting. So if you want to spend a year in a place where everything that moves can kill you, and where the laws of mathematics do not apply, please contact me.

Hi, I'm Paul Crowley. I do cryptography for Android. I'd like your help making the phones sold in countries like India more secure. This is actually a repeat of a lightning talk I gave at Fast Software Encryption 2019, where I announced a prize for research into fast target-collision-resistant functions. But then I failed to actually put any information about the prize online, so oddly enough, I didn't get any entries. So I'm trying again: the information is now online, and it'll be linked from the Twitter account. Please help me make cheap phones more secure. Thank you so much.

Thank you. Lesson: put your stuff online. Okay. Go. Thank you.

Hi, I'm Filippo. I work on the Go cryptography standard library and on the security of the Go language ecosystem. Last year, we designed and shipped the Go checksum database, a binary-transparency system that stores the checksums of the source of all Go modules. It's inspired by Certificate Transparency, but unlike CT, it has no signed timestamps, and the clients verify the signed tree heads themselves. We also designed a system to deliver cacheable parts of the tree so that clients can verify proofs efficiently. We welcome review. And if you have a package ecosystem, we suggest using something like our design to make sure that the contents of a version are immutable, which is a much more user-friendly solution than asking authors to manage keys.
You can find it by searching for the Go checksum database. It's a joint design with Russ Cox, and it was implemented by Katie Hockman's team on top of Trillian, which you heard about before. Thank you.

Brilliant. Thank you very much. And we're down to the last three.

Hello, I'm Jack Corpstrad, from the Lich Coin Company. And I'm here to tell you that encryption is hard. If someone asks you how to encrypt something, generally the two answers you'll give are "use GPG" or "copy this line from Stack Overflow", and neither of those is particularly usable. So I'd like to tell you about a new encryption format and tool called age. It was designed by Filippo Valsorda and Ben Cartwright-Cox, with usability at the forefront: no configuration, small keys, a minimal design, which we are going to keep that way, and underneath, just boring primitives, with a seekable streaming encryption scheme that lets you do the things you want to do safely and securely. We have implementations in Go and Rust leading off, and more coming. The specification is at age-encryption.org/v1, if you want to have a look; it's currently in beta, and this would be the perfect time to look at it and give some feedback. And just to give an indication of how short the keys are, my public key is age18f63qx4gk8x... oh, for God's sake. All right, come.

All right, hi, everybody. My name is Kinan. If you enjoyed Marcel's talk this morning about MPC frameworks, do check out my framework JIFF, which was mentioned in that talk. It's super cool: a JavaScript implementation of federated functionality. Yes, it's an MPC framework in JavaScript. It's not totally a bad idea, and it's faster than you think. It's information-theoretically secure, so you don't have to worry about a lot of crypto, and it's a lot easier to use than you think. It's very cool.
You can include it in your web applications. We have a lot of documentation and tutorials. I promise you, we've already deployed it earlier this year and it worked like a charm. We expect the paper, and probably version one, to be released in February. So check it out. Thank you very much.

And the final one.

Hello. I have three questions for you. By the way, I'm Arm, from Arm. One: do you want to do things that have an impact on the lives of billions of people? Two: can you think like an attacker? Three: do you want to work with a boss who on some occasions can even be funny? Then send me your resume. Okay.

And I'm sure... hello? I'm sure Roberto put the advert up. However, before you all go, there is something still left to do: the cards. Now, I don't know about you, but there was one speaker in the lightning-talk session I felt very, very sorry for, mainly because he obviously has to keep flying United. So, as recompense for having to fly United, I think he deserves some Cards Against Cryptography. Dan, would you like to come up? I mean, he knows it's bad; why does he keep getting these things? Yeah, it's this never-ending nightmare: you can't escape, they don't let you escape. Don't get me started. United: don't do it. Commiserations to everyone who didn't get the Cards Against Cryptography, and also commiserations to everyone who's flying United tomorrow. We'll see you after lunch, and lunch is out there. Thank you.