Our next speaker is Yolan, and he actually told me not to say his last name because I would fuck it up. But he's going to talk about some pretty cool future crypto stuff. So give him a round of applause.

Hey, hello everybody. So yeah, I'm Yolan Romailler, and I'm super happy to be here today to talk about a dead man's switch for full yet responsible disclosure, which is basically how to do cool shit once you're able to encrypt towards the future. So, I'm Yolan. I'm from Switzerland, which is not the same as Sweden: we've got chocolate and cheese fondue, and they've got IKEA and even more snow than we do. I'm an applied cryptographer at Protocol Labs on the distributed randomness team. But don't worry, this talk won't be too much about math and crypto, as in cryptography. Some of it, but not too much. And yeah, let's get going.

So I'll first do some intro to explain what you need to know to really understand what's going on later. Then I'll talk about full disclosure versus responsible disclosure, which is quite an interesting topic in infosec. Next, we'll see what time lock encryption is. It basically means we can encrypt something towards the future, but we'll see what that actually means and what it enables you to do once you have it. Then we'll see how to use it: I have a small demo, and I'm pretty happy to be releasing three different tools today, which is pretty nice. And then we'll talk about what can go wrong when we're dealing with the future.

So without further ado, let me start with the preliminaries. With a digression, because I like those. I'm not going to do too many of them, don't worry. Do you know what randomness is? According to the dictionary, randomness is simply "the quality of being random." Great. Not super useful, though. I prefer to see randomness as the quality of being unpredictable, of lacking any pattern.
And that's a more useful definition to have when we do cryptography, and computer science in general. Everybody usually has some kind of intuition of what is random and what isn't. If I showed you a binary string of 32 characters that was all ones, and I told you this string was picked at random, I'm not sure you'd be super convinced, right? Because it doesn't look random. Even though all 32-bit strings have exactly the same probability of being drawn at random, and it can actually happen. I tried a few days ago on my computer, drawing random numbers constantly, and it took about 15 minutes until the all-ones string was drawn. So it can happen to get a 32-character all-ones string when you draw stuff at random. But anyway.

So, randomness is hard. That's something you'll hear cryptographers say quite often. Because a lot of the cryptographic schemes we use nowadays to secure the web, such as ECDSA or EdDSA and a lot of signature schemes actually, but also Diffie-Hellman key agreement and so on, are vulnerable when there is a bias in the randomness being used. And that bias can be very tiny: a one-bit bias in a signature scheme can allow full private key extraction, which is basically the worst thing that could happen to any cryptosystem.

And to make it even harder, when we're talking about computers, we're not actually using true randomness. There is such a thing as true random generators, pulling data from actually chaotic events such as, I don't know, bubbles in water or atmospheric noise and so on. But that's not what we usually use in a computer. What we usually use is a pseudorandom generator, which pulls some entropy from whatever you've typed on your keyboard, or from the jitter of the network. And that's not necessarily super random, and could be somewhat predictable.
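Whatever the entropy source, application code should read from the operating system's CSPRNG rather than invent its own generator. A minimal sketch in Python (my own illustration, not from the talk):

```python
import secrets

# Draw 32 random bits from the OS-backed CSPRNG (e.g. /dev/urandom on Linux).
bits = format(secrets.randbits(32), "032b")
print(bits)  # some 32-character binary string; all-ones is as likely as any other

# Every specific 32-bit string has probability 2**-32 of being drawn,
# so the all-ones string is improbable but perfectly possible.
p_all_ones = 2 ** -32
print(p_all_ones)
```

The `secrets` module exists precisely so that application code never touches the non-cryptographic `random` module for anything security-sensitive.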
And one important thing about randomness, in my opinion, is to recall that there are different kinds of it. To understand what distributed randomness means, because I told you I'm on the distributed randomness team, so you must have figured my time lock encryption scheme has something to do with it, I need to explain the different kinds of randomness.

The first two flavors of randomness we see most often are public and secret randomness. Public randomness is what you get whenever you, I don't know, play the lottery and watch people drawing numbers at random on TV. They're drawing a random number that is meant to be public. That's really all it means to draw public randomness. Secret randomness, on the other hand, is something that is meant to stay secret. For example, when you generate a PGP key, you're using secret randomness to create your key. Most of the time when you connect to a website, you'll be connecting with TLS, and TLS creates ephemeral secret keys and generates nonces, numbers meant to be used only once, and IVs, initialization vectors, and so on. All of these are secrets, and they're meant to stay secret. But they're also often random. So whenever you have a random nonce that's meant to stay secret, that's secret randomness, basically.

So public randomness is nice: you can draw a random number and show it to everybody. Sure. But if I ran a lottery right now, today, selling you tickets, and then my pal Patrick in the room won when I drew the number at random, I think you'd be somewhat skeptical. You'd say: oh, you cheated. And so there is a really nice notion of verifiable randomness, which basically means you're able to verify the randomness was drawn properly and is properly random. And that's really useful, for example, if you want to be off the hook.
Because it's actually possible my friend Patrick would win the lottery today, fairly drawn at random, since there are not that many people in the room. And if I used verifiable randomness, you could verify I wasn't cheating, and I'd be off the hook. So verifiable randomness is a very useful thing to have in general when dealing with public randomness.

Next, we have the notion of distributed randomness. Achieving consensus is a difficult thing when you have a large system, right? And there are different ways of achieving consensus. But it's even more difficult if you want a set of nodes to agree on a given random number without any node being able to predict it or bias it. Distributed randomness has a few different kinds of solutions, and my team is actually behind one of them, which is called drand. drand is meant to be a public randomness service that anybody can use, just like you use NTP servers to sync the time on your computers, or just like you use free public DNS servers to resolve domain names. We figured the internet really needed a public, verifiable, distributed randomness service, and that's what we tried to create and launch.

So drand is basically just software. It's open source; you can check it out. It uses pretty cool threshold cryptography based on pairings, specifically the BLS signature scheme, to generate randomness in a way that's verifiable. And drand has actually been deployed in practice by the League of Entropy, which is a group of 16 different parties and organizations running 23 nodes. The cool thing about drand using threshold cryptography is that you don't need to trust any single one of these parties, as long as you trust that there is never a threshold number of malicious nodes in the network.
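The t-of-n threshold intuition can be illustrated with plain Shamir secret sharing. To be clear, this is only an illustration of the threshold idea: drand itself uses a distributed key generation and threshold BLS signatures, not this toy scheme.

```python
import secrets

P = 2 ** 127 - 1  # a Mersenne prime; all arithmetic is done mod P

def make_shares(secret, t, n):
    """Split `secret` into n shares such that any t of them recover it."""
    # Random polynomial of degree t-1 whose constant term is the secret.
    coeffs = [secret] + [secrets.randbelow(P) for _ in range(t - 1)]
    # Share i is the polynomial evaluated at x = i (never at x = 0).
    return [(x, sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def recover(shares):
    """Lagrange-interpolate the shared polynomial at x = 0."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

secret = 123456789
shares = make_shares(secret, t=13, n=23)   # the League's 13-of-23 setting
print(recover(shares[:13]) == secret)      # any 13 shares suffice -> True
print(recover(shares[:12]) == secret)      # 12 shares reveal nothing -> False
```

With fewer than t shares, the interpolation yields a value that carries no information about the secret, which is why compromising fewer than a threshold of nodes gains an attacker nothing.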
And here you can see the likes of Cloudflare and Protocol Labs, but also universities, security companies such as Kudelski Security, and a lot of different parties that are not likely to collude. The threshold currently being 13, you can have fairly good trust that the network won't collude and do nasty things. drand has been run for two years by the League of Entropy and it's really solid. So far so good.

So now that we know what distributed randomness is, and that there is a service out there providing it for anybody to use, we can dig into the title of my talk, I guess. You all know about full disclosure and responsible disclosure, I suppose, but I'm still going to walk through the different kinds of disclosure that are out there. Disclosure is basically what you do when you find some vulnerability in a piece of software, a product, or a service, and you want to disclose it, either to the vendors, the creators, the coders, or to the public. According to OWASP, the Open Web Application Security Project, there are three different types of disclosure. But I think they're missing one: there is a fourth one, non-disclosure, where you find something cool and just decide to, I don't know, use it for fun and profit. Sure, that's a way to do things. The other types of disclosure are full disclosure, where you find something cool and go: hey, listen, I found something cool, here is a zero-day anybody can use, and here is a proof of concept too, because I'm nice, so you can really weaponize it directly. That's a way of doing things. Please don't do it on Fridays, it's really mean to the security teams. No, truly. Then there is private disclosure. Nowadays we have a lot of bug bounty programs which, you know, give you a reward if you find a vulnerability in somebody's product.
But often these bug bounty programs forbid you from releasing your findings if you want the reward, which leads to a lot of private disclosures. I'm not convinced by that, because I'm a cryptographer, so I don't believe security through obscurity is a good thing. What I would prefer instead is responsible disclosure, for example, which is basically private disclosure first, and then you say: hey, listen, in 30, 60, 90 days, six months, I don't know, I'll be releasing my findings for everybody to look at. So you have some time to patch, but hey, I'm still going to release it, do a blog post, maybe go to DEF CON to present it, and so on.

And coordinated disclosures are actually quite widely used in the industry. If you look around, you can see, for example, Google Project Zero, which does a lot of vulnerability research and finds a lot of vulnerabilities. They always use a hard-deadline policy with their disclosures, which says you have 90 days to patch your product. If you do, they'll give you an extra 30 days to prepare your blog post or your communications. But if you don't patch within 90 days, they release it publicly for anybody to be aware of, to help people protect themselves, basically. And that's actually quite effective: according to their own metrics, only 3% of their disclosures are not patched within 90 days, which is great. It means most vendors out there actually use that time to patch their software effectively. The remaining 3%, though, don't. I don't know what they do. Maybe they just ignore the vulnerabilities, but yeah, they don't patch.

So let's recap. A coordinated disclosure, or responsible disclosure, timeline basically looks like this. Let's say you find something on January 1st. You take a few days, you write a report, you create a proof of concept, whatever.
Let's say mid-January you disclose it privately to the affected vendor. Then they come back to you early February, because, I don't know, the people responsible were on vacation, or they were slow at reading their security@ mailing list. They come back and say: oh yeah, you're right, it's a vulnerability, thank you, we will patch. And then you can talk to them and say: eh, I would like to, I don't know, go to DEF CON and present it, so how about doing a responsible disclosure? And you can agree on a given release date in the future, maybe on May the 4th. You could say: hey, I'm going to release everything on my blog on May the 4th. They've been warned, so they can patch, and they have time to patch. Which is nice. And then, on May the 4th, you release it.

However, there is a small issue with that. I don't know if you're familiar with the notion of a bus factor, but the bus factor of a project is basically the number of people that need to be crushed by a bus before the project is critically impacted. With a responsible disclosure, it's very likely you've disclosed only to the vendor, and you keep it to yourself until the agreed-upon release date, right? So here, during that time-to-patch period, you actually have a very low bus factor, maybe even a bus factor of one. And it could be that some of the vendors in that 3% who don't patch within the time to patch are actually more malicious than we thought. Instead of letting you break the news at DEF CON, they could try to, you know, actually shoot the messenger. And that's a bit annoying. So if you're careful, or if you want to prove you found something cool, maybe you've published a hash of your findings on Twitter: you post the SHA-256 sum of your report.
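That commit-then-reveal step is one line with any SHA-256 implementation. A quick sketch (the report text and function names are my own, not from the talk):

```python
import hashlib

report = b"Full vulnerability report, PoC included..."

# Publish only the digest now (e.g. in a tweet). It commits you to these
# exact bytes without revealing anything about them.
commitment = hashlib.sha256(report).hexdigest()
print(commitment)

# Later, once the report is public, anyone can recompute and compare.
def verify(published_report: bytes, commitment_hex: str) -> bool:
    return hashlib.sha256(published_report).hexdigest() == commitment_hex

print(verify(report, commitment))              # True
print(verify(b"tampered report", commitment))  # False
```

One practical caveat: if the committed content is short or guessable, append a long random salt before hashing, and reveal the salt together with the report; otherwise the commitment can be brute-forced.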
And then at a later date, you release the pastebin with the text, and anybody can verify it has the same hash. So anybody can check that you really were the one to have found the issue, back on January 1st. Which is a nice way to do things, but it also means you need to be alive on May the 4th to be able to release the report, right? And that brings us back to the time-to-patch issue, which means we might want some kind of dead man's switch, so that if we're not there anymore, our findings would still be released. And that's a very good use case for time lock encryption.

So what is time lock encryption? It's very simple: it's basically being able to encrypt something today that cannot be decrypted until a later date, maybe Christmas, or maybe tomorrow. You can just release the ciphertext, and anybody can try to decrypt it: it won't work. And tomorrow, when they try again, it decrypts and it works. Seems a bit like magic, right? It's also sometimes called time-lapse encryption or timed-release encryption; these are all the same thing.

And it has pretty cool applications. You could use it in auctions, to do sealed-bid auctions, for example. Or, if you're running a blockchain, you could use it to prevent MEV issues, where you have miners trying to grind the mining process to extract more value from the blocks than just the transaction fees. You could use it for a cool conditional-transfer-of-wealth thing: you encrypt your bitcoin private key towards, I don't know, two years from now. If you die within the next two years, your children will be able to get your private key in two years. And if you're still alive, you just transfer your bitcoins to a new address, and nobody can extract them anymore. You could also use it for, I don't know, electronic voting, for example. Or, more interestingly, for something somewhat related to electronic voting.
You could use it to protect documents that have a known embargo period, like legal documents that must be released six months after the fact, or something like that. It could be very useful for those kinds of things. And since you're attending DEF CON, maybe you have other funny ideas. You could use it to make very well-behaved ransomware, which instead of encrypting your files forever would encrypt them for, you know, six months. If you're in a hurry and want them back today or tomorrow, please pay; otherwise, you just need to wait six months. Fine, right? I mean, I would love more fair ransomware. It's almost, you know, honest, if they do that. Also, there is a cool paper that was released a few years back about using time lock encryption to prevent emulation in antivirus software. When an antivirus tries to emulate your binary to see if it's doing something fishy, you could use time lock encryption so the antivirus cannot see what's going to happen when the payload gets decrypted, because it's not the right time yet.

These are all cool ideas, and time lock encryption is actually a pretty old idea. It was first proposed by Tim May in 1993 on the cypherpunks mailing list. For those of you who don't know him, Tim May was a pretty cool guy who was the father of the crypto-anarchist movement. So yeah, cool guy. He introduced the idea along with a way of solving it, which was basically to give decryption keys to notaries, which is just trusting somebody with your decryption keys. Not amazing.
Three years later, in 1996, Ron Rivest, Adi Shamir and David Wagner, and you might have heard of the first two because they're behind the RSA cryptosystem, proposed a proof-of-work-based system called time-lock puzzles. The idea is basically that if you can force somebody to do a certain amount of work sequentially on their computer, you can make sure they're not able to decrypt before the right time has come, right? They also argued in that paper that there are only two ways of doing time lock encryption: using proof of work, or using trusted third parties. And it was actually implemented in practice: Rivest published a time-lock capsule in 1999 which was meant to last 35 years, accounting for Moore's law, so for the fact that computers would keep getting faster. He was pretty sure it was solid.

Well, naturally, only 20 years later, in 2019, the puzzle that was supposed to take 35 years to solve was actually solved, by two different teams. One of them was literally one guy running the computation for three and a half years on his Intel CPU. So even though it was supposed to take 35 years, it took only about a tenth of that. Not because computers were that fast, they were not as fast as Rivest thought they would be in 2034, but the squaring process he was using to protect his puzzle was not as slow to run as expected. The other team, a collaboration of the Ethereum Foundation, Supranational and Protocol Labs, was able to do it in only two months, using FPGAs with very low latency squaring circuits, which is, you know, way too fast. So using proof of work is not amazing. And then there is a whole list of people who also researched the time lock idea and came up with other ways to do it: using Bitcoin's proof of work, using fancy cryptography based on obfuscation, homomorphic stuff, and so on. All of these are really nice ideas, but they are not practical at all.
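The Rivest–Shamir–Wagner puzzle is easy to sketch: the puzzle setter, who knows the factorization of n, can compute 2^(2^t) mod n cheaply by first reducing the exponent mod φ(n), while a solver without the factors has no shortcut and must perform all t squarings one after another. A toy version (real puzzles use an RSA modulus of 1024+ bits and an enormous t):

```python
# Toy RSW time-lock puzzle. The asymmetry: knowing phi(n), the setter does
# two modular exponentiations; without it, the solver needs t sequential
# squarings, which no amount of parallelism can shortcut.
p, q = 1000003, 1000033          # toy primes; real puzzles use ~1024-bit ones
n = p * q
phi = (p - 1) * (q - 1)
t = 10_000                       # number of sequential squarings required

# Setter's shortcut: 2^(2^t) mod n == 2^((2^t) mod phi) mod n, by Euler's theorem.
e = pow(2, t, phi)
fast = pow(2, e, n)

# Solver's grind: t squarings, inherently sequential.
slow = 2
for _ in range(t):
    slow = slow * slow % n

print(fast == slow)  # True: same value, wildly different cost profiles
```

The 2019 FPGA break didn't defeat this asymmetry; it just showed that estimating how fast a single squaring will be on future hardware is very hard, which is exactly why wall-clock guarantees from proof of work are shaky.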
So, to date, there was no practical way of doing time lock encryption besides proof of work. And I don't really like proof of work. I mean, it's burning the planet, and it's not super nice. Also, if you get faster hardware, you can break it faster, naturally. That's the problem we were actually able to solve.

So our goal here is to encrypt towards the future, right? It would be really nice if we had a cryptographic reference clock ticking, I don't know, every 30 seconds, for example, so that you could say: okay, I'm going to encrypt towards round number 10,000, and I know round number 10,000 will happen in two days. That idea of a cryptographic reference clock was actually already introduced in a paper in 2017, and that's a pretty nice thing, but they never really created a practical reference clock. And that's where drand comes in. Because drand releases random values every 30 seconds, and it has been doing so for two years. And if you trust that there is never a threshold amount of malicious nodes in the network, you can trust it will keep releasing randomness on time for, you know, the foreseeable future.

So what we can do now is basically take the drand rounds and map them to specific future times, and rely on the BLS signatures of the drand beacons to do practical things. I told you earlier that BLS is a pairing-based signature scheme. And pairings are really nice, because there is also the notion of identity-based encryption, which is based on pairings. With that, we can encrypt something towards a specific message, and whenever the signature on that message is released, we'll be able to decrypt it. That's the magic of pairings, basically. So if you want the math, here it is. A pairing is basically a bilinear map from two groups, G1 and G2, onto a target group, GT. And I told you: it's bilinear.
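Written out in symbols (using the usual multiplicative notation for the pairing output), the bilinearity property and the identity everything below relies on are:

```latex
% Bilinearity of the pairing e : G_1 \times G_2 \to G_T
e(aP,\, bQ) = e(P, Q)^{ab}

% BLS setup: secret key s, public key pk = s \cdot g_1 \in G_1,
% signature on message m: \sigma = s \cdot H(m) \in G_2.
% Then anyone can check (and exploit) the identity:
e(g_1, \sigma) \;=\; e(g_1,\, s \cdot H(m)) \;=\; e(g_1, H(m))^{s} \;=\; e(s \cdot g_1,\, H(m)) \;=\; e(pk,\, H(m))
```

This single equality is both how BLS signatures are verified and, via Boneh–Franklin-style identity-based encryption, how a ciphertext locked to a message m becomes decryptable the moment the signature σ on m is published.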
Bilinear means that if you take the pairing of g1, the generator of the group G1, with a signature sigma, and in BLS a signature is basically the secret key times the hashed message, living on G2, then it's equal to the pairing of g1 with the hashed message, raised to the power of the secret key. And that's really nice, because anybody can compute the pairing of g1 with the hashed message. But anybody can also compute the pairing of the public key of the BLS scheme with the hashed message, and the public key is basically the secret key times the generator of G1. So the two sides match up, and that's what makes the whole thing work.

Oh, what happened? That's a bit annoying. It seems the screens are frozen, actually. Can I have the AV guy check? Yeah, the big screen is not updating anymore, and I don't control the big screen. I think it's a back-end issue. Can you reboot the screens? Okay, it seems to be back. So I'm not sure what you saw all this time; has it been frozen for long? You didn't see any of the slides? Okay, so what I was saying must have sounded very strange to you anyway. So: that was the timeline for encrypting towards the future, which is a really nice thing, where you can map ciphertexts to specific rounds. The nice thing is that we use pairings, and here is the math. You can check the math, or you can trust me: it works. And this is even more math. Pairings are really cool; they let you do really cool stuff, and if you download the slides, there are even more details at the end of the deck.

There is one problem, though: we need to be able to predict the message that is going to be signed, in order to encrypt towards that specific message in the future. And drand was using chained randomness.
Every round was actually linked to the previous round, so it wouldn't work too well: you can't predict the message of a future round. But the security assumption behind it is only that there is never a threshold of malicious nodes in the network. So we could just unchain it; we didn't need chained randomness. We can just sign a message which is the round number, or the hash of the round number, and that works under exactly the same security assumption as the previous version. That was actually released on testnet a few weeks back, and it's coming to mainnet in mid-September.

There is another issue: if you want to encrypt very large files, you can't really, because the time lock scheme we came up with can only encrypt small blocks, maybe a thousand bits tops. The easy way out is that you encrypt with AES, which lets you encrypt gigabytes and gigabytes, and you just time lock encrypt the secret key you used with AES. That's exactly what PGP does when you use PGP to encrypt files. So it's a really nice solution whenever you need to encrypt lots of data.

With that, we've actually created two time lock libraries, one in Go and one in JS, which allow you to encrypt stuff today if you want. Along with the libraries, we're also providing a CLI tool: just like you use PGP, you can use tle to do time lock encryption in your terminal today, provided you go install it on your machine. tle is fairly easy to use. It's based on age, which is a really nice command-line tool for modern public key encryption, created by Filippo Valsorda. And we figured a command-line tool might be too difficult for people to use in a demo, right? So we came up with a JavaScript library and a web demo. So let's actually try the web demo, if I click the link. Anybody can try it; it works on the phone too.
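Under the hood, turning a chosen wall-clock time into a drand round number, and deriving the message that round's signature covers, is simple arithmetic. A sketch assuming a 30-second period, a known genesis timestamp, and SHA-256 of the big-endian round number as the signed message; the genesis value here is made up, and the real network's exact message encoding may differ:

```python
import hashlib
import struct

GENESIS = 1_600_000_000  # hypothetical genesis time (Unix seconds)
PERIOD = 30              # seconds between beacons

def round_at(unix_time: int) -> int:
    """First round whose beacon appears at or after `unix_time`."""
    if unix_time <= GENESIS:
        return 1
    # Round 1 fires at GENESIS, round k at GENESIS + (k-1)*PERIOD.
    return (unix_time - GENESIS + PERIOD - 1) // PERIOD + 1

def round_message(round_number: int) -> bytes:
    """Message signed for a round in unchained mode (assumed encoding)."""
    return hashlib.sha256(struct.pack(">Q", round_number)).digest()

# Encrypt "towards" two days from genesis: pick the round, then time lock
# encrypt to that round's message; decryption works once its beacon is out.
target = GENESIS + 2 * 24 * 3600
print(round_at(target))  # 5761: one round every 30 s for two days, plus round 1
print(round_message(round_at(target)).hex())
```

This is why unchaining matters: the message depends only on the round number, so it is fully predictable for any future time, which is exactly what encrypting towards the future requires.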
Timevault, that's the name of the web demo, is basically just a way to encrypt your text, whatever text you have to encrypt, or your vulnerability report if you prefer, and you just choose a time in the future. So let's say 5:05 p.m. Okay. I can encrypt something. Now, if I copy-paste that into the decryption side, it should fail, because it's not yet five past five. But if we wait just a few extra seconds, it should work, right? Because it will be five past five. And now it's five past five according to my screen, so if I try again... classic demo effect... ta-da! So we can see it works in practice today, relying on drand to provide the randomness. Almost too quick.

So if you need to use it in your project, or if you have cool ideas of stuff to do with time lock encryption, please go ahead and use our libraries. You can even choose to use your own drand network if you don't trust the League of Entropy. Everything is really easy to use, and it's actually based on cryptography that has been researched since the early 2000s: BLS is from 2001, and the identity-based encryption system we're using is from 2003, and it provides proper security guarantees.

Now, there is just one remaining set of problems with time lock encryption. We've seen it's a cool way to build a dead man's switch: you encrypt your vulnerability report, and instead of posting the SHA-256 of your findings on Twitter, you directly post the link to your pastebin with the ciphertext, and in 90 days anybody can decrypt that ciphertext, right? Cool. But the problem is, we're talking about the future, so there could be new attacks. Somebody could come up tomorrow with a new attack against BLS, and that would break all the ciphertexts that have been encrypted with time lock encryption. So it could be annoying.
Another big issue is that BLS and the IBE system we're using rely on the discrete logarithm assumption, which is known to be vulnerable to quantum computers. So if you want to encrypt something that's not meant to be decrypted for, I don't know, 30 or 50 years, it's maybe not a good idea. Don't use it to encrypt your confessions or whatever: they could be decrypted earlier, maybe, if a quantum computer is ever built that is strong enough to break the scheme we're using.

Also, the fact that we're relying on a threshold system means it's fairly solid. You have good liveness properties; you can expect the network to be up for a long time. But who knows? Maybe in 10 or 20 years, all of the League of Entropy members will be, you know, gone. So it's possible your ciphertext could never be decrypted if you're encrypting something for, say, 20 years out.

And also, what about governance? The problem with a network built by a lot of people is that the League of Entropy members could some day decide to stop the network. Then what happens to the ciphertexts? There are two options. The League could say: we're going to release all the key material, so anybody can decrypt everything now. Which is maybe not amazing. Or the League members could say: we'll just destroy the key material. Which means none of the ciphertexts could ever be decrypted, unless a quantum computer is built. Which could also be annoying. These are really governance questions. So I guess the main solution is to have two networks: people could choose either the network that will release all its keys if it ever goes down, or the network that will never release any keys if it goes down. Yeah, that would work nicely.

Finally, this work is really a team effort.
Credit goes to the drand team, including Nicolas Gailly, who had the initial idea and is also the creator of drand; Patrick McClurg, who was behind all the JavaScript magic; and Julia Armbrust, who was behind the web demo design. And I also want to thank a few people who gave very good comments and helped us with the project: Justin Drake, Jason Donenfeld, and Ardan Labs, for helping us along the way.

With that said, if you find time lock encryption a pretty cool thing and you'd like to help secure the network: the League of Entropy is looking for new members in other geographies, especially in Asia. So if you're used to running a high-availability service and you want to try joining the League, please ping me. That would be nice. Also, the drand team within Protocol Labs is hiring, so you can also ping me if you're interested in joining. We're looking for developers, backend developers, and also security professionals: application security and cloud security.

And thank you. If you want to see the code, it's on GitHub; we released it this morning, so it's there. And stay tuned if you want all the details about how it actually works under the hood: we're going to publish a preprint on ePrint in the coming two months, and I'll probably also be releasing a blog post next month explaining how the whole thing works, at the same time as we launch on mainnet. Because for now it's running on testnet, which has only six nodes and a lower threshold than mainnet's 13, so the security is not as high on testnet as it will be on mainnet. But you can already use it today: try it out on testnet, it works.

And with that, I think I have a few minutes for questions. Yes?

[Audience question about compromising the network.] So, the network uses threshold cryptography and distributed key generation, which means the actual key used to sign the beacons, and therefore to decrypt ciphertexts, is never in memory on any single computer.
But if you're able to compromise a threshold amount of nodes, you can reconstruct the actual secret key of the network, and you'd be able to decrypt all future ciphertexts. That's one of the problems with time lock encryption: you can't recall a ciphertext once you've released it, and you cannot really do key rotation. So it's a bit difficult. We do key rotation for each node, though. So if you compromise one node today, another one next week, another in three weeks and so on, at some point we will do a key refresh, which changes the shares of every member of the League. And if you hadn't compromised enough nodes by then, what you've collected becomes useless. But yeah, the key itself, the actual secret key of the network, is never properly rotated, because it's a threshold network, and we only do refreshing of each node's shares. Thanks. Yes?

[Audience question about quantum resistance.] So the question is about the quantum resistance of the whole thing. The problem with quantum resistance is that you need a scheme that doesn't rely on an assumption we know is broken by quantum computers. And BLS relies on exactly such an assumption. So you would need another signature scheme to sign the beacons, one not relying on anything already broken by quantum computers, and currently there is no proper threshold signature scheme that is quantum resistant. So no luck for now. But it's something we're looking into: maybe at some point in the future, if such a scheme appears, we could create a new network running on a quantum-resistant scheme. Yeah.

I think we're out of time. If you want to talk to me, I'll be in the corridor. You can also reach out on Twitter. Thanks.