So today we're going to talk about, basically, sequential time and a trustworthy notion of it. I'm Brian. Full disclosure, I just left Protocol Labs, so I'm not representing Protocol Labs, and this broke again. All right, I don't know what's up. So: I'm not at Protocol Labs; don't ask me Protocol Labs questions. This is Simon Peffers. He's at Supranational. They're responsible for basically all the goodness, the actual implementation of what we're going to talk about. (Hey guys, I know it's pretty loud. Do you mind talking outside? Thanks.) All right. Cool. So this is Simon, I'm Brian. Let's get to it. That talk title is pretty vague: what is trustworthy sequential time, why do we care, and should we even make it real? I'm a little distracted because there are cords on the floor, so if I trip on one, I apologize. Anyway, the goal, fundamentally, is some sort of verifiable, trustless time. I want to know that when I talk to, say, Derek, and I ask, "Hey, what time do you have?" and he says 4 p.m., and I say, "Bullshit, I don't believe you, prove to me that it's 4 p.m.," he can go and give me some proof. That would be cool, but not particularly interesting. If I could prevent him from doing something until 4 p.m., and then he could prove to me, "It's 4 p.m., and now I can do this thing," that would be much more useful. We'll get into what that means and some applications. So why do we want this sort of thing? Well, synchronization, obviously, would be super awesome: if we could all agree on some definition of time, if we all had synchronized clocks, we could do interesting things at 4 p.m. or whatever. And it would be really cool if we could query some service, I call it the oracle of time, like the Oracle of Delphi.
Now, that would be really cool: you can't predict what's coming out of it, you just know that at some point you get some information out, and you can verify it came out at that time. (And, sorry guys, I don't know how to make this mic better. Is it working for everybody? Okay. Well, it's more on than off.) Okay. So, presume you could do this, that there's some magical box. How do we actually do it? Well, there's a bit of math and there's a bit of hardware, and it turns out you can't really solve this problem without both. So, moving on, what's the agenda for this talk? Basically me talking to you about mathy things, and then, after I'm done regaling you with math, Simon covers actually building real stuff, which I think is the juicy part of the talk. Right, so I mentioned what this trustworthy notion of time is. Cool story, bro; why do I care? Well, it turns out there are some really cool things you could do if you had this magic-box device. One of them is self-revealing commitments. Suppose I say: I pinky swear to reveal some secret to you in four days. Maybe that's a wallet address or a wallet password, and I pinky swear to give you the money stored in that wallet. And you say, "No, that's bullshit. I don't believe you're actually going to give this to me."
And so what you could do, if you had this magical device that gives you a verifiable time component, maybe with some randomness corresponding to an unlocking key, is say: here is a problem; I know it will probably take four days to solve; and when you solve it, you get the key needed to unlock the thing. So: self-revealing commitments. Random beacons are really important along the same lines. I want to know, every 30 seconds or so, the value of this random beacon, but I don't want people to be able to fast-forward, because if they can fast-forward, they learn the beacon's value early and can do something nefarious. That turns out to be super important for Ethereum, and super important for Filecoin. You could also do verifiable lotteries, also super important to Filecoin. And there's something we call secret single-leader election (there's a hyphen in there somewhere; it's contentious). In most consensus mechanisms you have a leader election, which is effectively a lottery. It would be nice if you could put a delay on when leaders are elected, and know, one, that it's verifiably random, and two, that it can only happen every so often. You don't want people to know whether they will or won't be the leader until within some epsilon of time. That would be really cool. So what does this apply to, for blockchains? Protocol Labs cares about this, and Ethereum has funded a lot of the research along with Protocol Labs. For Protocol Labs it's proof-of-spacetime: we need this bounded-delay function and there's no way around it. We also use it in Expected Consensus; if you're not familiar with that, look it up, it's pretty cool.
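The random-beacon pattern can be sketched in a few lines of Python. The plain loop of modular squarings is my own toy stand-in for the real delay primitive (which would also emit a proof), and the modulus and delay parameter are illustrative values, not anything from the talk:

```python
import hashlib

# Sketch of a random beacon: each round's randomness is a hash of a slow,
# sequential function of the previous round's output, so nobody can
# fast-forward to learn future values early.

N = 1019 * 1021   # illustrative modulus; real systems use ~2048-bit groups
T = 100_000       # delay parameter: sequential squarings per round

def delay(seed: bytes) -> int:
    y = int.from_bytes(hashlib.sha256(seed).digest(), "big") % N
    for _ in range(T):          # each squaring depends on the previous one
        y = (y * y) % N
    return y

def next_beacon(prev: bytes) -> bytes:
    # The round's randomness is a hash of the delayed output.
    return hashlib.sha256(str(delay(prev)).encode()).digest()

b = b"genesis"
for round_no in range(3):       # three beacon rounds, each paying the delay
    b = next_beacon(b)
    print(round_no, b.hex()[:16])
```

Anyone with the chain of beacon values can re-run the delay function and check them; the point is that producing the next value costs everyone the same sequential time.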
So, verifiable bounded delay: really cool stuff. If you go back to the classic theory of consensus mechanisms, you'll find that most BFT systems assume bounded delay. So if you can enforce bounded delay, you get a lot of security guarantees without making further assumptions. Now, why do we actually care? Is there an example where we'd use this in real life? It turns out, yeah. This is something that happened in, I think, Serbia a few years ago. It's a televised lottery: the balls come out, they pick one up and read the number, except a winning number appeared on screen before the corresponding ball had actually been drawn. So whoever ran this lottery apparently knew the answer before they did the random draw. Something unusual happened, and that's really terrible. (I stole this slide from my colleague Dan Boneh; I thought it was pretty cool.) We could prevent that with these magical box functions. So, I've stated some heuristics, some nice-to-haves, but maybe we can formalize this a bit more and ask: what do we actually want to guarantee? What does security mean here? What does this verifiable time thing absolutely need to do, and can we formalize the process? That work was done, actually about a year, year and a half ago, by some of my colleagues, Ben Fisch, Dan Boneh, and others. They formalized a lot of this as verifiable delay functions and wrote a paper on it. It's pretty awesome; I recommend reading it.
Anyway, there's always going to be a setup component. This is a mathy thing, really helpful for security models: you need to set up your function, set up your system. You take in some public parameters, whatever they are (public keys, whatever floats your boat, whatever the function needs), and some security parameters. Security parameters are often just labeled lambda; they're kind of fuzzy. Whenever you see lambda in the literature, that's usually what it means, unless the computer scientists mean something different. Then you need an evaluation function: it needs to do something, and it needs to output a proof. That's basically all this means: take in public parameters, take in a seed of some kind, do something, and provide a proof that you did it, and did it correctly. The last bit is verify: given the proof and the thing you claim you did, does it check out, true or false? That might seem unimportant, but it turns out to be super duper important for any kind of proving system. So what do we actually want from this magical function? We need uniqueness. If everybody has the same magic box, and after 30 seconds we get this verifiable delay output (time, or a seed, or some random bits), we should all have the same random bits. I can't have somebody producing a different set of bits that also verifies, like, "Oh, my magic box gives a different value." If more than one value can verify, maybe I can cheat. That turns out to be a huge security problem, so uniqueness is required. The other thing is called epsilon-sequentiality.
That's a formal term, but it basically means there's no way to do this in parallel. If you had a million of these boxes running, it wouldn't give you an advantage; you'd just have spent a lot of money buying a million boxes. One is plenty. So the next question: okay, we have these mathy theoretical things, what does that mean in real life? Can such a magical box exist? It turns out: kind of. Here are the assumptions you need to make. Your ability to build this magic-boxy thing is usually based either on something nobody in the world can touch (something in space, say) or on hardware assumptions about your ability to compute some function. The hardware route is actually the way we go; we'll talk about it in a slide or so. I'm not going to cover this A_max thing yet; it's not important until we get to the hardware stuff. Now, you might think surely somebody has done this in the past. They have. There are centralized approaches, like signed-time protocols, and they work; they're great. For blockchains that's not great, though, because you usually have one entity that controls all the time, which is really not great for us, so we tend to avoid that. Most centralized things don't quite meet the, call it, product requirements; they don't really meet what we're looking for. There are distributed approaches that work really well: there are multi-party computation techniques you can use, but they're really expensive, unbelievably computationally expensive, and where they're not computationally expensive they're communication expensive, so the coordination is really nasty. We try to avoid those too. There are also schemes that use round-robin style computation; that's more or less a subset of multi-party computation, and it's expensive.
This cosmic-events thing is kind of interesting. Whenever these functions come up, people say: just look into space and use some event that happened there. And that's fantastic, except if something happens in space, not everybody sees it at the same time, so somebody gets an advantage. That kind of sucks, so we don't do that. So what do we actually do? Verifiable delay functions. That's the formal term; you can look it up and find all this material. What are VDFs? VDFs are this magic box; this is how we formalize the magic box we've been talking about. There was a paper with Dan and Ben in 2018 where they formalized all of this, and it's basically exactly what I told you: there's a sequential stepping thing; it computes something we believe takes a certain amount of time; and the output, the proof component, is really fast to verify. That's really, really important. I wrote PPT, probabilistic polynomial time; it's a formalism, it matters, but you can read it as "it's fast." The VDF notation is the same as what I showed you before, probably not important here. One thing that is really important for VDFs is this epsilon-sequentiality: you can't parallelize it. Anyway, how do we actually build this thing? That's probably what most people are interested in: cool, the magic box exists, it does cool stuff, how do we build it? It turns out Ron Rivest did something similar some time ago, without the efficiency component. He said: I have an RSA number. What if I just raise things to a gigantic power, x to some really gigantic exponent? How long would that take? It turns out it's really expensive.
And you can do some clever things to bound how expensive it is. This is the approach where you take x to the 2^T. The 2^T comes from a squaring trick: you square, and square, and square, so T is the number of squarings you're going to make and 2^T is the exponent. He found this is really expensive to compute, and thought: it would be really cool if I hid something in x, and then, far in the future, somebody finished this computation and revealed what x was. It's verifiable in the sense that at some point you'll finish the computation, figure out what x is, and tell me, and I'll be really happy, and we'll have a big ceremony, and Simon will get invited and get awards and stuff. (That did happen, by the way.) The problem is that T has to be really huge for this to be useful, and we wouldn't want people to spend 35 years computing a thing just to check it. So we needed something much more efficient. It turns out we have it. There are these two mathematicians, Benjamin Wesolowski, who I think is French, and Krzysztof Pietrzak, who I think is Polish, at IST Austria (not important, but they're really fantastically smart folks), and they each found a sort of cheat. The idea: we'll just do what Ron did, where g is just a group element, and compute this g^(2^T) thing, same as everybody else; we know that's hard. And we'll figure out how to build the proof while we're doing all this 2^T business.
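Rivest's puzzle is just this computation: y = x^(2^T) mod N, done as T squarings where each one depends on the last. A minimal sketch, with a toy modulus of my own choosing (LCS35 used a 2048-bit N and a T of roughly 79 trillion):

```python
# Rivest-style time-lock: y = x^(2^T) mod N via T sequential squarings.

def timelock(x: int, T: int, N: int) -> int:
    y = x % N
    for _ in range(T):        # each squaring needs the previous result
        y = (y * y) % N
    return y

N = 1019 * 1021               # toy RSA-style modulus (real: ~2048 bits)
print(timelock(2, 20, N))     # 2^(2^20) mod N
```

Without knowing the factors of N, nobody knows a shortcut: you just have to grind through the T squarings one after another, which is what makes the elapsed time trustworthy.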
So maybe, while we're computing this g^(2^T) thing, we can do something on the side so that when we're done, for a little extra work, we can prove to somebody very efficiently that yes, we actually did the work. It turns out you can do that. Here's effectively the algorithm. You compute x^(2^T). The notation is a little wonky, but basically it says you have a hash function, and not a regular hash function: it takes some input and maps it to an element inside the group. For RSA that means: RSA-2048, whatever your modulus is, hash to something less than that modulus, inside your group. That's all the fancy notation is saying. SHA-256 would work here (asterisk: there are some issues, but you need a hash function into the group; not important). So you take that, then you just do the 2^T squarings. You get an answer; it takes forever. But on the side, what you're actually doing is breaking 2^T down into this q times L plus r thing. It looks really weird, right? L comes from the person who is trying to get you to do the work. They say: hey, Simon, I want you to compute this g^(2^T) thing, and by the way, here's an L. L is a random prime from a set of primes whose size you get to choose. Once the computation is done, you work out this 2^T = qL + r business, and if you do some algebraic manipulation, the bottom equation turns out to be correct. And this pi, which is this g^q thing, is cheap enough to produce alongside the main computation. As the verifier (I've said "Simon, do work," and he gives me something back), I don't want to do a lot of work myself, so I want L and r to be small, and it turns out they are, small enough to handle easily.
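That prove-and-verify dance can be sketched as follows, with small toy parameters of my own (in practice N is about 2048 bits, T is enormous, L is a random prime chosen only after the output is committed, and pi is accumulated during the main computation rather than via one big exponentiation):

```python
# Sketch of the Wesolowski-style proof: 2^T = q*L + r, proof pi = g^q mod N.

def prove(g: int, T: int, L: int, N: int):
    # The prover splits the huge exponent by the challenge prime L.
    q, r = divmod(1 << T, L)
    return pow(g, q, N), r

def verify(g: int, y: int, pi: int, T: int, L: int, N: int) -> bool:
    # The verifier recomputes r = 2^T mod L cheaply (never touching 2^T
    # itself) and checks pi^L * g^r == y, i.e. g^(q*L + r) == g^(2^T).
    r = pow(2, T, L)
    return (pow(pi, L, N) * pow(g, r, N)) % N == y

N = 1019 * 1021           # toy RSA-style modulus
g, T, L = 5, 30, 101      # base, delay exponent, challenge prime
y = pow(g, 1 << T, N)     # the slow sequential part, done once
pi, _ = prove(g, T, L, N)
print(verify(g, y, pi, T, L, N))   # True
```

The check works because pi^L * g^r = g^(qL + r) = g^(2^T) = y, and the verifier's side is just two small modular exponentiations.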
And you can verify this super duper fast, so you get this verification property; it's super awesome. The next question you might ask: cool, that's nice, I guess; are there other ways of doing this? There are, but this one is so simple it's awesome, and you'll see why when Simon goes into it a bit more. In the research so far (and this may have changed in the last month or so) there are two main avenues: RSA groups, basically what I just showed you, and something called class groups. Class groups are pretty complicated and I'm not going to talk about them at all; the moral of the story is you find a prime with the right properties, you build a quadratic number field, and things happen. The advantage of a class group is that there's no trusted setup. With the RSA stuff you may have noticed: hey, there's a modulus, and if I happen to know the primes, I can cheat; this g^(2^T) thing gets a lot easier to compute. And you'd be right. OpenSSL, whenever you actually encrypt things with RSA, uses this Chinese remainder theorem trick: it knows the primes, so raising something to a gigantic power is much easier, because it can reduce the size of those multiplications. So you need some way of doing a multi-party computation where everybody gets the same modulus and nobody knows the primes. That's really hard. There's some work showing we can do it; I think it's down to ten minutes or something for around a thousand participants, so it's probably reasonable. With class groups you don't need any of that, which is really attractive, and some groups, like Chia, I think, are doing class-group-based VDFs, which is pretty cool. The issue with class groups is that we're not really sure they're sequential.
We think they are, but they're not as well studied; RSA groups are really well studied and pretty solid. So, moving on, let's revisit the applications. Random beacons would be really cool. When I said "trustworthy time," the thinking is sort of backwards: I have a notion of trustworthy time because I know something takes a certain amount of time to do, not because I get "four o'clock" and everybody has the same four o'clock. That's a side effect too, but the more useful thing is: I want you to do something, I want it to take time, I want a bounded delay you can verify, resting on hardware assumptions we all agree are good, and then something good happens. With random beacons, that's exactly what happens. We do this VDF thing (we have some hardware; you could do it in software, it just won't help you go faster), and at the very end you get a random number that took so many minutes to produce. You can verify, one, that it took so many minutes, because of the hardware assumptions, and two, that it was computed correctly. So we can all have the same randomness, the same values. If we were doing something off-chain and want to bring it on-chain, we can now do that and show there's no funny business, without needing STARKs or something like that, which don't really solve this problem anyway. For Filecoin, it's proof-of-spacetime; this is gigantic for us. Proofs of spacetime mean I need to prove that I have your data on my machine, and keep proving, over time, that I still have it. It turns out that without bounded delay you can do some pretty nasty cheats, and with bounded delay the problem becomes super awesomely easy.
Well, asterisk: not easy, but much easier to quantify and secure, that sort of thing. So: really important for Filecoin, and really important for Ethereum, because they're using random beacons. I think the next protocol, Casper or one of the protocols like it, relies on random beacons, and if we can get VDFs to work well, that gets so much easier. So, moving on: what's next? VDFs are new; this really started around May or June of 2018, and we've gone from "this is a cool theoretical thing" to having implementations, and we're well on our way to an ASIC design, which is pretty cool. There's a lot more research, a lot more theory; I glossed over a lot for the sake of time. One question we get asked quite a bit (that's why it's on the slide): are they quantum-resistant? No. Class groups you can kind of make quantum-resistant, but not really; you just change some parameters. There are probably quantum-resistant VDFs, but right now that's research and it's a ways away, and I think the quantum threat itself is a ways away too. So, moving ahead: can you make this magic box? That's what it comes down to. Cool, it exists in theory, I believe you; now can we actually make the box? This is the rubber-meets-the-road bit, and it's fundamentally what Simon has done: let's make this real. Let's really have hardware, let's solve hard problems. Can I show that this is advantageous versus a CPU? Do you need hardware? Can you make these hardware assumptions and actually get away with it, without people screwing you over? It turns out: yes, you can.
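As a concrete aside before the hardware half: the trusted-setup caveat from the RSA discussion (whoever knows the primes can cheat) is easy to demonstrate. This sketch uses exponent reduction mod phi(N), a close relative of the CRT trick mentioned earlier, with toy primes of my own choosing:

```python
# The trusted-setup cheat in miniature: knowing the factorization of N gives
# phi(N), which collapses the exponent 2^T mod phi(N) and replaces T
# sequential squarings with one small exponentiation.

p, q = 1019, 1021
N, phi = p * q, (p - 1) * (q - 1)
T = 100_000

honest = pow(3, pow(2, T), N)       # stands in for T dependent squarings
cheat = pow(3, pow(2, T, phi), N)   # exponent collapsed via phi(N)

print(honest == cheat)              # True
```

This is exactly why the RSA flavor needs a setup in which nobody learns the primes, and why class groups, which have no such trapdoor, are attractive.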
And this is where Simon's part of the talk picks up. (Is this working? OK.) So, yeah, I'll talk about some of the work we've done to date. What my company focuses on, primarily, is taking various computationally expensive algorithms and speeding them up, through software optimizations, FPGAs, or ASICs. As Brian said, because the VDF is only secure if you know a bound on how fast anyone can compute it, it implies right away that you're going to need hardware. It basically means you need ASICs, because a CPU is going to be much slower than an FPGA, which will be much slower than an ASIC. So we started in February looking at how we would actually build this in hardware. First we looked at some of the software implementations. GMP is a pretty common math library and it's pretty well optimized: on a high-end CPU today, one modular squaring of a 2K RSA number takes about 1200 nanoseconds, which is pretty good. That's actually a lot better than the 35-year timeline to compute the LCS35 puzzle that Ron Rivest had estimated 20 years ago; with a CPU today you could do it in about three and a half years instead of 35, and I'll talk a bit about how that happens. Then we looked at what the hardware component needs to be: basically, a low-latency modular squaring unit. You might think hardware multipliers are well studied; they're in every CPU in the world, and that's true, but they aim for a balance of throughput, power, and latency, because they'll be used for general-purpose computation. What we need here is a purely low-latency multiplier.
That's the only thing we care about, really; as long as we can put it on a chip and the chip doesn't melt or catch fire, nothing else matters. So we started looking at different algorithms for that, working with my colleague Erdinç Öztürk at Sabancı University in Turkey. We came up with a super-low-latency implementation and put it on an FPGA. It took about two months to crank through that, and it runs at about 67 nanoseconds per squaring, so compared with the CPU it's much faster, about 20 times. Around February or March we learned about the LCS35 puzzle and decided: hey, this is something we could actually go tackle. It was supposed to take 35 years, and it happens to use the same function we're working on, so why not use it as a test point for our hardware? You kind of want to break it early and drive some excitement around VDFs. And in May of this year we actually solved that puzzle: it took two months to compute on the FPGA instead of 35 years. It was great. We went to MIT, to CSAIL, and there was a ceremony with Ron Rivest, the R in RSA, on the far right there. We had a good time, and this is the whole crew. One of the neat things about this project is that it involved multiple blockchains collaborating: Filecoin, Ethereum, now the Interchain Foundation as well, and MIT. So that was fun. So how do we get to just two months? This chart briefly walks through what happened. When Ron did his calculation, he estimated that about 79 trillion squarings would take 35 years. That's the T, the large number Brian talked about. And he assumed, basically, Moore's-law scaling with frequency going up. But as you've noticed, in the past 10 years frequency hasn't gone up.
We don't have any 20-gigahertz processors. So you'd think things slowed down, but a lot happens in specific areas of computation that speeds them up. For large-integer multiplication, the move from 32 bits to 64 bits was big, because we build these multiplies out of a bunch of smaller multiplies and compose them, so making the base multiplier twice as big makes the whole thing roughly twice as fast. That was a big improvement, maybe 10 years ago. Then, about five years ago, some new instructions were introduced into CPUs to make carry chains more efficient for exactly this kind of operation: ADCX, ADOX, and MULX, in both Intel and AMD processors. That helped quite a bit. Then you get algorithmic improvements; you can do a number of things with algorithms, like Montgomery representations, and you can optimize the software. Every time you get new instructions, or new processors that can issue more operations at once, you're in there tuning the superscalar pipeline to make it as fast as possible, and that speeds things up. That's how the CPU goes from 35 years to three and a half, even though Moore's-law scaling didn't follow the anticipated trajectory. On the FPGA we went further: instead of 64-bit multipliers, we have two 512-bit multipliers, which makes a huge difference, and we use some other low-latency adder and accumulation techniques to really reduce the cycle time. That's how we got to two months; the FPGA is dedicated to solving this one problem fast. So where are we going in the future? Ultimately we need to get to an ASIC design. If we take the FPGA design and just synthesize it toward an ASIC, how fast is that, what does it buy you? That's about a five-nanosecond squaring, so you've gone from 1200 nanoseconds to five in that step.
If we really optimize the design for an ASIC, we think we can get down to about one nanosecond in the end. On the FPGA you're limited to the resources on the chip, the DSPs and the routing resources and whatnot. In an ASIC you can build exactly what you need; you're unlimited as long as it fits on the chip. That's where you really extract all the parallelism of this one operation and get down to one nanosecond. The goal is ultimately to build this VDF chip, probably 2021, maybe late 2020, something like that, and have silicon available to do this function for Ethereum and Filecoin and a bunch of other blockchains that are interested. The other thing I want to mention before we go to the demos is the competition. We're running one now to develop new low-latency squaring algorithms. We have one that's pretty fast; how do we know it's the fastest? We don't. So there's a prize out there, starting with $100,000, for an FPGA competition to build the lowest-latency unit, and ultimately the idea is to run an ASIC competition too, with a $1 million prize. Hopefully that incentivizes everyone interested in low-latency multipliers to put their designs out there. The goal is making sure we understand what that A_max is, so that somebody can't later build hardware that's 10 or 100 times faster and compromise the security of the networks. So, I'll do a quick demo here, if everything works. We have this FPGA design running on AWS F1, which is their FPGA-as-a-service platform. I'll start on the right, which is the software implementation in GMP, doing 250 million squarings; it takes a couple of minutes. That's going to start, and it prints out maybe every 10 million iterations what the number is, so you'll see it progress.
Both of these are going to do the same thing. On the left I have the FPGA on AWS F1 doing the same problem. First it loads the FPGA. It has to send the problem down to the FPGA, so there's a little overhead on every iteration, sending the problem down and getting the result back, but you can see they're doing the same thing; each line is equivalent. And that's the roughly 50 nanoseconds per squaring. It's 50 nanoseconds because this is a 1K RSA number, not the 2K we used for the time-lock puzzle, so it goes pretty fast. And I can show you here, I had one queued up somewhere... I guess I didn't queue it up. So we can do a billion. This runs a little better on the FPGA: there's overhead every time you send a problem to the FPGA, with the communication back and forth, which eats into the cost, so with a billion it's amortized better. You see 50.1 above, and sometimes you might see a number like 50.8. Our algorithm is actually a variable-depth algorithm: mostly it takes eight cycles to compute a squaring, sometimes nine, sometimes ten, depending on the carries that happen in that round. Carries, it turns out, are a big latency cost in multipliers, because they have to ripple through every bit, so we optimize by watching where the carries are and short-cutting where we can. So the one on the left should be running... yeah, the one billion finished, and you can see the CPU is still running. The key, right, is this A_max: we want it to be a predictable amount. I think in Ethereum the security parameter for A_max is adjustable in the protocol, so it can be changed. The main thing is that nobody can build something 100,000 times faster.
And if you have an ASIC that's pretty highly optimized, you want to make sure somebody can't spend, say, 100 times more money and get something 100 times faster; that's pretty hard to do. But you do need to be at the leading edge of silicon technology to ensure it. So there, the CPU one finished. It took quite a lot longer: two and a half minutes versus 15 seconds. And that's the story. Once we have the ASIC running at about a nanosecond, we'll be able to solve LCS35, the 35-year time-lock puzzle, in about one day. That'll be a goal for next year. So, what are the applications? This is a security conference; VDFs are great for blockchains, but there are a couple of security applications that could be interesting, and I think Brian mentioned one a little. One is unstoppable disclosure: if you have a zero-day that you want released in a month, or some amount of time, you can wrap it in a time-lock puzzle and put it out there. That ensures both that it will be released and that it can't be released sooner. Another is denial-of-service or spam prevention: you can require a VDF or time-lock to be solved, with a proof, prior to every email. That imposes a little work on the sender that the receiver can actually verify, so it's a way to prevent some of these things. I think that's it, except for the competition. This is our little flyer: if you go to vdfalliance.org, you can find out more about the VDF Alliance. It's a great consortium of blockchains and also commercial partners; we have AWS, Synopsys, and Xilinx involved, and it's growing. You can participate in the contest, and I hope you all will, or find friends who will, and win $100,000. Any questions? I think we may be out of time, but yeah.
There was, yeah. So, coincidentally, and it was quite amusing: while we were solving the puzzle, projected to finish in May, there was a Belgian programmer who had started two years earlier working on the same problem, sort of in secret. We both contacted MIT with our intended solutions within 24 hours of each other, completely at random; we had no knowledge of each other. He beat us by a couple of weeks, actually, and I think his persistence is remarkable, and the fact that he was able to do it; he even moved a couple of times in the process. Anyway, it was a great event. He was there too; he's in the picture, Bernard. It was nice. Other questions? All right. Thank you.