Thank you very much for the introduction. So this talk is on verifiable delay functions. What is a verifiable delay function, or VDF for short? Well, first, it is a function from some domain X to some range Y, meaning that for any input x in X, there is a unique output y in Y. And furthermore, evaluating this function incurs a delay, meaning that it can be evaluated in time t, but cannot be evaluated in time (1 - ε)t for some small ε, even on a parallel machine. In other words, it requires sequential work to evaluate, or takes a long time in terms of wall-clock time, not just in terms of overall time complexity. And finally, it is verifiable, meaning that anyone who computes the function can also, at the same time, output a short proof of correctness that anyone else can verify efficiently.

A bit more formally, we'll say that a VDF involves three procedures: a setup procedure, an eval procedure, and a verify procedure. Setup generates the public parameters from a security parameter λ and a delay parameter t, which determines the sequential work needed by the eval algorithm. Eval evaluates the underlying function and outputs the result, along with perhaps a proof, although some constructions may not require a proof. And verify checks that a given input, output, and proof are consistent. Importantly, the eval procedure should run in parallel time t with polylog(t) processors, and the verify procedure, in contrast, should have overall time complexity at most polylog(t).

In terms of the security properties, soundness captures uniqueness, or the fact that eval implements a function: for any given input x, there is a unique output y for which an adversary can produce a proof that will be accepted by the verify algorithm. Sigma-sequentiality then captures the delay property, where σ(t) is some function of t which is strictly less than t, say (1 - ε)t for some small constant ε. We'll say that if A is a PRAM algorithm that runs in time less than σ(t), then it cannot compute the correct output y of eval on a given input x with probability greater than negligible in λ. This is stated informally for now; in a few slides we'll introduce it a bit more formally through a game.

Now, VDFs are very closely related to crypto primitives that you may be familiar with. The first is time-lock puzzles, dating back to 1996, which also involve a puzzle that requires sequential time to compute. The main difference is that time-lock puzzles involve a trapdoor, or a secret-key setup per puzzle, and therefore they're not publicly verifiable in the same sense. More recently, proofs of sequential work solve the public verifiability problem: there are several recent constructions of proofs of sequential work which are publicly verifiable, meaning they do not require any trapdoor or secret-key setup per puzzle. But on the other hand, these constructions do not have unique outputs, so they are not functions.

To summarize, a VDF minus any one of these properties is either easy or follows from previously known constructions. If it's not verifiable, then chaining any one-way function would give you a sequential function. If no delay is required, and it's just required to be moderately hard to compute with efficient verification, then there are many, many examples: for instance, discrete log on a small domain. And if it's not a function, then proofs of sequential work suffice.
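Throughout the talk it may help to keep this three-procedure interface in mind. Here is a minimal sketch in Python; all the names and types are illustrative placeholders, not from any concrete construction.

```python
# Minimal sketch of the three-procedure VDF interface described above.
# All names here (Params, setup, eval_vdf, verify) are hypothetical;
# concrete constructions instantiate these very differently.

from typing import NamedTuple, Optional, Tuple

class Params(NamedTuple):
    security_param: int   # lambda: controls soundness
    delay_param: int      # t: sequential work required by eval

def setup(security_param: int, delay_param: int) -> Params:
    """Generate public parameters from lambda and the delay parameter t."""
    return Params(security_param, delay_param)

def eval_vdf(pp: Params, x: bytes) -> Tuple[bytes, Optional[bytes]]:
    """Evaluate the function: parallel time t on polylog(t) processors.
    Returns the unique output y and, for some constructions, a proof."""
    raise NotImplementedError  # supplied by a concrete construction

def verify(pp: Params, x: bytes, y: bytes, proof: Optional[bytes]) -> bool:
    """Check that (x, y, proof) are consistent, in total time polylog(t)."""
    raise NotImplementedError
```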
But let me mention one other classical puzzle from the literature that comes very close to a VDF, and this is the problem of modular square roots modulo a prime. The puzzle is based on the assumption that there is no algorithm that can compute x^t mod a prime faster than doing log t sequential multiplications via the repeated-squaring algorithm. The way you would use this to create a VDF-like puzzle is: the setup procedure picks the prime p, and in this case we would choose p ≡ 3 mod 4, so that the eval procedure can compute the square root of a given point x by raising x to the power (p+1)/4. That requires log p sequential squarings. But the verify procedure requires just one squaring to check the result.

So this comes very close to a VDF. The reason we would call it only a proto-VDF is that it doesn't quite have enough asymmetry. Let's see why. Say the time complexity of multiplication mod p is M(p), which is superlinear in log p. The eval procedure requires log p times M(p) work, while the verify procedure requires just M(p). So there is a log p gap between the verify and eval procedures, but the verify time is not polylogarithmic in the eval time. And increasing the delay by increasing p not only introduces more parallelism into the multiplication, it also blows up the size of the proof.

So let's now come back to the security properties and define more formally what sigma-sequentiality is, in terms of a security game. The sequentiality game generates the public parameters using the setup algorithm. These are given to a preprocessing adversary A0, which is allowed to preprocess the parameters and produce some advice L. This is then given to an online adversary A1, which receives a random challenge x sampled from the domain and must output its guess for the value y. We say the adversary wins the game if the online adversary outputs the correct y, as defined by the eval procedure, for the given random challenge x. And we'll say that a VDF is (p, σ)-sequential if no adversary (A0, A1), where A0 runs in time polynomial in λ and A1 has PRAM running time σ(t) on p(t) processors, can win the game with probability greater than negligible in λ.

So before describing some constructions of VDFs, I first want to highlight several applications. There are many, ranging from randomness beacons, multi-party randomness, and timestamping, to proofs of space and replication, and permissionless consensus protocols. But in this talk I'm going to focus on the application to randomness beacons and the related application of multi-party randomness.
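Before turning to applications, here is a toy Python sketch of the square-roots puzzle from a moment ago. The prime here is tiny and purely for demonstration; a real instantiation would use a cryptographically large prime.

```python
# Toy sketch of the modular square-roots proto-VDF over a small demo prime.

P = 1000003            # demo prime with P % 4 == 3
assert P % 4 == 3

def eval_slow(x: int) -> int:
    """Slow direction: x^((P+1)/4) mod P costs ~log P sequential
    squarings inside pow's square-and-multiply loop."""
    return pow(x % P, (P + 1) // 4, P)

def verify_fast(x: int, y: int) -> bool:
    """Fast direction: a single modular squaring. y^2 equals x when x is
    a quadratic residue (and -x otherwise, a detail a real scheme handles
    by restricting the domain)."""
    return pow(y, 2, P) in (x % P, (P - x) % P)

y = eval_slow(4)       # 4 is a quadratic residue, so y*y == 4 mod P
assert verify_fast(4, y)
```

Note the asymmetry is only a factor of log P here, which is exactly why this is a proto-VDF rather than a VDF.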
So what is a randomness beacon? It is a term that was coined in 1983 by Michael Rabin for an ideal service that regularly publishes random values which no other party can predict or manipulate. There are of course many uses for randomness beacons, most notably running lotteries without a trusted operator, or Byzantine agreement protocols. And what is the problem with randomness beacons the way they're usually run in the real world today? Well, they're done through public displays of randomness: you'll have some balls running around in a machine, and then the machine spits out some balls, which determine the winning tickets of the lottery.

The problem is that this is easily corruptible. In fact, you can watch a YouTube video online where you see that only three balls have come out of the machine, but magically the five winning numbers have already been reported. So clearly something fishy is going on. Another idea, perhaps better, is to use some publicly occurring entropy source. Stock prices have been suggested, under the assumption that they're unpredictable and that an adversary cannot fix stock prices, at least not over a long period of time. But we do know that stock prices are manipulable to some degree in the short term, due to high-frequency trading.

So let's see how this is problematic if you construct a randomness beacon naively based on stock prices. Say we take the closing prices of 100 stocks on the New York Stock Exchange, hash the prices, and extract from the hash the output of our beacon. The problem is that an adversary, a high-frequency trader, can execute, say, 20 last-minute trades just after the prices settle, a few minutes before closing, in order to influence the seed. The attacker can predict the outcome of each of these trades, and can therefore choose, among its possible actions, the one most favorable for the outcome. So it can bias the result. And how would a verifiable delay function help with this? The solution is simply to slow things down. After extracting the value from the hash of the prices, we would run it through, say, a one-hour-long VDF, which after an hour spits out the final value of the beacon. Now the attacker cannot tell which trades to execute just before the market closes in order to bias the result in a particular way, because it cannot predict what the output of the VDF will be on this value, presuming there is enough entropy in the input that it cannot manipulate.

The same idea can be used for resurrecting the simplest approach to multi-party randomness generation. What is this approach? Just have all the parties submit random values to a public bulletin board, and then determine the output of the common randomness generation as the hash of all these values. Now, clearly this does not work, because the last party to submit has complete control over the seed. Perhaps they cannot fix it to a particular value, given the security of the hash function, but they can certainly choose from polynomially many options and bias the result. So with a VDF, we would again slow things down: instead of taking the hash to be the common random output, we would apply a VDF to add a delay, and then hash again to determine the output of the beacon, or in this case the multi-party randomness generation. This way Zoe, who is the last to submit, can choose among many different values, but she can't figure out how they will actually affect the outcome, so she cannot bias the result.
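Here is a minimal sketch of that slow-things-down pattern: hash all contributions, run the result through a delay function, and hash again. The vdf_eval below is an insecure stand-in (just an iterated hash) so the code runs; a real deployment would use an actual VDF that also outputs a verifiable proof.

```python
# Sketch of the VDF-hardened multi-party randomness scheme described above.

import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def vdf_eval(seed: bytes, t: int) -> bytes:
    """Placeholder delay function: t sequential hashes. A real VDF would
    also return a short proof so anyone can verify the output cheaply."""
    y = seed
    for _ in range(t):
        y = h(y)
    return y

def beacon_output(contributions: list, t: int) -> bytes:
    seed = h(b"".join(contributions))   # the last submitter can grind this...
    delayed = vdf_eval(seed, t)         # ...but cannot predict this in time
    return h(delayed)                   # final beacon value

# Example: three parties; the last one (Zoe) cannot usefully bias the
# output because she cannot evaluate the delay function before the deadline.
print(beacon_output([b"alice", b"bob", b"zoe"], t=100_000).hex())
```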
Now let me talk about some constructions. I will spend the most time on the construction from our work, which at a very high level involves chaining a permutation that is slow in one direction but fast in the other, and then using a SNARK or a STARK to amplify this into a verifiable delay function. I will explain more details on that, but I will also briefly mention some follow-up work that came out very recently, which takes a different approach: essentially constructing a specialized SNARK, a compact proof of correctness, for exponentiation in a group of unknown order.

So let's recall the hash chain, which gives you a sequential function, but not one that naturally admits a way of proving correctness of the computation. We can, however, combine it generically with verifiable computation, which gives a way of producing a proof that a computation was done correctly. What is the problem with just using, say, a SNARK or a STARK for this? Well, the proof generation is much slower than the hash chain itself, certainly without massive parallelism to speed it up. And in terms of security, this means that an adversary, who is not required to output the proof but only to compute the output y, will be able to compute much faster than the honest eval algorithm, which also has to output the proof. So ideally we should have a construction that can derive the proof and the result at roughly the same time.

The next idea is to use incrementally verifiable computation (IVC), which is similar to verifiable computation but allows you to produce the proof as you're doing the computation and update it along the way. Simply compute several steps of the computation, output a proof of correctness of those, then compute some more steps and output an updated proof, which verifies both the previous proof and the next few steps, and so on and so forth, until you've derived the final result and a final proof that accumulates all the intermediary proofs. In theory, this would give you a sigma-sequential VDF with a very good σ(t), say (1 - ε)t for small ε. But we would really like to make this practical, and relying on generic IVC would not necessarily give you that.

The first idea is the natural one used in all SNARK applications: replace the hash function with something SNARK-friendly, something with low multiplicative complexity over a finite field. The second idea is to replace the hash function with a permutation ρ which is slow to compute in the forward direction, which is what incurs the delay, but much faster, or more specifically has low multiplicative complexity over a finite field, in the reverse direction. Then the SNARK or STARK proof can be generated for the reverse direction instead of the forward direction. So what have we gained by doing this? Well, the permutation ρ we're now using can be weaker than a VDF: a proto-VDF, something that still has some asymmetry, such as the modular square-roots puzzle I mentioned before. That puzzle is not quite a VDF; if it were, we could just use it on its own. But it does have asymmetry, and therefore it lets us optimize this generic IVC approach.

To be concrete, how would this compare to using SHA-256? Well, let me first tell you how the square-roots chain would work. The square-roots chain operates over F_{p^2}, and we do need to add a nonlinear permutation between each of the square roots, for reasons I won't go into; otherwise there is a shortcut. So we do square roots over F_{p^2} so that we can add a simple nonlinear permutation, such as a coordinate swap, that does not incur additional complexity in the arithmetic circuit. A SHA-256 chain would put 27,904 gates per hash invocation into the SNARK, whereas evaluating the reverse of the square-root chain involves just four gates per step.
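To illustrate the per-step asymmetry, here is a simplified Python sketch of the chain. Two caveats: it works over F_p rather than F_{p^2}, and it deliberately omits the nonlinear inter-step permutation just mentioned, so this exact chain has the algebraic shortcut; treat it purely as an illustration of slow-forward versus fast-reverse.

```python
# Simplified square-root chain: slow forward (t square roots), fast
# reverse (t plain squarings). NOT the full construction; see caveats above.

P = 1000003                      # demo prime with P % 4 == 3
HALF = (P - 1) // 2

def sqrt_qr(x: int) -> int:
    """Square root within the subgroup of quadratic residues: of the two
    roots y and P - y, return the one that is itself a residue
    (exactly one is, since P % 4 == 3)."""
    y = pow(x, (P + 1) // 4, P)
    return y if pow(y, HALF, P) == 1 else P - y

def forward_slow(x: int, t: int) -> int:
    """Slow direction: t sequential square roots, each ~log P squarings."""
    for _ in range(t):
        x = sqrt_qr(x)
    return x

def reverse_fast(y: int, t: int) -> int:
    """Fast direction: t plain squarings, one multiplication each (this is
    the direction the SNARK/STARK proof is generated over)."""
    for _ in range(t):
        y = y * y % P
    return y

x0 = pow(12345, 2, P)            # start from a quadratic residue
assert reverse_fast(forward_slow(x0, 10), 10) == x0
```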
Naturally, this motivates the question of whether we can come up with even better asymmetric permutations, beyond the classical square-roots, or equivalently cube-roots, puzzles. More generally, you can think of this as an example of injective polynomial inversion, where the slow direction is to find the preimage of a point and the fast direction is to evaluate the polynomial forward. Even more generally, we could consider injective rational maps on algebraic sets.

Let's focus on permutation polynomials, which are one example of this. A permutation polynomial is just a polynomial over a finite field which permutes the field. Because it's a permutation, inverting f is equivalent to finding a root, say of f(X) - c, and this can be done by computing a polynomial GCD, which we know how to do using the Euclidean algorithm. That involves D sequential steps, where D is the degree, and in each step there are D parallel arithmetic operations. I should mention that there is also an NC algorithm, of log-squared depth, for doing this, but it requires a large amount of parallelism: on the order of D^3.85 parallel processors. So if we were to use this for a proto-VDF, the eval procedure would require D parallelism, in order to prevent the adversary from gaining a very large speedup over the honest eval procedure, and at the same time we would need to set D large enough that D^2.85 parallel processors are infeasible for an adversary. So this doesn't really give you a theoretical VDF, but it does give you some weaker form, a proto-VDF. And note that I didn't say we actually have one of these that works.

The ideal, holy-grail permutation polynomial would be one with a tunably large degree, independent of the field size. Remember, the field size needs to be exponential, so the degree can't be the size of the field, otherwise there are brute-force attacks. It should be fast to evaluate, for example a sparse polynomial, which can be evaluated in O(log D) complexity (I'll show a small sketch of this in a moment). And there should be no faster way to invert it than computing the polynomial GCD, which is inherently sequential in the degree. If we had this, it would give a puzzle for which eval and verify have an exponential gap between them.

It turns out that permutation polynomials are an entire field of mathematics, and there are many examples of different kinds, from the very simple polynomial x^3, which clearly doesn't work for this, to more complex, sparse polynomials of very large degree, even degree independent of the field size. The problem is that we can rule out nearly all of these as unsuitable for VDFs, either because there is another, efficient way to invert them which doesn't go through the GCD, for example when the map is linear, or, in another example, because the degree is the size of the field, which is unsuitable since the field size needs to be exponentially large. But there is one last-standing class of permutation polynomials that we haven't been able to attack yet, so it's a very interesting open question whether these could indeed be suitable for a VDF application.

So, to summarize: when you take either an underlying permutation-polynomial chain or the square-roots chain and combine it with the SNARK or STARK IVC approach, you get a VDF whose verification consists of O(log t) SNARKs, each with constant verification complexity, so O(log t) complexity overall. The proof size is also O(log t).
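On that "fast to evaluate" point: a sparse polynomial of astronomically large degree D costs only a few modular exponentiations to evaluate forward, each O(log D) multiplications via square-and-multiply. The terms below are made up purely for illustration; this is not a permutation polynomial with the desired security properties.

```python
# Aside: forward evaluation of a sparse polynomial of huge degree is cheap,
# O(#terms * log D) multiplications, even though D itself is enormous.

P = 2**127 - 1                   # a Mersenne prime, used here as a demo field

SPARSE_TERMS = [(3, 2**64 + 1), (1, 2**32 + 5), (7, 1)]  # (coeff, exponent)

def eval_sparse(x: int) -> int:
    """Fast (forward) direction: a handful of modular exponentiations."""
    return sum(c * pow(x, e, P) for c, e in SPARSE_TERMS) % P

print(eval_sparse(42))           # instant, despite degree ~1.8e19
```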
The construction is secure under the assumption that the underlying chain is sequential, whether it's a square-root chain or an ideal permutation-polynomial chain, and also under the security of the SNARKs or STARKs. Does it involve trusted setup? None if you use STARKs. But even if we were to use SNARKs and have a trusted setup, note that the trusted setup really does not impact the sequentiality security: sequentiality is not broken even if the trusted setup is broken; the setup only serves to optimize the efficiency of verification. Moreover, if we were to use STARKs, then this would possibly be quantum resistant.

But it's not a simple construction, and that's why I will mention some newer VDFs from Pietrzak and Wesolowski that came out very recently, which essentially construct a specialized proof of correctness for exponentiation in a group of unknown order. The VDF eval algorithm is, on a given challenge, to hash the challenge into the group and then raise it to the power 2^t, which, if the group's order is unknown, requires t sequential squarings, we believe (there's a small sketch of this eval step below). And the cool thing is that they are able to produce a proof of correct exponentiation efficiently, and that proof can be verified efficiently as well.

So that's the end of my talk. Our paper is on ePrint, and we also have a survey of the various VDF constructions from 2018, also on ePrint. Thank you very much.
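For concreteness, here is a minimal sketch of that eval step: hash the challenge into an RSA group and square t times. The modulus below is a tiny stand-in built from two known primes, so it is not of unknown order; and the efficient correctness proofs of the two constructions are omitted entirely.

```python
# Sketch of the eval step of the group-of-unknown-order VDFs just mentioned:
# hash the challenge into the group, then compute x^(2^t) by t sequential
# squarings. Real deployments use a modulus whose factorization nobody knows.

import hashlib

N = 999983 * 1000003             # tiny demo modulus; NOT of unknown order

def vdf_eval(challenge: bytes, t: int) -> int:
    x = int.from_bytes(hashlib.sha256(challenge).digest(), "big") % N
    y = x
    for _ in range(t):           # believed to require t sequential steps
        y = y * y % N            # one modular squaring per step
    return y

print(vdf_eval(b"beacon round 7", t=10_000))
```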