I'll be talking about continuous verifiable delay functions. This is joint work with Cody Freitag, Ilan Komargodski, and Rafael Pass. I want to start by recalling the notion of a randomness beacon. This was introduced by Rabin in the 80s as an ideal service that publishes unpredictable random values at fixed intervals of time. This has many applications to generating randomness in decentralized settings. As such, we really want to realize this ideal functionality in the real world. For example, suppose the randomness beacon is being run by a service provider that we don't necessarily trust. We still need to be able to use it in our applications, so the central question that we're focusing on is how we can minimize trust in the beacon. Various constructions of randomness beacons have been proposed. For example, you can use stock market prices as high-entropy sources, but inherently this relies on an external source of data. You can also use commit-and-reveal paradigms, but this relies on interaction. Therefore, in this work, we focus on what we view as the most natural and weakest of the assumptions, which is using slow functions to get a randomness beacon.

The classical notion of a slow function was introduced by Rivest, Shamir, and Wagner in '96, and it is repeated squaring in a group of unknown order. This slow function f takes as input a group element x and a time bound t and computes x^(2^t) mod N, where N is an RSA modulus. The reason this is conjectured to be a slow function is its sequential nature. You can view it as a line where, starting with x, you repeatedly square t times until you reach x^(2^t), and the assumption is that this has no shortcuts: there's no way to speed up this computation other than going through every node in this chain. In particular, this is believed to be true even if the adversary uses parallelism. In other words, parallelism doesn't help you compute this; you still have to run for t steps. This is an example of what we call an iteratively sequential function. This is a function f which is composed by iterating some smaller function g. The properties of an iteratively sequential function are that it's easy to compute in t steps of g, but you can't compute it in time (1 - epsilon)·t, and this is true even with many parallel processors.

Let's go back to randomness beacons and see what happens when we use an iteratively sequential function. We can start with some initial seed x_0, and each value of the beacon is obtained by applying g to the previous value. This gives us some good properties. First of all, it's publicly computable. Even if this untrusted beacon service goes offline, we actually don't need it; we can just compute it ourselves, because by definition of g there's no private key or private state or anything like that. In addition, it's unpredictable, which is what we wanted from a randomness beacon, because iterations of g can't be sped up. If you could predict the value of the beacon sufficiently far in the future, that would essentially give you a way to speed up iterations of g. The downside of using this, especially in decentralized applications, is that someone who arrives late will never catch up. In particular, if someone arrives at time t and sees the value x_t, the only way for them to verify that it's the true value of the beacon is to run for t steps.
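To make the repeated-squaring slow function and the iterated beacon concrete, here is a minimal Python sketch. The modulus and seed below are toy values chosen purely for illustration, not parameters from the talk; a real deployment would use a large RSA modulus of unknown factorization.

```python
# Minimal sketch of the repeated-squaring slow function and a beacon built by
# iterating it. Toy parameters for illustration only.

def repeated_square(x: int, t: int, N: int) -> int:
    """Compute x^(2^t) mod N by t sequential squarings; conjecturally there is
    no shortcut without knowing the factorization of N, even with parallelism."""
    for _ in range(t):
        x = (x * x) % N
    return x

def beacon_values(x0: int, steps: int, N: int, step_size: int = 1):
    """Yield successive beacon states x_1, x_2, ..., each obtained by applying
    the squaring step g to the previous state."""
    x = x0
    for _ in range(steps):
        x = repeated_square(x, step_size, N)
        yield x

# Toy usage (insecure parameters, illustration only):
N = 3233   # 61 * 53, a toy RSA modulus
x0 = 5     # initial random seed
print(list(beacon_values(x0, steps=4, N=N)))
```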
But by that time, the beacon will already be at time 2t, so they'll always be behind. So really what we want is a beacon that's also efficiently verifiable at every step. In this work, we introduce continuous verifiable delay functions, which are essentially an iteratively sequential function where every step is publicly and efficiently verifiable. As a randomness beacon, this essentially solves the problem from the previous slide, because not only is it publicly computable and unpredictable, but also any state can be verified in time independent of t. I'll also add that this only relies on an initial random seed, so we don't need any external sources here.

More formally, a continuous VDF is the following. It has a sample algorithm, which samples the first state, state 0. It has an evaluation algorithm, which is deterministic and lets you transition from one state to the next. And it has a verification algorithm where, for any t, the t-th state can be verified, and this verification is extremely efficient: it only takes time polylog(t), for any t. The three properties that we require from a continuous VDF are completeness, which says that the t-th state, i.e., the t-th iteration of Eval, should successfully verify for any t; soundness, which says that the t-th state is the unique verifying state; and finally, iterative sequentiality, meaning nobody can speed up iterations of Eval. In particular, we require that nobody can compute the t-th state in fewer than (1 - epsilon)·t steps, where epsilon should be thought of as an arbitrarily small constant, and this holds even with poly(t) parallel processors. I'll also mention that continuous VDFs can rely on public parameters, just as in the case of VDFs.

The reason we call this a continuous VDF is essentially that it's a verifiable delay function where every intermediate state is verifiable. A verifiable delay function, in contrast, was introduced by Boneh et al. in 2018, and it is also a function that takes t sequential steps to compute. So it also requires a high sequential time, but it only provides verifiability at the end of the computation. In more detail, a standard VDF with difficulty t can be computed in t steps, has fast verification, and can't be sped up even using parallelism. I just want to emphasize again that the difference between this and a continuous VDF is that for standard VDFs, t is fixed before the start of the computation, whereas for continuous ones, these properties hold at every time step as the computation progresses. Known constructions of verifiable delay functions include the construction of Boneh et al., who introduced them and constructed them based on SNARGs, as well as the constructions of Pietrzak and Wesolowski, which are both based on repeated squaring and Fiat-Shamir-type assumptions. Looking ahead, our construction will be based on that of Pietrzak. It's interesting to note that all of these constructions do have an iteratively sequential function underlying them, but they are not themselves iteratively sequential functions, unlike continuous VDFs.

To highlight the difference between VDFs and continuous VDFs, I want to talk about the following potential application of VDFs. Suppose there's someone offering $5 million for a five-year VDF computation. Alice comes along and starts computing VDF.Eval. She's planning to do it for five years, but something comes up after three years.
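As a rough picture of the syntax just described, here is a schematic Python interface for the three algorithms. The names and signatures are illustrative placeholders, not the paper's formal definition.

```python
# Schematic interface for a continuous VDF: Sample, Eval, Verify.
# Types and method names are assumptions made for illustration.

from abc import ABC, abstractmethod
from typing import Any

class ContinuousVDF(ABC):
    @abstractmethod
    def sample(self, security_param: int) -> Any:
        """Sample the initial state, state_0 (possibly using public parameters)."""

    @abstractmethod
    def eval(self, state: Any) -> Any:
        """Deterministically transition from state_i to state_{i+1}."""

    @abstractmethod
    def verify(self, state_0: Any, t: int, state_t: Any) -> bool:
        """Check, in time polylog(t), that state_t is the t-th state reached
        from state_0; soundness says it is the unique state that verifies."""
```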
Maybe she runs out of money and can't continue the computation anymore. So she has some state, but she hasn't finished the computation yet. Ideally, what we would like is for her to be able to sell her state to Bob for $3 million for the three years that she did, for Bob to be able to verify the state, and then for Bob to continue the computation. The issue is that, since with a VDF her state is unverifiable until the end, Bob really can't continue the computation, so the three years of work that she did are basically for nothing. In contrast, if she had used a continuous VDF, Bob would be able to verify her state, complete the work, and give a succinct proof for the whole computation. I want to emphasize here that when I say a succinct proof for the whole computation, I don't mean that Bob continues from where she left off and then appends a proof of the rest to her proof. A continuous VDF really allows this sort of handing off of computation in a way where the proof size doesn't grow multiplicatively with the number of handoffs.

So in this paper, we construct continuous VDFs, and our construction is based on a Fiat-Shamir-type assumption and the repeated squaring assumption. The first is a computational assumption which says that repeated squaring in an RSA group is iteratively sequential, so iterations of it can't be sped up. This gives us the delay property that we need, but it also means that our protocol has a setup phase, because we're working in an RSA group. I should note, though, that we could similarly rely on class groups and then remove this trusted setup. For our second assumption, let me briefly discuss the Fiat-Shamir heuristic. This is a way to collapse rounds in public-coin interactive protocols. For example, consider a three-round protocol like the one shown here, where the prover sends the message a, the verifier responds with some randomness b, and the prover answers with the final message c. The Fiat-Shamir heuristic says that soundness is preserved when the verifier's message is replaced by a hash of the transcript so far. So in this case, b would be computed as a hash of a, and the resulting protocol is non-interactive. For constant-round proofs, that is, information-theoretically sound protocols, this is sound in the random oracle model and is in general believed to be true, and that's what we rely on. For constant-round arguments, that is, computationally sound protocols, counterexamples do exist, but for the case that we need, which is constant-round proofs, it's believed to hold.

We also show the following applications of continuous VDFs. First, public randomness beacons, the notion that I talked about before. Second, we show that if you view our continuous VDF as a plain VDF, then this gives VDFs from a constant-round variant of Fiat-Shamir, whereas previous constructions relied on Fiat-Shamir for logarithmically many rounds or for constant-round arguments. We also get outsourceable VDFs, which I also talked about before. And lastly, we get a really surprising connection to the hardness of finding Nash equilibria, which I'm going to elaborate on next. The celebrated result of Nash from 1951 is that every finite game has a Nash equilibrium. However, there's no known polynomial-time algorithm to find one in general. This motivated a line of work, in particular in cryptography, on basing the hardness of finding Nash equilibria on cryptographic hardness assumptions, and we continue this line of work in this paper.
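To illustrate the Fiat-Shamir heuristic mentioned above, here is a small Python sketch of collapsing a three-round public-coin protocol by deriving the challenge from a hash of the transcript. The helper names, the use of SHA-256, and the challenge length are assumptions made only for illustration.

```python
# Sketch of the Fiat-Shamir heuristic for a three-round public-coin protocol:
# the verifier's random challenge b is replaced by a hash of the transcript.

import hashlib

def fiat_shamir_challenge(*transcript_parts: bytes, out_bits: int = 128) -> int:
    """Derive the challenge deterministically from the transcript so far."""
    h = hashlib.sha256()
    for part in transcript_parts:
        h.update(part)
    return int.from_bytes(h.digest(), "big") % (1 << out_bits)

def non_interactive_proof(statement: bytes, first_message: bytes, respond):
    """Collapse (a, b, c) into (a, c): b is no longer sent by a verifier but
    computed as a hash of the statement and the first message a; `respond`
    is the prover's strategy mapping the challenge b to the final message c."""
    b = fiat_shamir_challenge(statement, first_message)
    c = respond(b)
    return (first_message, c)
```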
We show that a continuous VDF which supports super-polynomially many iterations implies an average-case hard Nash equilibrium instance. If you instantiate this theorem with our main theorem, this gives average-case hardness of Nash equilibrium from Fiat-Shamir for super-constant round proofs together with the repeated squaring assumption. Previously, this was known from strong assumptions like indistinguishability obfuscation and, more recently, from Fiat-Shamir for the n-round sumcheck protocol, which is somewhat incomparable to ours. Perhaps more surprisingly, we show that the hardness of finding Nash equilibria requires some sort of sequentiality. In particular, we show that a continuous VDF implies Nash equilibrium instances which can be solved in polynomial time but require a high sequential running time. This is the first evidence for this kind of hardness for Nash equilibrium, and in particular it was previously unknown under any assumption.

For the remainder of the talk, I'm going to tell you about our construction of continuous VDFs. I'm going to start with the VDF construction of Pietrzak from 2018. In his construction, the prover starts with some input x and some time bound t, computes x^(2^t), and then proves that it did the computation correctly. Underlying his VDF is an interactive protocol for repeated squaring. In this protocol, the prover wants to prove that when you take x and square it t times, you get y. The first thing he does is compute the midpoint of the computation, u = x^(2^(t/2)), and he sends this midpoint to the verifier. Now the prover has two statements, each of difficulty t/2. So the verifier sends a challenge, and the prover uses this challenge to combine his two statements into one, which also has difficulty t/2. You should think of this as using the challenge r to randomly combine x, u, and y into a new statement where, if the original statement was true, the new statement is true, but if the original statement was false, then the new statement will also be false with high probability. Using this two-for-one trick, we just recursively prove the new statement, which again has difficulty t/2. So we keep reducing the difficulty by half until we get down to the base case, which can be verified directly. When you apply Fiat-Shamir, this becomes a non-interactive protocol.

As a VDF, this looks like the following. Starting with x and a time bound t, we compute the midpoint u, then we compute the output y. Now we have the output, but we still need to prove it, so we prove the combined statement on x' and y'. This can be naturally viewed as a tree, where I'm going to denote compute nodes, that is, nodes that require this squaring computation, by purple nodes, and proof nodes by green nodes. The next step in the interactive protocol that we saw on the previous slide is to recurse. So we take this proof node and we recurse: two compute nodes and a proof node, two compute nodes and a proof node, until we get down to the base case where we can verify directly. The reason this isn't already a continuous VDF is that the intermediate state is unverifiable. Where Alice is standing now, if you look at the work that the prover has done so far, it has computed u and y, but there's no way to verify it until it finishes the proof. And recall that we want every single intermediate state to be verifiable.
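Here is a hedged Python sketch of one halving step of Pietrzak's protocol, with the challenge derived via Fiat-Shamir as described above. The hash choice, challenge length, and function names are illustrative assumptions, not the exact instantiation from the paper.

```python
# One halving step: given the claim "x squared t times equals y (mod N)", the
# prover sends the midpoint u = x^(2^(t/2)); the challenge r folds the two
# half-size claims into a single claim of difficulty t/2.

import hashlib

def pow2_squarings(x: int, t: int, N: int) -> int:
    for _ in range(t):
        x = (x * x) % N
    return x

def challenge(x: int, u: int, y: int, N: int) -> int:
    data = f"{x}|{u}|{y}|{N}".encode()
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % (1 << 128)

def halving_step(x: int, y: int, t: int, N: int):
    """Reduce the claim (x, y, t) to a claim (x', y', t/2), assuming t is even.
    If the original claim is true the new one is; if it is false, the new one
    is false with high probability over the challenge."""
    u = pow2_squarings(x, t // 2, N)      # midpoint x^(2^(t/2))
    r = challenge(x, u, y, N)
    x_new = (pow(x, r, N) * u) % N        # x' = x^r * u
    y_new = (pow(u, r, N) * y) % N        # y' = u^r * y
    return x_new, y_new, t // 2
```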
So what we do to fix this is, instead of proving only the root like this protocol does, we're going to prove every single node in the tree. This looks like the following: we complete the tree to the full ternary tree, and essentially we iterate over the leaves in such a way that we're computing and proving every single node. Let me illustrate this. Starting with x at the leftmost leaf, first we do one squaring: we square x to get x^2. Then we square that to get x^4. Then we get to a proof node, which is trivial because it's at the base case. But at this point, we have three nodes which actually constitute a Pietrzak proof for their parent node, so we can merge them up. Now, we're going to need this new node later on to merge it up to its parent, so I'm going to denote by yellow the nodes that are active. The intermediate state of the VDF is essentially going to consist of these active nodes, for which we've computed proofs but which we still need for proofs of later nodes. So we continue computing, proving, and merging up whenever we have proofs to merge up. And I want to note that at any time our state is completely verifiable: if you look at the yellow nodes right now, they're either at the base case, which can be verified trivially, or we've computed proofs for them. We continue this process, and eventually we compute the output at the root and also a proof of the root.

So what we've done here is we've definitely made it so that our state is verifiable at any point in time. The question is, is it still a delay function? Namely, can computation of our new, hopefully continuous, VDF be sped up? Recall that what we really want here is a tight gap between the honest running time and the adversarial running time; we don't want the adversary to be able to speed up computation of the continuous VDF. If we look at this tree, the adversary essentially only needs to do t squarings to compute the value at the root, but the honest party computes all of these leaves, which is much more. In fact, this is a polynomial gap, which is not good for us; we really want a tight gap here. The problem is that we introduced too many proof nodes into this tree, which only the honest party needs to compute but the adversary doesn't. To fix this, we increase the arity of the tree to K + 1 for some parameter K, and now we've essentially decreased the overall fraction of proof nodes. Again, if we compare what the adversary has to compute with what the honest party has to compute, and we let the height of the tree be H, then the adversary has to compute roughly K^H leaves. You can think of this as the adversary computing all the leaves in the K-ary tree, so the tree without any proof nodes, whereas the honest evaluator computes all the leaves in the (K + 1)-ary tree. If you work this out, it turns out that as long as K is super-logarithmic in the security parameter, we do get the really tight gap that we want. I should also add that the height of our tree can be constant as long as t is polynomial in K. In our construction, we actually set this arity parameter K to be the security parameter, and then we can handle any polynomial number of steps t while only relying on constant-round Fiat-Shamir. So that's our construction in a nutshell.
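As a quick numeric illustration of this gap analysis, the following Python snippet computes the honest-to-adversarial work ratio ((K + 1)/K)^H for a few values of K at a constant height H; the specific numbers are made up for illustration and are not from the talk.

```python
# With tree height H, the adversary does roughly K^H squarings while the
# honest evaluator touches all (K+1)^H leaves, so the overhead factor is
# ((K+1)/K)^H = (1 + 1/K)^H, which approaches 1 as K grows.

def overhead(K: int, H: int) -> float:
    return (1 + 1 / K) ** H

H = 4  # constant height, assuming t is polynomial in K
for K in (2, 16, 256, 4096):
    print(f"K = {K:5d}, H = {H}: honest/adversary work ratio ~ {overhead(K, H):.4f}")
```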
Some things I didn't touch on are, for example, how to generalize Pietrzak's two-for-one trick into a K-for-one trick, which we do in this paper, and some other more technical details; see the paper if you're interested. To sum up, in this work we constructed a function which is deterministic, iteratively sequential, and verifiable, with applications to crypto and to complexity theory. I'll conclude with some open questions. I think one main open question from this work is improving the efficiency of our construction. By adding nodes to the tree from Pietrzak's proof, we increased the overall state size and verification time compared to Pietrzak's original protocol, so it would be really great to improve this and make our construction more practical. Another open question would be extending this to quantum security or to memory hardness, especially because, if we can do that, it might give evidence for similar results for Nash equilibria. With that, I will conclude. Thank you.