Hello everyone, I am David from Paderborn University in Germany, and today I will speak about the optimal tightness of verifiable random functions. Intuitively, we can think of verifiable random functions, or VRFs for short, as a combination of pseudorandom functions and digital signatures. That is, a verifiable random function, just like a PRF, is a pseudorandom keyed function. However, in contrast to a PRF, a VRF additionally produces, when evaluated, a non-interactive proof that allows anyone to publicly verify the correctness of the VRF output with respect to the input and the public key. Due to these properties, VRFs have found several applications, the most prominent one probably being their use in the consensus mechanisms of several proof-of-stake blockchains like Algorand, Dfinity, or Cardano. Other applications include ensuring privacy in key transparency systems and preventing zone enumeration attacks in DNSSEC via the proposed NSEC5 protocol.

During this talk, I will first quickly recall the syntax and the concrete properties of VRFs. Then I will discuss the tightness of security reductions for VRFs, why it is difficult to construct optimally tight reductions for VRFs, and finally how we can address these difficulties and actually construct an optimally tight reduction for a VRF.

So let us start by recalling the syntax and properties of VRFs. Syntactically, a VRF consists of three algorithms. First, a key generation algorithm that produces a pair of secret key and public verification key. Then, an evaluation algorithm that produces a VRF output Y for an input X. Just like for a PRF, this algorithm requires the secret key SK, and the output Y is supposed to be pseudorandom. However, the evaluation algorithm also produces a non-interactive proof of correctness pi. This proof can then be used with the verification algorithm to verify the correctness of Y with respect to the input X and the public verification key.

We require from our VRF that it is correct, meaning all honestly generated VRF outputs are accepted by the verification algorithm. We further require uniqueness, meaning that for every input X and every verification key VK, there is a unique output Y for which a valid proof pi exists. Note that we require this even for maliciously generated verification keys, not only for honestly generated ones.

Naturally, we also require pseudorandomness from a VRF, which we model using the following security experiment between a challenger and an adversary. The challenger first computes a key pair and gives the public verification key to the adversary, which can then make evaluation queries to the challenger. At some point, the adversary has to state a challenge input X star for which it has not queried an evaluation before. The challenger answers the adversary's challenge input with either the honest evaluation or with a uniformly random element from the VRF's range. However, the proof of correctness is not given to the adversary for the challenge input. The adversary may then make further evaluation queries for arbitrary inputs, of course except for the challenge input. At some point, the adversary has to submit a guess on whether it was given the honest evaluation of the VRF or a random element from the VRF's range for the challenge. As is common for such security experiments, we say that the VRF is pseudorandom if every polynomial-time adversary has only a negligible advantage in distinguishing an honest evaluation of the VRF from a uniformly random element of the VRF's range.
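To make the syntax and the pseudorandomness experiment concrete, here is a minimal Python sketch. The ToyVRF class and its HMAC-based internals are purely hypothetical placeholders chosen for illustration; they do not form a real VRF (in particular, the "proof" is not publicly verifiable and uniqueness does not hold). Only the gen/eval/verify interface and the flow of the experiment mirror the description above.

```python
# Minimal sketch of the VRF syntax and the pseudorandomness experiment.
# ToyVRF is a hypothetical stand-in, NOT a real VRF: its proof cannot be
# verified from vk alone and it offers no uniqueness guarantee.
import hmac, hashlib, secrets
from typing import Tuple


class ToyVRF:
    def gen(self) -> Tuple[bytes, bytes]:
        sk = secrets.token_bytes(32)
        vk = hashlib.sha256(sk).digest()        # placeholder verification key
        return sk, vk

    def eval(self, sk: bytes, x: bytes) -> Tuple[bytes, bytes]:
        y = hmac.new(sk, x, hashlib.sha256).digest()               # output Y
        pi = hmac.new(sk, b"proof" + x, hashlib.sha256).digest()   # stand-in proof pi
        return y, pi

    def verify(self, vk: bytes, x: bytes, y: bytes, pi: bytes) -> bool:
        # A real VRF checks (x, y, pi) against vk alone; this toy cannot.
        return True


def pseudorandomness_experiment(vrf: ToyVRF, adversary) -> bool:
    """One run of the game; returns True iff the adversary guesses correctly.
    The adversary is assumed (hypothetically) to expose choose_challenge and
    guess; the game additionally requires that the challenge input is never
    queried to the evaluation oracle, which this sketch does not enforce."""
    sk, vk = vrf.gen()
    evaluate = lambda x: vrf.eval(sk, x)        # evaluation oracle: (Y, pi)
    x_star = adversary.choose_challenge(vk, evaluate)
    b = secrets.randbelow(2)
    y_star = vrf.eval(sk, x_star)[0] if b == 0 else secrets.token_bytes(32)
    b_guess = adversary.guess(y_star, evaluate)  # no proof for the challenge
    return b_guess == b
```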
In terms of contributions, this paper revisits the lower bounds on the loss of reductions for public-key cryptography from Eurocrypt 2016 by Bader et al. and extends them to also apply to VRFs. Furthermore, it introduces the first construction of a VRF that can be proven secure with this optimal security loss. The construction is based on the VRF by Yamada from Crypto 2017.

Next, we will look at the tightness of security reductions for VRFs, and first of all, what tightness actually means and why we should care about it. When we speak about tightness, we usually do that in the context of concrete security. That is, we want to choose theoretically sound key sizes such that it takes at least a certain amount of time to break a scheme. Using a reduction from some established hardness assumption, we can choose the key sizes such that any adversary that is faster than what we aimed for would imply an algorithm for the underlying hardness assumption that is faster than the conjectured fastest possible algorithm for that assumption, and in particular faster than the most efficient known algorithm. However, some reductions incur significant computational overhead or only carry over a small fraction of the adversary's advantage. This means that a reduction can be of better or worse quality. We quantify this quality with the loss of a reduction. The loss of a reduction is the factor L by which the runtime of the assumption-solving algorithm implied by the reduction, divided by that algorithm's success probability, is larger than the respective ratio for the original adversary (see the short symbolic restatement after this passage). Thus, the loss captures how much less efficient the reduction is at breaking the hardness assumption compared to the efficiency of the adversary at breaking the security of the considered scheme. For the key sizes that we derive in this way, this means that they can be significantly larger or smaller depending on the quality of the reduction. Thus, a tight reduction allows us to use theoretically sound small key sizes and thereby improve the efficiency of our scheme.

Before we look into the loss of reductions for VRFs, let us first look into the loss for a closely related primitive, unique signatures. These are signature schemes where only a unique valid signature exists for each message. They are closely related to VRFs in the sense that if not only the VRF output Y but also the proof pi is unique, then the VRF is also a unique signature scheme: it then suffices to consider the pair of VRF output and proof as the signature. Indeed, most VRFs in the standard model have this property. Thus, let us look into lower bounds on the tightness of reductions for unique signatures. For unique signature schemes, we know that, informally, every reduction from a non-interactive hardness assumption to the unforgeability of the signature scheme has a loss of at least the number of signature queries of the forger, which we will denote by Q. Furthermore, this bound is tight, since there are unique signature schemes that can be proven secure with such a loss under non-interactive assumptions. The unique signature scheme by Lysyanskaya from Crypto 2002 is such an example.

The question now is: is the situation for VRFs similar or different, and if it is different, how different is it? And indeed, what we find is surprising. Namely, we find that the existing bounds only apply to VRFs with re-randomizable proofs pi and not to all VRFs; more precisely, they only apply to VRFs that have a unique or re-randomizable proof pi.
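For reference, the display below restates the definition of the loss in symbols; the notation, with t and epsilon denoting runtime and advantage of the adversary A and of the reduction R, is my own shorthand rather than a quote from the paper.

```latex
% Loss of a reduction R that turns an adversary A with runtime t_A and
% advantage eps_A into a solver for the assumption with runtime t_R and
% success probability eps_R: the factor by which the work factor degrades.
\[
  \ell \;=\; \frac{t_{\mathcal{R}} / \varepsilon_{\mathcal{R}}}
                  {t_{\mathcal{A}} / \varepsilon_{\mathcal{A}}}
  \qquad\text{i.e.}\qquad
  \frac{t_{\mathcal{R}}}{\varepsilon_{\mathcal{R}}}
  \;=\; \ell \cdot \frac{t_{\mathcal{A}}}{\varepsilon_{\mathcal{A}}}.
\]
% A reduction is called tight if \ell is a (small) constant; the lower bound
% for unique signatures discussed here says \ell >= Q is unavoidable.
```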
However, we are able to address this in the context of VRFs. That is, we extend the proof by Bader et al. from Eurocrypt 2016 to also apply to VRFs. We won't be able to go into the details of the proof, but I'd like to give a quick overview of the technique and an intuition on why the extension works. The lower bound is actually slightly more general than just for VRFs, namely it holds for verifiable unpredictable functions, or VUFs for short. These are syntactically identical to VRFs, but for security we require the adversary to forge an output instead of distinguishing it from randomness. Note that this forgery does not contain the proof of correctness, but only the VUF output. Thus, every secure VRF is also a secure VUF, but not the other way around.

If we now consider a security reduction, it usually looks as follows. We have a reduction that receives an instance of the hardness assumption, simulates the security experiment for the adversary, and in the end uses the adversary's solution to solve the instance of the hardness assumption. When we want to prove lower bounds on the tightness of a reduction, we usually do that with the meta-reduction technique by Coron. That is, we build a meta-reduction around the reduction that also solves the same hardness assumption. However, instead of using an adversary, the meta-reduction simulates an adversary for the reduction. Therefore, in this proof it is actually the reduction that solves the instance of the hardness assumption, not some adversary. This way, we can show that any polynomial-time reduction that has a loss of less than Q would be able to break the hardness assumption without the help of an adversary. In the meta-reduction by Bader et al., the re-randomizability or uniqueness of the signature scheme is used by the meta-reduction when it produces the forged signature of the simulated adversary for the reduction. However, for a VUF, this forgery only contains the VUF output and not the proof of correctness. Thus, the proof goes through even if the proof of correctness is not re-randomizable, because the meta-reduction never needs to give it to the reduction.

Now that we have established the lower bound on the loss for VRFs, let us consider the loss of previous VRFs. This table shows the loss of most previous VRFs in the standard model, in some cases for reasonable parameterizations. Unfortunately, the loss in the last column of all these constructions is significantly worse than what is indicated by our bound.

Next, let us look into why there is such a significant difference between the tightness of reductions for VRFs and for unique signatures. For this purpose, let us recall the uniqueness property of VRFs, which requires that there is a unique VRF output Y for each combination of input and verification key for which an accepting proof exists. For the reduction, this means that it implicitly has to commit to all VRF outputs the moment it gives the verification key to the adversary. It also cannot generate something like a lossy verification key that would allow it to indistinguishably forge VRF outputs, because the uniqueness property also has to hold for maliciously generated verification keys. Hence, there is no room for the reduction to lie to the adversary without being caught. In consequence, this limits us to so-called partitioning proofs in order to prove the pseudorandomness of a VRF.
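The following control-flow sketch in Python illustrates the general shape of such a partitioning reduction; all names and callables here are hypothetical abstractions, not the construction from the paper. Evaluation queries can only be answered on the controlled set, and a solution can only be extracted if the challenge falls into the uncontrolled set; everything else forces an abort.

```python
# Schematic control flow of a partitioning reduction (hypothetical names).
from typing import Callable, Optional, Tuple


class Abort(Exception):
    """Raised when a query or the challenge falls into the wrong set."""


def partitioning_reduction(in_uncontrolled: Callable[[bytes], bool],
                           simulate_eval: Callable[[bytes], bytes],
                           extract_solution: Callable[[bytes, int], object],
                           adversary: Callable[..., Tuple[bytes, int]]) -> Optional[object]:
    """Returns a candidate solution to the hardness assumption, or None on abort."""

    def eval_oracle(x: bytes) -> bytes:
        if in_uncontrolled(x):
            raise Abort                # cannot simulate evaluations here
        return simulate_eval(x)        # controlled set: answer the query

    try:
        x_star, guess_bit = adversary(eval_oracle)
        if not in_uncontrolled(x_star):
            raise Abort                # cannot embed the instance into the challenge
        return extract_solution(x_star, guess_bit)
    except Abort:
        # In the decision setting the reduction would output a uniformly
        # random bit at this point; here we simply report "no solution".
        return None
```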
In such a proof, the reduction randomly partitions the input space of the VRF into two disjoint sets, a so-called controlled set and an uncontrolled set. For inputs in the controlled set, the reduction can simulate answers to the adversary's evaluation queries, but it cannot extract a solution to the underlying hardness assumption if the adversary chooses the challenge input from this set. For the uncontrolled set, it is exactly the other way around. This means that for the reduction to be successful, we need the adversary to choose all evaluation queries from the controlled set, but the challenge query from the uncontrolled set. If this is not the case, the reduction can only abort and output a random bit. Indeed, this strategy suffices to prove optimal tightness for unique signatures.

The analysis of such a reduction in the computational setting, like for signatures, is straightforward. We only need to show that the probability that the reduction does not abort and the adversary is successful is non-negligibly large, here depicted as the green part of the picture. In the decision setting, however, like for example for VRFs, this becomes more difficult. Let us therefore consider this visualization. On the left side, we have the probability that the adversary wins, which we assume to be one half plus its non-negligible advantage. Now, we again have the event that the reduction does not abort. And obviously, if the adversary wins and the reduction does not abort, then the reduction is successful. However, this fraction is much too small to reduce to a decision assumption, like for example DDH. We thus also have to take into account that the reduction is also correct in half of the cases where it aborts and just outputs a random bit. Unfortunately, we are still not there, because the reduction wins in fewer than half of the cases where it does not abort, and what we need is more like this, where in the no-abort case the reduction wins with noticeably more than half probability. Showing that this holds has been rather cumbersome for most partitioning strategies, with whole papers devoted to optimizing this part of the proof, like the one by Bellare and Ristenpart at Eurocrypt 2009.

However, we actually can achieve this optimal tightness, and that is what we look into next. For the intuition behind our technique, let us slightly move the goalposts and assume that the adversary were so nice as to choose all its queries and its challenge uniformly at random from the domain of the VRF. If this were the case, we could just guess the first log Q plus one bits of the challenge input, meaning the reduction would abort if the first log Q plus one bits of the challenge do not match the guess, or if the first log Q plus one bits of any evaluation query do match the guess. Proving that this strategy yields a loss of only 8Q is then rather straightforward, and we refer to the paper for the formal proof. However, it only holds in this very simplified scenario, and the adversary usually won't be that nice. So in order to use this strategy, we have to use a further trick. This trick is to pass all inputs through a PRF first, before comparing them to the guessed prefix. Then, even if they are adversarially chosen, they will be distributed computationally indistinguishably from independent uniformly random values. However, this PRF evaluation has to happen outside of the view of the adversary and has to be incorporated into the proof of correctness. Fortunately, one of Yamada's VRFs from Crypto 2017 allows us to do exactly that.
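The small Python simulation below illustrates the prefix-guessing argument under the idealized assumption that all inputs are uniformly random, which is exactly what the PRF pass-through is meant to enforce against adversarially chosen inputs. The 128-bit input length and the trial count are arbitrary choices of mine; the estimated non-abort probability should come out noticeably above 1/(8Q).

```python
# Monte-Carlo estimate of the non-abort probability of the prefix-guessing
# partitioning strategy, assuming uniformly random queries and challenge.
import math, random


def non_abort_probability(q: int, trials: int = 100_000) -> float:
    k = math.ceil(math.log2(q)) + 1          # guess the first ceil(log Q)+1 bits
    n = 128                                  # bit length of the (PRF-hashed) inputs
    hits = 0
    for _ in range(trials):
        guess = random.getrandbits(k)
        queries = [random.getrandbits(n) >> (n - k) for _ in range(q)]
        challenge = random.getrandbits(n) >> (n - k)
        # No abort iff every evaluation query misses the guessed prefix
        # and the challenge input hits it.
        hits += all(p != guess for p in queries) and (challenge == guess)
    return hits / trials


if __name__ == "__main__":
    for q in (4, 16, 64):
        p = non_abort_probability(q)
        print(f"Q={q:3d}: Pr[no abort] ~ {p:.4f}   vs. 1/(8Q) = {1 / (8 * q):.4f}")
```

Intuitively, the challenge hits the guessed prefix with probability at least 1/(4Q), while all Q evaluation queries miss it with probability at least roughly one half, which is where the 8Q comes from.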
More specifically, Yamada's construction allows us to embed an arbitrary NC1 circuit in the VRF, such that the reduction can simulate an evaluation of the VRF if and only if the circuit evaluates to zero, and can extract a solution to the underlying hardness assumption if and only if the circuit evaluates to one on the challenge input X star. This embedding hides some chosen input bits as well as the internal state and the output bit of the circuit from the adversary. Thus, we can embed the evaluation of the PRF and the comparison with the guess by setting the PRF key and the guess as the secret input bits. Then, the partitioning proof we discussed before works out without any flaws. Yamada's VRF is based on the q-DBDHI assumption, where q is exponential in the circuit depth, meaning we may only use logarithmic-depth circuits, and even then the assumption, albeit non-interactive, is still relatively strong. Furthermore, proof and key sizes are modest, but certainly not as good as for constructions designed for efficiency. Nonetheless, this construction allows us to achieve a loss of at most 8Q and thus attain a reduction that is optimally tight up to a constant factor.

However, there are still some interesting research questions remaining. First of all, it would be interesting to apply these techniques to prove more constructions optimally tightly secure, but also to investigate whether we can strengthen the lower bound to also apply to interactive assumptions and larger classes of reductions. A good starting point for this would be the meta-reduction technique by Morgan and Pass for unique signatures from TCC 2018. Finally, achieving optimally tight reductions from weaker or maybe even standard assumptions would also be nice, since our current construction only works from the q-DBDHI assumption, but there exist VRF constructions from standard assumptions, like the one by Hofheinz and Jager from TCC 2016, or the ones by Rosie and Kohl from 2018 and 2019.

In conclusion, we first showed that the lower bound on the loss of many public-key primitives by Bader et al. from Eurocrypt 2016 also applies to all VRFs in the standard model. Thus, every reduction from a non-interactive complexity assumption to the security of a VRF necessarily has a loss of at least Q. We then showed that this bound is tight by presenting a VRF with an accompanying partitioning proof that achieves this optimal loss up to a small constant factor. The technique builds on homomorphically evaluating a PRF and might also be applicable to further scenarios where this is possible and partitioning proofs are a suitable technique. Finally, if you have any questions, feel free to send me a mail or contact me during the conference. Thank you all for your attention and have a nice day.