Hi, my name is Ignacio Cascudo, from the IMDEA Software Institute in Madrid, and I'm going to present this joint work with Bernardo David from the IT University of Copenhagen, which is called ALBATROSS. As you can see from the title, and also from the image embedded in this slide, our paper is essentially about random coin-toss protocols. We are motivated by the notion of a randomness beacon that periodically outputs randomness to the parties in certain protocols. Most interesting for us is the application to proof-of-stake blockchains, where we need randomness every now and then to run the lotteries that select the leader for the next block. In such applications, we want beacons with public verifiability and guaranteed output delivery. There are a number of ways to construct such beacons, based on different cryptographic primitives. For example, verifiable random functions allow us to construct beacons with very little communication, but such constructions are biasable to some extent by the adversary, via the strategy of simply not communicating the output of an evaluation of such a function; later we will see a related problem in a different context. On the other hand, we have verifiable delay functions, which are not subject to this attack, but it is difficult to find practical parameters for these constructions, because they rely on timing assumptions, and one needs to find a set of parameters that is both efficient and has guaranteed security. Another way of constructing such beacons is based on publicly verifiable secret sharing (PVSS) schemes, and this is the one we are going to focus on in this work. To be a bit more concrete about the model: we have n parties that want to create a uniformly random element of some finite set, assisted by a public bulletin board where they can publish information.
Then we have an adversary that corrupts up to t parties in the protocol, and what is also relevant here is that the adversary is rushing, so the corrupted parties may be the last to speak in a given round. As we said before, we also require that the protocol be auditable by external verifiers. In order to understand how publicly verifiable secret sharing comes into the picture, let's first see what the problems are with more elementary approaches. To establish a random output, we could have each party choose a random element of some finite group and then output the sum of these elements. The output is then uniform if at least one of the values chosen by the parties is uniformly random and independent of the others. However, in a situation where the adversary is rushing, this does not work, because the adversary will just wait until the honest parties announce their inputs, then decide on the input of, say, one corrupted party and completely fix the output to whatever the adversary wants. So instead we could have each party commit to their input, and only once all have committed are the commitments opened, and we again output the sum of the elements. Now the adversary can no longer fix the output at will, because he doesn't see the inputs of the honest parties when he decides on the inputs of the corrupted parties. However, what the adversary can still do is decide not to open some of the commitments of the corrupted parties after seeing the openings of the honest parties.
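As a toy illustration of the rushing attack on the naive "everyone announces, output the sum" protocol, here is a minimal Python sketch; the modulus and party count are arbitrary choices of mine, not parameters from the paper:

```python
import secrets

q = 101  # toy group: integers modulo a small prime, under addition

# Four honest parties announce uniformly random contributions.
honest = [secrets.randbelow(q) for _ in range(4)]

# A rushing corrupted party speaks last, after seeing all honest values,
# and can therefore force any target output it likes.
target = 42
adv_input = (target - sum(honest)) % q

output = (sum(honest) + adv_input) % q
assert output == target  # the "random" output is fully adversarial
```

The same attack works in any finite group: the adversary just solves for the one contribution that steers the sum to its target.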
And so, if we define the output of the protocol to be just the sum of the opened values, because that's the only thing we know, the problem is that the adversary can choose among up to 2^t possible outputs, by just waiting until all the honest parties have announced their openings and then deciding which subset of corrupted parties will open their commitments and which will not. We also cannot declare that the protocol aborts if at least one party aborts, because then we wouldn't have guaranteed output delivery; moreover, even if the protocol terminates, we are not sure whether that is because everyone was honest or because the adversary just happened to see that the result was beneficial for him. To fix that, we introduce publicly verifiable secret sharing. We all know what secret sharing is: it is a way for a dealer to distribute a secret among a number of parties. She sends a share to each of these parties, in such a way that prescribed sets of parties can recover the secret from their shares, while other subsets of parties get no information about the secret. In publicly verifiable secret sharing, the delivery of the shares is done by encryption: all parties that are going to receive shares have a public key and a secret key, and the dealer encrypts the i-th share under the i-th public key. This allows the dealer to create a proof that the sharing is correct; for example, if you think of Shamir's secret sharing, the proof would say that the encrypted shares are indeed evaluations of a polynomial of a certain degree at certain points. Likewise, the parties that are going to reconstruct the secret later on can prove that they are doing so correctly, that is, that they are decrypting the shares and combining them correctly. So, how do we use PVSS to construct a random beacon?
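The withholding bias in the commit-then-open protocol can be enumerated in a few lines of Python; the concrete honest sum and committed values below are hypothetical, chosen just to make the 2^t candidate outputs visible:

```python
from itertools import combinations

q = 101
honest_sum = 10          # sum of the honest parties' opened values
corrupt = [7, 25]        # values the t = 2 corrupted parties committed to

# After seeing the honest openings, the adversary picks which corrupted
# commitments get opened; each subset yields a different protocol output.
outputs = set()
for r in range(len(corrupt) + 1):
    for subset in combinations(corrupt, r):
        outputs.add((honest_sum + sum(subset)) % q)

assert len(outputs) == 2 ** len(corrupt)  # 4 candidate outputs to pick from
```

The adversary waits, inspects all four candidates, and opens exactly the subset that yields the one it prefers.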
Well, we simply implement the commitments from before by having each party PVSS its value. That allows us, at the point where everyone has committed, to determine a set of correctly PVSSed values, and this is the set of values that we will sum to get the final output of the protocol. Then, when the parties open their commitments, if a party that correctly shared its value does not open it, this value can still be reconstructed by the rest of the parties, because it was a correct sharing. Ouroboros used this type of beacon, instantiated with the PVSS by Schoenmakers, which is secure for any honest majority. Some details about this PVSS, since we are going to build on it: we use a cyclic group of prime order q where the DDH assumption holds, and the parties that are going to receive shares each choose a secret key in Z_q and publish a generator h raised to that value. Now the dealer, in order to share a random secret, chooses a random polynomial of degree at most t and publishes the i-th public key raised to the i-th evaluation of this polynomial, for all i. She can also prove that this is done correctly, i.e., that the exponents really are evaluations of a polynomial of the right degree, by committing to the coefficients of the polynomial; this is done using some other generator g of the group. Once she has posted g raised to each coefficient of the polynomial, everyone can compute g raised to the evaluation of the polynomial at any point, and by combining this with a discrete logarithm equality proof we finally prove that the exponents in the encrypted shares are evaluations of the polynomial of the right degree.
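The sharing and reconstruction just described can be sketched end to end in toy Python. The order-11 subgroup of Z_23^* below stands in for a real DDH-hard group, and all names and parameters are illustrative assumptions of mine; the DLEQ proofs linking encrypted shares to the coefficient commitments are omitted:

```python
import secrets as rnd

# Toy parameters: order-11 subgroup of Z_23^*; g, h independent generators.
p_mod, q = 23, 11
g, h = 2, 4
n, t = 5, 2

def poly_eval(cs, x):
    """Evaluate a polynomial with coefficient list cs at x, over Z_q."""
    return sum(c * pow(x, j, q) for j, c in enumerate(cs)) % q

# Receivers' key pairs: sk_i in Z_q*, pk_i = h^sk_i.
sks = [1 + rnd.randbelow(q - 1) for _ in range(n)]
pks = [pow(h, sk, p_mod) for sk in sks]

# Dealer: random polynomial f of degree <= t; the secret is h^f(0).
fc = [rnd.randbelow(q) for _ in range(t + 1)]
enc_shares = [pow(pks[i], poly_eval(fc, i + 1), p_mod) for i in range(n)]
commitments = [pow(g, c, p_mod) for c in fc]  # g^{a_j} per coefficient

# From the commitments, anyone computes g^{f(x)} for any x; a DLEQ proof
# (not shown) then ties each encrypted share pk_i^{f(i)} to g^{f(i)}.
def g_to_f(x):
    acc = 1
    for j, cj in enumerate(commitments):
        acc = acc * pow(cj, pow(x, j, q), p_mod) % p_mod
    return acc

assert all(g_to_f(i + 1) == pow(g, poly_eval(fc, i + 1), p_mod)
           for i in range(n))

# Reconstruction: t+1 parties decrypt, share_i = enc_share_i^(1/sk_i)
# = h^f(i), and h^f(0) follows by Lagrange interpolation in the exponent.
ids = [1, 2, 3]  # 1-based indices of the t+1 reconstructing parties
dec = {i: pow(enc_shares[i - 1], pow(sks[i - 1], -1, q), p_mod) for i in ids}

def lam(i):
    """Lagrange coefficient for evaluation at 0, over Z_q."""
    num = den = 1
    for j in ids:
        if j != i:
            num = num * (-j) % q
            den = den * (i - j) % q
    return num * pow(den, -1, q) % q

secret = 1
for i in ids:
    secret = secret * pow(dec[i], lam(i), p_mod) % p_mod

assert secret == pow(h, poly_eval(fc, 0), p_mod)  # h^f(0) recovered
```

Note that exponent arithmetic happens mod q (the group order) while group arithmetic happens mod 23, mirroring how the scheme works in any prime-order group.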
To reconstruct, we need t plus one parties, or rather t plus one honest parties, but we have those because we have an honest majority. The parties simply decrypt their shares, obtaining the values h raised to the evaluations of the polynomial, and prove that they have computed these values correctly; then anyone can combine these shares to obtain h raised to the evaluation of the polynomial at the point zero, which is defined to be the secret. This is done by Lagrange interpolation in the exponent, since the evaluation of the polynomial at zero is a linear combination of its evaluations at t plus one points. Once we have done that, several parties will have correctly shared their values, and we have reconstructed the secrets, either by having the parties just open their values or by having the rest of the parties reconstruct them; so we have several values of this form, and to produce the output we just apply the group operation to all of them. This works, and in this paper we are especially interested in the computational complexity of the process: this beacon requires O(n^3) exponentiations in G per party if we say that t is of order n. It is true that these are maybe not full exponentiations, in the sense that most of them have bounded exponents; however, when n is large, this really becomes almost like O(n^3) random exponentiations in G, so to speak. In another paper, Bernardo and I had the construction SCRAPE, where we modify this proof of sharing correctness and bring the complexity of the beacon down to O(n^2) exponentiations in G per party. Note that in both cases, SCRAPE and Schoenmakers' PVSS, the output of the corresponding beacon is just one element of the group. In this work we present several contributions to this line of work. The first one is that we relax the corruption threshold: we will consider t to be some constant times n, but with the constant smaller
than one half, and then we can show that we obtain very nice amortized computational complexity improvements. This is based on two techniques: the use of packed secret sharing with a certain modified reconstruction schedule that I will explain later, and the use of a nice randomness extraction technique based on t-resilient functions. The second contribution actually applies to any value of t smaller than n/2, so it can also be applied to the previous beacons: we improve the step of proving sharing correctness. This is a concrete improvement, not an asymptotic one, but it is considerable. The third contribution is that we construct two versions of the protocol that are secure in the universal composability framework. To explain the first contribution, let L be a value such that t is smaller than (n minus L) divided by 2; basically, L will be a linear function of n. We then modify the sharing procedure: instead of Shamir secret sharing we take packed secret sharing, where the dealer chooses a polynomial of degree t plus L minus 1, and the number of secrets is L, so the sharing is basically the same. For the reconstruction we need t plus L parties, but under our assumption we have that many honest parties, and it works in the same way: the parties decrypt their shares and then apply Lagrange interpolation in the exponent to reconstruct each of the L secrets, which are now defined to be h raised to L evaluations of the polynomial at points different from those of the shares. Now this has a problem, because computing these values requires O(n^2) exponentiations, as each coordinate requires O(n) exponentiations to be computed, so it seems that we haven't won so much. However, we can notice that if there is already a reconstructor that has computed all these values, then any other party that wants to check the correctness of this reconstruction can do so with only O of
n exponentiations, by using a technique that we introduced in SCRAPE. It basically consists in choosing a codeword: if you look at the polynomial evaluated at the secret points and at the points corresponding to the t plus L shares, that forms a codeword of the code given by all evaluations of polynomials of degree at most t plus L minus 1. So you can now take a random codeword in the dual of that code, and it must happen that if you compute this expression here, it gives you the identity; conversely, if you compute this and it gives you the identity, then with very high probability these values are indeed the evaluations of the polynomial at the right points. The way this plays out in our beacon is that for each secret vector to be reconstructed, we choose a random small committee of reconstructors. One point here is that once the shares have been decrypted, everything can already be computed, the output is determined; we are just trying to find a way of computing it as efficiently as possible. So for that we choose a random small committee, as I said, of log n parties, and each party in that committee reconstructs the secret vector. Then the remaining parties can check whether at least one of these reconstructions is correct. If none is correct, that means all reconstructors were corrupted, and then the remaining parties can just compute the secret vector by themselves; but this will happen only for a small number of secret vectors. The other improvement that we have is the randomness extraction. After the reconstruction phase we will have reconstructed, as I said before, m vectors of this form: these are the vectors that were shared by all the dealers that shared their vectors correctly. But the adversary may have decided on t of these vectors, so we want the result of the computation to be unbiasable: it should be uniform conditioned on what the adversary has
decided. In order to do that, we apply a linear t-resilient function to every coordinate of these vectors. These functions were introduced by Chor et al. in 1985. Concretely, if this linear function is given by a matrix M, we apply M in the exponent to all these values and get some other values, and doing that in every coordinate, we define the result to be the output. I didn't say yet what a t-resilient function does, but it is basically what we need: it yields an output that is uniformly random as long as all but t of the coordinates of the input were uniformly random, so it doesn't matter that the adversary has fixed t of them. In fact, if we take t-resilient functions given by transposes of Vandermonde matrices, this allows us to have an output of size m minus t here, which means in the end that we can have an output of total size O(n^2). In fact, we can even take a matrix that is itself Vandermonde, not only its transpose, and then we can apply the fast Fourier transform in the exponent to compute this, and this gives us very nice complexity. All in all, what we have in the end is that our construction requires O(n^2 log n) exponentiations in the group per party to produce an output of size n^2, which means O(log n) exponentiations per output and per party. In the best case, where we don't need to enter the reconstruction phase because every party has revealed their committed values, the number of exponentiations per output and per party is constant: this is three orders of magnitude below the O(n^3) we started from, and that fits very well with golf nomenclature, where an albatross is three under par, hence ALBATROSS. Apart from these asymptotic improvements in the computational complexity, which apply to the case where t is a smaller fraction of n, we also get a concrete improvement in the case where t is any dishonest minority. So,
basically, if you remember, what the dealer needs to prove is that the encrypted shares are of this form: the public key of the i-th party raised to the i-th evaluation of a polynomial of the right degree t plus L minus 1. The proof that we introduce here is actually quite standard, or at least very similar to the usual sigma protocols: the dealer chooses another random polynomial r of the same degree and constructs the values given by the public keys raised to the evaluations of that polynomial; she then gets a challenge e, usually via Fiat-Shamir, and publishes the polynomial z, which is e times p plus r. This works because the verifier can now just check that this equation holds for every i, for every share, and checking this guarantees that the sigma_i are constructed in the right way. This avoids the discrete logarithm equality proofs that we had before and leads to a better concrete complexity; it is not an asymptotic improvement of any order, it just shaves off a constant factor. The third contribution that we have is to construct two versions of our beacon that are secure in the universal composability framework. One is based on non-interactive zero-knowledge proofs for discrete logarithm relations, using a UC-secure implementation of those; the second is based on a notion of UC-secure homomorphic commitments with a designated verifier, which basically means that the commitment is sent to the designated verifier, and this verifier can later prove its correct opening to third parties. It is interesting, actually, that this second construction only requires relying on the computational Diffie-Hellman assumption and not the decisional Diffie-Hellman assumption, so a weaker assumption. So that was it, thanks for watching.
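The sigma protocol for sharing correctness described in the talk can be sketched in toy Python: a tiny hand-picked group of order 11 inside Z_23^* stands in for a real DDH-hard group, the challenge comes from SHA-256 as a Fiat-Shamir stand-in, and all concrete parameters and names are illustrative assumptions of mine, not from the paper:

```python
import hashlib
import secrets as rnd

# Toy group: order-11 subgroup of Z_23^*, generator h.
p_mod, q = 23, 11
h = 4
n, deg = 4, 3  # deg plays the role of t + L - 1 in the talk's notation

def poly_eval(cs, x):
    """Evaluate a polynomial with coefficient list cs at x, over Z_q."""
    return sum(c * pow(x, j, q) for j, c in enumerate(cs)) % q

sks = [1 + rnd.randbelow(q - 1) for _ in range(n)]
pks = [pow(h, sk, p_mod) for sk in sks]

# Dealer: secret polynomial p, encrypted shares sigma_i = pk_i^{p(i)}.
pc = [rnd.randbelow(q) for _ in range(deg + 1)]
sigma = [pow(pks[i], poly_eval(pc, i + 1), p_mod) for i in range(n)]

# Commit: another random polynomial r of the same degree, a_i = pk_i^{r(i)}.
rc = [rnd.randbelow(q) for _ in range(deg + 1)]
a = [pow(pks[i], poly_eval(rc, i + 1), p_mod) for i in range(n)]

# Challenge via Fiat-Shamir; response z = e*p + r, coefficient-wise in Z_q.
e = int.from_bytes(hashlib.sha256(repr((sigma, a)).encode()).digest(),
                   "big") % q
zc = [(e * pj + rj) % q for pj, rj in zip(pc, rc)]

# Verifier checks pk_i^{z(i)} == sigma_i^e * a_i for every share i.
ok = all(
    pow(pks[i], poly_eval(zc, i + 1), p_mod)
    == pow(sigma[i], e, p_mod) * a[i] % p_mod
    for i in range(n)
)
assert ok
```

The check works because pk_i^{z(i)} = pk_i^{e*p(i) + r(i)} = (pk_i^{p(i)})^e * pk_i^{r(i)}, so each sigma_i is tied to an evaluation of a polynomial of the right degree without any per-share DLEQ proof.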