So this talk is about efficient shuffle arguments, and I will start with a little bit of motivation. The motivation is electronic voting. Let's say we have some number of voters. The voters send their votes to a bulletin board; this is basically some immutable storage system, either a blockchain or something more old-fashioned. From the bulletin board, the votes go to a mix network, and the purpose of this component is to anonymize the votes, that is, to disconnect each vote from its origin. From there, the votes go to some sort of threshold decryption service, and we get the tally. We will focus on this central component in this talk.

So as I mentioned, our goal here is anonymity, meaning that we don't want to reveal the origin of a ciphertext. For this we can use a mix network. It consists of a number of mix servers, or mixers. Voters send their ciphertexts to the first mixer, and the mixer permutes the ciphertexts. Of course, this alone is not enough, because you can still connect the input and the output just by looking at the ciphertexts. So you also have to blind the ciphertexts, meaning that you re-randomize them. The following mixers repeat the same process, until you get ciphertexts that are untraceable to the original ciphertexts, given that at least one of the mixers didn't reveal its secret permutation.

But what if one of the mixers is malicious? It turns out that such a mixer can completely break the system: for example, it can replace the ciphertexts with ciphertexts of its own, and in this framework you would have no way of finding out that something went wrong. The solution for this is a zero-knowledge argument. Essentially, we want each mixer to output a proof, and preferably this should be a non-interactive proof that you can verify maybe even a long time after the mixers are gone. So what is a non-interactive argument? We have two parties.
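Going back to the mix network for a moment: the permute-and-re-randomize step each mix server performs can be sketched with textbook ElGamal. This is an illustrative toy only (tiny parameters, no zero-knowledge proofs, not the scheme from the talk); all function names here are made up for the sketch.

```python
import random

p = 10007          # small prime for the demo only; real systems use ~256-bit groups
g = 5              # fixed base in Z_p^*

def keygen():
    sk = random.randrange(2, p - 1)
    return sk, pow(g, sk, p)

def enc(pk, m):
    r = random.randrange(2, p - 1)
    return (pow(g, r, p), m * pow(pk, r, p) % p)

def dec(sk, c):
    c1, c2 = c
    # c2 / c1^sk, with the inverse computed via Fermat's little theorem
    return c2 * pow(c1, p - 1 - sk, p) % p

def mix(pk, cts):
    """One mix server: apply a secret permutation, then re-randomize each ciphertext."""
    out = cts[:]
    random.shuffle(out)                       # secret permutation
    res = []
    for c1, c2 in out:
        s = random.randrange(2, p - 1)        # fresh blinding randomness
        res.append((c1 * pow(g, s, p) % p,    # Enc(m; r) becomes Enc(m; r+s)
                    c2 * pow(pk, s, p) % p))
    return res

sk, pk = keygen()
votes = [11, 22, 33, 44]
cts = [enc(pk, m) for m in votes]
for _ in range(3):                            # a chain of three mix servers
    cts = mix(pk, cts)
assert sorted(dec(sk, c) for c in cts) == sorted(votes)
```

The multiset of plaintexts is preserved, but without at least one honest mixer's secret permutation the input/output link is hidden; the point of the talk is proving that no mixer swapped in its own ciphertexts along the way.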
We have a prover and we have a verifier. The prover generates a proof, and if the prover is honest, then the verifier should accept the proof. A non-interactive argument should be zero-knowledge, meaning that it doesn't leak any information besides the validity of the statement. It should be sound, meaning that the prover can't prove any false statements. And sometimes we additionally want knowledge soundness, meaning that if the verifier accepts the proof, then the prover also knows a witness for the statement.

You also have to choose a security model. Unfortunately, we know that in the standard model you cannot get non-interactive zero-knowledge unless you work with trivial languages, which is not very interesting. You can get non-interactive zero-knowledge in the random oracle model, and in fact there are many efficient shuffle constructions in the random oracle model, so this is not very interesting for us either. Also, there are at least some special cases known where the random oracle model breaks down. So instead, we will consider the common reference string model in this talk, the CRS model for short. In this model we have a prover and a verifier, and additionally a trusted third party. This trusted third party generates for us a common reference string: basically some bit string with a very specific distribution. Now, using this CRS, the prover can pick a statement; the prover also has to know a witness for the statement. Using the witness and the CRS, the prover generates the proof, and the verifier will verify it and either accept or reject.

OK. Our research question here was: how to construct an efficient non-interactive shuffle argument in the CRS model? So what's the state of the art at the moment? Efficiency-wise, there are two relevant results. There's a result by Fauzi and Lipmaa from CT-RSA that achieves the fastest prover.
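The properties just listed can be stated a bit more precisely. The following is a standard informal formulation, not the exact definitions from the paper:

```latex
% Completeness: an honest prover convinces the verifier.
\[
\Pr\bigl[\mathsf{V}(\mathsf{crs}, x, \pi) = 1 \;:\;
         \pi \leftarrow \mathsf{P}(\mathsf{crs}, x, w)\bigr] = 1 .
\]
% Soundness: no efficient prover can make the verifier accept a false statement x.
% Zero-knowledge: a simulator holding a trapdoor for the crs produces proofs
%   indistinguishable from real ones, without knowing the witness w.
% Knowledge soundness: from any accepting prover an extractor can recover
%   a witness w for the statement x.
```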
And then there's a result from last year's AsiaCrypt by Fauzi, Lipmaa, and Zając that achieves the fastest verifier. So if you had to choose by efficiency alone, you would really have to think about which side you want to be faster, at least currently. Of course, these parameters are not all that matters. What is also very important is what type of cryptosystem you use. The CT-RSA result used ElGamal, which is very standard, whereas the AsiaCrypt result used a new cryptosystem, which is probably unfamiliar if you haven't read the paper, and which is also lifted, at least in the way they used it. This means that when you decrypt, you also have to compute a discrete logarithm, which in turn means that you can't have very long messages.

So what do we achieve? We are basically able to improve on almost all the parameters considered here, except for the CRS size, which is somewhere in between the two previous results. We also have an implementation. There have been quite a few of these CRS-model shuffle papers by now, but as far as we know, there has been no implementation, so you can finally see how it actually performs. But more about that later.

So, as the title of this session might indicate, we will use pairings. Some basic definitions: we have three bilinear groups G1, G2, GT of prime order p, with respective generators g1, g2, and gT. We use additive notation in all groups, together with bracket notation, which you also saw before: basically, instead of writing a times g1, I will write [a]1. And then we have a bilinear map, meaning that [a]1 paired with [b]2 gives me [ab]T in the target group. Also, as you heard just before, we will use the generic bilinear group model. In this model we essentially assume that the adversary is blind to the concrete structure of the group elements, so he does not look at the individual bits and so on.
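In this additive bracket notation, the rules used throughout the talk are:

```latex
\[
[a]_i := a \cdot g_i \in \mathbb{G}_i \quad (i \in \{1, 2, T\}),
\qquad
[a]_i + [b]_i = [a+b]_i,
\qquad
\hat{e}\bigl([a]_1, [b]_2\bigr) = [ab]_T .
\]
```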
He will only do generic operations, meaning addition, the pairing operation, and equality checks, and nothing more.

So what does our shuffle argument look like? Essentially, we have three components. First, we have a permutation matrix argument: we commit to a permutation matrix, and this argument shows that the commitment actually contains a permutation matrix. This is based on square span programs. Then we have a consistency argument, which essentially says that the permutation we committed to was actually used to shuffle the ciphertexts. This is based on ideas from a result by Groth and Lu, or not quite their result, but it builds on their ideas. Unfortunately, in these two arguments we need to use different commitment schemes, and to connect them we have a same-message argument, which essentially tells you that those commitments are to the same message. Here we use a recent result by Kiltz and Wee, which is a proof for linear subspaces.

So now a little more detail. First, the permutation matrix argument. What is a permutation matrix? It's basically a matrix where each row and each column has exactly one 1, and all the other entries are 0. And of course, if you multiply this matrix with a vector, then the vector gets permuted. We will use a vector commitment scheme based on polynomials. We have some polynomials Pi that are linearly independent, and the commitment looks as you see here: the Pi polynomials are evaluated at a secret random point, and this ρ is also some secret random value; all of this is in our CRS. To commit to a vector a, we multiply each Pi by the respective coordinate of the vector, and then we add some randomizer that hides the commitment. OK, and then we commit to each row of this permutation matrix.
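A plausible reconstruction of the commitment just described (the exact formula is on the slide, not in the transcript): with secret trapdoor values χ and ρ, the CRS contains [P_1(χ)]_1, ..., [P_n(χ)]_1 and [ρ]_1, and a commitment to a vector a with randomness r is

```latex
\[
\mathsf{com}(\boldsymbol{a}; r)
  \;=\;
  \Bigl[\, \sum_{i=1}^{n} a_i\, P_i(\chi) \;+\; r\rho \,\Bigr]_1 ,
\]
```

where the randomizer term rρ hides the committed vector.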
So as I mentioned before, what we want to show is that the commitments actually contain a permutation matrix. Actually, all of these components are already well known; what is new in this result is that we take those puzzle pieces and put them together, and it turns out that this gives a nice result. So for example, this part is essentially the AsiaCrypt 2016 result, or very similar to it. As stated, we prove it in the generic bilinear group model; since we also have a new CRS, we have to prove it again. Very briefly, what they do is build a unit vector argument from square span programs. And to get a permutation matrix argument out of this, you basically just need to check that all the committed vectors are unit vectors and that they sum to the all-ones vector. This is enough to get a permutation matrix.

Then we have the consistency argument. The point of this argument is to show that the commitment we had was actually used to permute the ciphertexts. As I mentioned before, the main idea comes from the Groth and Lu paper, which was the original CRS-based shuffle paper. The unfortunate thing is that if you use the same polynomial commitment scheme as before, the soundness will not quite work out. So we have a similar commitment scheme, but with different polynomials than before; essentially these have to be linearly independent from the previous polynomials, and there are a few other requirements. OK, then we again commit to each row, and we use a similar idea as in the Groth and Lu paper. Basically, in this argument alone, we make the assumption that these A-hat commitments contain a permutation matrix, and from that it follows that the shuffling was done correctly.
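The "unit rows summing to the all-ones vector" characterization is easy to check in the clear (the argument of course does this under commitments, not on plaintext matrices); a small sketch with made-up helper names:

```python
def is_unit_vector(v):
    """Exactly one entry equals 1, the rest are 0."""
    return sorted(v) == [0] * (len(v) - 1) + [1]

def is_permutation_matrix(M):
    """Every row is a unit vector AND the rows sum to the all-ones vector.
    Unit rows alone are not enough (two identical rows would pass);
    the column-sum condition rules that out."""
    n = len(M)
    rows_ok = all(len(row) == n and is_unit_vector(row) for row in M)
    cols_ok = all(sum(row[j] for row in M) == 1 for j in range(n))
    return rows_ok and cols_ok

def apply_matrix(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

P = [[0, 1, 0],
     [0, 0, 1],
     [1, 0, 0]]
assert is_permutation_matrix(P)
# A repeated unit row is caught by the column-sum check:
assert not is_permutation_matrix([[1, 0, 0], [1, 0, 0], [0, 0, 1]])
# Multiplying by P permutes the vector:
assert apply_matrix(P, [10, 20, 30]) == [20, 30, 10]
```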
Finally, we now have these two parts. For each row we have two commitments. For the first commitment, we know that it is part of a permutation matrix. For the second commitment, we know that if all the commitments together form a permutation matrix, then this matrix was actually used for the shuffling. But now we have to connect the two. Here we can nicely use the Kiltz and Wee result, which is a quasi-adaptive non-interactive zero-knowledge argument; quasi-adaptive means that there is a dependency between the CRS and the language. Their result basically allows you to prove membership in a linear subspace. In our case, the matrix M here would contain those commitment polynomials together with the ρ values. Completeness and zero knowledge follow easily from the Kiltz and Wee result, which is sound under a certain kernel MDH assumption. Unfortunately, we also need knowledge soundness here. I don't want to go too much into detail, but this consistency argument achieves a notion called culpable soundness, and to get rid of this notion for the whole shuffle argument, we need knowledge soundness in the same-message argument. Here we again use the generic bilinear group model.

OK. And finally, a little bit about our implementation. We used the libsnark library, which is also used to implement the SNARKs in the Zcash cryptocurrency. It's a C++ implementation. And you see quite practical efficiency numbers: in the first column, we see that for 100,000 ciphertexts, the prover time is around one minute. We can also separate both proving and verifying into two phases: an offline phase, which you can do before the voting starts, and an online phase, which runs when the actual votes come in.
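Going back to the same-message argument for a moment: the Kiltz-Wee QA-NIZK mentioned above proves membership in a linear subspace. For a language parameter [M]_1, the language is

```latex
\[
\mathcal{L}_{[M]_1}
  \;=\;
  \bigl\{\, [\boldsymbol{y}]_1 \;:\; \exists\, \boldsymbol{w},\;
            \boldsymbol{y} = M\boldsymbol{w} \,\bigr\} ,
\]
```

that is, the proof shows that the committed values lie in the span of the columns of M, which in the talk's setting ties the two commitment schemes to the same underlying message.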
What we see is that this online phase takes only 10 seconds. For the verifier, the total time is about one minute and 30 seconds, and the online time is around one minute. This was all done on my laptop; in a real-life election you would probably not run it on my laptop, but on some more powerful server. And this 3,300,000 is interesting, because that's about as many people as you have voting in Estonian elections. And if you are interested in checking the implementation out, here is the link. And that is all from me. Thank you.

Can you say something about the computational complexity for the prover and verifier?

First of all, everything is linear, but I guess you know that. At the very beginning I had a table where we used some units: we counted the number of exponentiations and pairings and gave some rough numbers to compare the schemes. So you can kind of see the relative speeds there. In the paper we have the actual numbers, that you have this many pairings and that many exponentiations and so on, but I don't remember them from memory.

OK, so you mentioned this as an alternative to different shuffle arguments. I was wondering what your opinion is in terms of security: you are proving this in the generic group model; compare that to, say, ElGamal shuffles in the random oracle model or something like that. Which one would you choose?

That's a hard question. One challenge with this CRS-based shuffle is that you also have to do the setup phase somehow. But there are many things to consider. One thing, for example, that I hear from practitioners is that latency is actually a big problem in this field, and with this efficiency you could easily use it. I think from the implementation you can see that you get efficiency that is good enough to use in practice. So I think you could use this here.

Any more questions?
It doesn't seem so. Thank you very much.