So we move to the second talk, which is called "A Shuffle Argument Secure in the Generic Bilinear Group Model", by Prastudy Fauzi, Helger Lipmaa, and Michał Zając, and Michał is giving the talk.

Yeah. Hello, everyone. Let me talk about our result, a shuffle argument secure in the generic bilinear group model. So what we present is basically a new efficient CRS-based non-interactive zero-knowledge shuffle argument. And this argument is over four times more efficient in verification than previous work. And here we argue that verification time is much more important than proving time. It's because in the non-interactive zero-knowledge setting, you usually prove something once, and then it needs to be verified many times, or by many, many participants. And our soundness proof holds in the generic bilinear group model. Now, this proof is quite complicated. Maybe there's no deep math in here, but the complication comes from the huge number of polynomial equations. So we use a computer algebra system to solve it. In particular, we needed to compute Gröbner bases.

So let me talk about the motivation for shuffle arguments. One motivation is e-voting. In e-voting, we have a voter, say Bob. Bob wants to vote in some kind of election. So he uses his computer to send his vote to some server. The server then counts the votes, and everything is perfect. But as history tells us, it's not who votes that counts, but who counts the votes, right? So mix networks give us two properties, anonymity and correctness. Correctness comes from the fact that the data is public, and anonymity comes from the fact that the source of the data is private.

OK, so here is a simple mix network. We have a few voters. The voters send encrypted votes to a mix server. The mix server then picks some random permutation π along with randomness r. What the mix server does is take the input ciphertexts, permute them using the permutation π, and re-randomize them using the randomness r.
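The permute-and-re-randomize step just described can be sketched with a toy ElGamal mix server. This is my own illustration with insecure toy parameters and made-up helper names, not the scheme from the talk:

```python
import random

# Toy ElGamal mix server: illustrative sketch only -- the group here is
# tiny and insecure, and all names are invented for this example.
p = 467                       # prime modulus
q = 233                       # prime order of the subgroup (q divides p - 1)
g = pow(2, (p - 1) // q, p)   # generator of the order-q subgroup

sk = random.randrange(1, q)   # key held only by the decrypting machine
pk = pow(g, sk, p)

def encrypt(m, r):
    # ElGamal encryption of a group element m with randomness r
    return (pow(g, r, p), m * pow(pk, r, p) % p)

def rerandomize(ct, s):
    # Multiply by an encryption of 1: same plaintext, fresh randomness
    c1, c2 = ct
    return (c1 * pow(g, s, p) % p, c2 * pow(pk, s, p) % p)

def decrypt(ct):
    c1, c2 = ct
    # c1^(q - sk) equals c1^(-sk), since c1 lies in the order-q subgroup
    return c2 * pow(c1, q - sk, p) % p

def mix(cts):
    # One mix server: pick a random permutation pi and fresh randomness,
    # then output the permuted, re-randomized ciphertexts.
    n = len(cts)
    pi = random.sample(range(n), n)
    return [rerandomize(cts[pi[i]], random.randrange(q)) for i in range(n)]
```

Running the votes through two such mix servers and then decrypting yields the same multiset of plaintexts, while the link between input and output positions is hidden by the permutations.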
Then the server sends them on to the next mix server, which does exactly the same, but of course it chooses the permutation π and the randomness on its own. Then everything goes to some decrypting machine that knows the secret key, and this machine publishes the plaintexts. Of course, we can have many more mix servers here. The security assumption is that if at least one server is honest, then security holds: the data is private, and the result is correct.

OK, so this setting gives us privacy against each individual server, because if one server is honest, then the next server knows nothing about which ciphertext comes from which user. But what if a server cheats? In this setting, there is nothing we can do about it. So we need to change it a little. We give every mix server an additional public key, and we demand that every server prove that it has done the shuffling correctly, that is, that it honestly output a permutation of the ciphertexts. It means that, in fact, the ciphertexts D here are permuted re-randomizations of the ciphertexts C, and the server sends this proof along with the new ciphertexts to the next mix server. The next mix server verifies all the previous proofs, shuffles the ciphertexts, and sends them on. And in the end, the decrypting machine verifies all the proofs. So here we see why verification time is more important: every server proves once, but verifies all the proofs from the servers that were in the line before it. So in this setting, we have both anonymity and correctness.

So, the shuffle argument: we give an efficient zero-knowledge argument of correctness of shuffling. As I said, the mix server permutes the ciphertexts, re-encrypts them, and provides a proof that this was done correctly. But the problem is that in the CRS model, existing arguments were not very efficient. Here we have a small comparison. In this paper, we achieve a much better CRS length. And it matters, because this n here is the number of ciphertexts.
So in a national election, for example, it can be like 10 million, 20 million, or more. In communication, we do much better. And here, compared to the Fauzi-Lipmaa paper, we do a little bit worse in terms of the prover's time, but on the other hand, we are much better in the verification phase. Furthermore, we achieve full soundness. And we work in the generic bilinear group model. A shuffle argument is usually quite complicated, so known assumptions about bilinear groups are not enough. They don't fit well because they are usually quite simple and don't give us all the properties we need. So up to now, it has usually been the case that shuffle arguments use quite new bilinear pairing assumptions, or even introduce new ones. We get rid of these assumptions, in a sense.

OK, so I think this slide has appeared a few times already at this conference, but let me remind you what zero knowledge in the CRS model is. We have a trusted third party that produces the CRS, the common reference string. We have the prover here and the verifier here. The trusted third party sends the CRS to both parties. The prover knows some instance of a problem along with a witness, while the verifier knows only the instance. Then the prover sends a proof π that, in fact, x belongs to some language L; he proves that he knows the witness w. And then the verifier accepts or rejects. Of course, in every zero-knowledge setting, we need a simulator. In the CRS setting, the simulator is given some additional power called the trapdoor. And the simulator, who knows the trapdoor but doesn't know the witness, simulates the proof that x belongs to the language.

So basically, we have three properties for every zero-knowledge argument. Completeness, which means that the verifier accepts if the proof is correct. Soundness, which means that it's hard for a malicious prover to make the verifier accept if x doesn't belong to the language. And zero knowledge, which is defined via this simulator.
OK, so let me tell you something about bilinear pairings. We have three groups of the same order; we denote them by G1, G2, GT. We know generators of these groups. And we have a bilinear map: it's a function from G1 × G2 to GT. The requirements are that it is efficiently computable, non-degenerate, and, of course, bilinear. And there are some assumptions about the pairing. The very basic assumption is that inverting the pairing should be hard: given the pairing of A and B, it's hard to compute either A or B. So this is like the analogue of discrete log: given g to the a, it's hard to compute a.

But what else should be hard? This alone is not enough. In the shuffle world, it usually looks like this: we have some protocol, and then we have a bunch of assumptions on bilinear pairings. There are hundreds of them, and many were introduced in the last few years. But as I said, the shuffle argument is a quite complicated argument, so to make the protocol efficient, one usually needs to introduce a bunch of new assumptions. And how do we verify these assumptions? Usually in a generic model; holding in the generic model is like a minimal requirement for an assumption. This setting is not bad if the assumptions are well known and well justified, or if we don't introduce many new ones. But in fact, here, we need to bend our protocol to make it fit these assumptions, and that comes with a loss in efficiency.

So what we use in this work is a purely generic model; we were able to simplify this picture to this. We know that the generic group model holds only for some restricted adversaries, but on the other hand, it's very, very efficient. So in the generic bilinear group model, you have the assumption that the adversary has only restricted access to group elements: he knows the group operation and the bilinear map, and can do equality tests.
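The restricted view just described can be made concrete: since a generic adversary only ever manipulates exponents, a bilinear pairing can be modeled with plain modular arithmetic. This is a toy sketch of my own, not code from the paper; the group order q is illustrative.

```python
# Toy model of a bilinear group: an element g1^a of G1 is represented
# only by its exponent a modulo the group order q (likewise for G2, GT).
# This is exactly the view an adversary has in the generic bilinear
# group model.
q = 233  # illustrative prime group order

def pairing(a, b):
    # e(g1^a, g2^b) = gT^(a*b): the pairing multiplies exponents
    return (a * b) % q

def gt_mul(x, y):
    # Multiplying GT elements, gT^x * gT^y = gT^(x+y), adds exponents
    return (x + y) % q
```

In this model the bilinearity requirement, e(g1^(a+a'), g2^b) = e(g1^a, g2^b) · e(g1^a', g2^b), is just the distributive law on exponents.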
And if he wants to compute some element of a group, he needs to know elements of that group that give him this element. So basically, if there is an element in a group in the generic bilinear group model, then we know the polynomial, in the exponent, that creates this element. In general, in the generic bilinear group model, one would treat GT as a generic group as well. Here, we use something called the semi-generic bilinear group model, because we do not handle GT as a generic group. This comes from the fact that for a lot of bilinear pairings, the group GT sits inside some finite field, and it's hard to restrict access to the elements of a finite field.

So how does soundness in the GBGM look? The trusted third party chooses some number of random variables and produces the CRS. The CRS is a set of polynomials in these variables, and everything is given in the exponent. Then the adversary can take elements from the CRS and make any linear combination of them. And then what we do is some quadratic tests: we use bilinear maps and check whether our verification equation holds. So we need to make sure that every verification equation is zero. In general, the adversary chooses the coefficients of these linear combinations, and then we check this set of equations. And we need to show that the only solution we get is somehow nice, which means that the adversary cannot deviate from the protocol.

So when we constructed the argument, we needed to decompose it into smaller building blocks; we decompose it into subarguments. And we demand that every subargument is efficiently verifiable, and we make sure that every subargument is sound independently. Then, of course, it may be the case that we need some CRS elements to prove the soundness of one argument, and the CRS used in each argument separately is not the same CRS. So we need to compose all the CRSs into one big CRS. But then, if we add elements to the CRS, the adversary becomes more powerful. So we need to check again whether the protocol is still sound.
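The soundness game above can be sketched concretely: CRS elements are polynomials in the trapdoor variables, the adversary outputs linear combinations of them, and a verification equation that must be identically zero forces the coefficient of every monomial to vanish. The representation below is my own minimal sketch, not the authors' code; the real system in the paper is far larger.

```python
from collections import defaultdict

# A polynomial in the trapdoor variables, e.g. 3*x*y + 2, is stored as
# {monomial: coefficient}, the monomial being a sorted tuple of variable
# names: {('x', 'y'): 3, (): 2}. CRS elements are such polynomials "in
# the exponent"; a generic adversary only forms linear combinations of
# them, and each pairing check multiplies two such combinations.

def poly_add(p1, p2):
    out = defaultdict(int)
    for p in (p1, p2):
        for mono, c in p.items():
            out[mono] += c
    return {m: c for m, c in out.items() if c != 0}

def poly_mul(p1, p2):
    out = defaultdict(int)
    for m1, c1 in p1.items():
        for m2, c2 in p2.items():
            out[tuple(sorted(m1 + m2))] += c1 * c2
    return {m: c for m, c in out.items() if c != 0}

def scale(p, k):
    return {m: k * c for m, c in p.items() if k * c != 0}

def coefficient_equations(verification_poly):
    # The verification equation must be identically zero, so every
    # surviving monomial yields one equation "coefficient = 0".
    return list(verification_poly.items())
```

For instance, the identity (x + y)(x - y) = x^2 - y^2 expands to the empty polynomial when subtracted from itself, while any monomial with a non-zero coefficient contributes one equation to the system the CAS must then solve.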
And if the composed protocol is not sound, we usually add a new random variable that will be used only in that subargument. OK, so our subarguments go as follows. We start with the permutation matrix argument. In this argument, the prover commits to some permutation and proves that he committed to it correctly. Then there comes the consistency argument, where the prover proves that he used the same permutation he committed to when shuffling the ciphertexts, and the validity argument, where the prover proves that the ciphertexts are formed correctly. "Correctly" here means that the soundness proof holds: with each subargument alone the prover could still deviate a little, but all together they make sure that he does not.

OK, so let's focus on the permutation matrix argument. What's a permutation matrix? I think that's clear. We use two subarguments inside this subargument. First, that the matrix is stochastic, so the rows sum to the all-ones vector. And second, that each row is 1-sparse, meaning that at most one coefficient is non-zero.

OK, so the 1-sparsity argument looks as follows. The prover commits to the elements of a; basically, it's a Pedersen commitment, but written in the exponent in this notation. Then comes an argument derived from square span programs. The proof looks as follows, and the verification equation is just a quadratic equation that uses these a_i's and some elements from the CRS; it takes the proof given by the prover and checks whether the equation equals zero. But of course, the adversary can deviate from the protocol. So here is the form of a_i given by an honest prover. But in fact, the adversary can produce a_i in a very different way, right? As I said, he can pick the coefficient next to every element of the CRS as he wishes, so he can produce his own a_i in a very sophisticated way. The same goes for a_2. And of course, he is not restricted to the honest way of producing π; as before, he can take any linear combination of CRS elements he wishes. But still, the verification equation is out of his hands.
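The characterization behind these two subarguments, that a matrix is a permutation matrix exactly when its rows sum to the all-ones vector and each row is 1-sparse, can be checked directly. A minimal sketch of my own, with hypothetical helper names:

```python
def rows_sum_to_ones(M):
    # "Stochastic": the rows, summed as vectors, give (1, 1, ..., 1),
    # i.e. every column of M sums to 1.
    return all(sum(col) == 1 for col in zip(*M))

def rows_are_1_sparse(M):
    # Each row has at most one non-zero coefficient.
    return all(sum(1 for v in row if v != 0) <= 1 for row in M)

def is_permutation_matrix(M):
    # The two subargument conditions together force a permutation matrix:
    # n rows with at most one non-zero entry each must cover all n
    # columns, each summing to 1, so every row has exactly one entry,
    # equal to 1, in a distinct column.
    return rows_sum_to_ones(M) and rows_are_1_sparse(M)
```

Neither condition suffices alone: a matrix with a row containing two non-zero entries can still be stochastic, and a 1-sparse matrix can have column sums other than 1, which is why both subarguments are needed.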
The verification equation is given in the proof system and is checked by the verifier, so it is still the case that the verification equation must hold. As I said, the idea of the proof is to make sure that here, let's say here, the adversary cannot come up with something different from what an honest prover would produce.

OK, so what does solving the system of polynomial equations look like? We need to find coefficients such that this verification equation holds. And we begin by noticing that the monomials built as products of the random variables are linearly independent, so we can focus on the coefficients next to such tuples. We know that if the whole verification equation is to be 0, then every coefficient next to such a tuple needs to be 0 as well. This gives us a system of polynomial equations, and the system is huge. Maybe 20 polynomial equations is not huge, but let me remind you that we are talking about a subargument of a subargument; altogether, we have a lot of polynomial equations. That's why we use a computer algebra system.

So this is an exemplary system of equations, and here comes the solving. We mix the computer algebra system with manual labor; that is probably because we are not so good at coding, and maybe one could get everything from the CAS. We use the linear independence of the polynomials given in the CRS to split some coefficients and get more and more equations, but simpler ones. We compute the Gröbner basis and then solve it. This can, of course, be done manually, but we use the computer algebra system. And finally, we obtain that this a_i is in fact as an honest prover would produce it. Thank you very much.

[Chair] We have time for one quick question. [Speaker] In terms of communication, or...? OK. So let me begin with the fact that Bayer-Groth is a very nice protocol, but it's interactive. In terms of communication, I'm not sure how we compare to Bayer-Groth. I'm trying to go back to the table.
Well, there are some problems with communication, because in every shuffle argument you need to send, like, millions of ciphertexts over the network. So basically, the order of magnitude is still pretty bad; there's still a lot of data we need to send. So we didn't focus on it very much. I think I skipped it. Where is it? Well, communication is 7n + 3. I don't recall now how this looks in the Bayer-Groth setting. Oh, right, right, because they use sublinear techniques. Yes, so we do much worse here. OK, let's thank Michał again.