And Stephanie Bayer is giving the talk. Hello. Today I want to give a short overview of how one can construct an efficient zero-knowledge argument for correctness of a shuffle. This is joint work with Jens Groth from University College London, and I will start with some motivation. I picked e-voting as the motivation. We all know how a vote works: the voters cast their secret votes, and in the evening the authority reveals the votes in randomly permuted order. In the e-voting scheme we are looking at, the voters cast their votes on a computer and send them to a server. This server sends all the votes on to a central authority, and the central authority reveals the votes in randomly permuted order. Obviously, to have a safe and secret vote, we have to encrypt the votes, so we will be using ElGamal encryption in our setup. The property we are most interested in is the homomorphic property: if we take two ciphertexts and multiply them together, we get a new ciphertext which contains the product of the original messages. With this we can define a re-encryption operation. Given a ciphertext, we want a new ciphertext which contains the same message but looks completely different, and we get it by multiplying the original ciphertext by a random encryption of one. We use this re-encryption to define a shuffle. We have a bunch of input ciphertexts; we pick a permutation and permute the input ciphertexts, and then we re-encrypt them by multiplying an encryption of one onto each. This gives us the output ciphertexts. Because of the re-encryption, the output ciphertexts look completely different from the input ciphertexts, and because we permuted the ciphertexts, nobody can tell which input ciphertext belongs to which output ciphertext. So in our setup, we vote and send the votes to the server, and the server shuffles the votes and sends them to the central authority.
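The homomorphic property and the re-encryption operation just described can be sketched in a few lines. This is a toy model for illustration only, not the talk's implementation: the group parameters are tiny and insecure on purpose.

```python
import random

# Toy ElGamal in the order-11 subgroup of Z_23^* (illustrative parameters
# only; a real deployment uses cryptographically large groups).
p, q, g = 23, 11, 4

sk = random.randrange(1, q)
pk = pow(g, sk, p)

def enc(m, r=None):
    """Encrypt a subgroup element m with randomness r."""
    if r is None:
        r = random.randrange(1, q)
    return (pow(g, r, p), m * pow(pk, r, p) % p)

def dec(c):
    a, b = c
    return b * pow(a, q - sk, p) % p  # a^(q-sk) = a^(-sk), since a^q = 1

def mult(c1, c2):
    """Homomorphic property: componentwise product multiplies the messages."""
    return (c1[0] * c2[0] % p, c1[1] * c2[1] % p)

def reencrypt(c):
    """Same message, fresh-looking ciphertext: multiply by Enc(1)."""
    return mult(c, enc(1))

assert dec(mult(enc(2), enc(8))) == 16   # Enc(2) * Enc(8) decrypts to 2 * 8
assert dec(reencrypt(enc(13))) == 13     # re-encryption preserves the message
```

The last two assertions are exactly the two properties the shuffle relies on: multiplying ciphertexts multiplies messages, and re-encryption changes the ciphertext without changing the plaintext.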
To make this more secure, we use a mix-net in between. We have our voters; the voters encrypt their votes and send them to the first server, which picks a permutation, shuffles everything, and sends it on to the next server. This next mix server again picks a permutation, shuffles, and sends it on, so we are using a lot of mix servers here. In the end everything gets decrypted using threshold decryption. Threshold decryption ensures that all parties have to work together, so no single party can break anonymity. The output consists of the original messages which we put into the mix-net, but in permuted order. This permutation pi is the product of all the permutations used in the mix-net, which means each server knows only its part of the permutation, nobody knows the whole permutation, and so nobody can link a message to a person. But what happens if one mix server is corrupt? Let's assume the first mix server is corrupt, and instead of shuffling everything, it replaces some votes with new ciphertexts. Nobody can see that it replaced votes: the shuffling operation normally prevents people from linking input to output, and a replacement ciphertext looks completely different from the input as well. So what happens in the end, after the remaining shuffling? We see after decryption that part of the messages have been replaced with something new, and in a voting scheme this means the server can change the outcome of the election. We have to prevent this, and to do so we force every mix server to send, together with its output ciphertexts, a zero-knowledge argument. If the next mix server accepts the argument, we know no message was changed, because a zero-knowledge argument is sound.
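As a tiny sketch of why no single server learns the link, here is just the permutation bookkeeping for a hypothetical two-server chain, ignoring the encryption: the output order is governed by the composed permutation, and each server holds only one factor of it.

```python
import random

# Each mix server applies its own secret permutation; the overall shuffle
# is the composition, which no single server knows in full.
n = 5
msgs = ["vote%d" % i for i in range(n)]

pi1 = random.sample(range(n), n)   # first server's secret permutation
pi2 = random.sample(range(n), n)   # second server's secret permutation

def apply_perm(pi, items):
    # output position j receives input item pi[j]
    return [items[pi[j]] for j in range(len(items))]

out = apply_perm(pi2, apply_perm(pi1, msgs))

# the composed permutation pi, with out[j] = msgs[pi[j]]; reconstructing it
# requires knowing BOTH pi1 and pi2
pi = [pi1[pi2[j]] for j in range(n)]
assert out == [msgs[pi[j]] for j in range(n)]
```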
We also know in the end, after decryption, that all messages are the same as those we put into the mix-net, and we know the permutation is still secret, because we are using a zero-knowledge argument: each part of the permutation, pi_1, pi_2, and so on, stays secret. Let's have a closer look at a zero-knowledge argument. We have our prover and our verifier. Both know the public key for the encryption, the input ciphertexts, and the output ciphertexts. The prover also knows the permutation and the randomness he used in the shuffle. They talk back and forth: the verifier sends challenges to the prover, the prover has to answer them, and in the end the verifier should accept and say, yes, the shuffle was done correctly. What we want, obviously, is that if the prover does everything correctly, the verifier accepts, and that if the prover tries to cheat, the verifier rejects with overwhelming probability. The easiest way to get soundness and correctness is for the prover to give the verifier his permutation and the randomness he used. The verifier takes the input, constructs the output himself, compares the original output with this new output, and accepts if they are equal. But that means we have to open the permutation and the randomness, so the verifier learns that part of the permutation. We have a whole mix-net, and every server sends such an argument that it has done everything correctly, so in the end everybody learns the permutation, and that contradicts anonymity. That's the reason it is important to have a zero-knowledge argument, so that nothing but the truth of the statement is revealed, and the permutation and the randomness stay secret. And we are looking at real-life applications, so it should be efficient: we want small communication and small computation cost.
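The naive check just described, where the prover opens pi and the randomness and the verifier redoes the shuffle himself, can be sketched like this. Again toy ElGamal parameters, chosen purely for illustration; the point is that the check is sound but only works because the permutation is revealed.

```python
# Toy ElGamal in the order-11 subgroup of Z_23^* (illustration only).
p, q, g = 23, 11, 4
sk = 5
pk = pow(g, sk, p)

def enc(m, r):
    return (pow(g, r, p), m * pow(pk, r, p) % p)

def mult(c1, c2):
    return (c1[0] * c2[0] % p, c1[1] * c2[1] % p)

inputs = [enc(2, 3), enc(8, 7), enc(13, 2)]
pi = [2, 0, 1]     # prover's secret permutation
rs = [4, 1, 6]     # prover's re-encryption randomness
outputs = [mult(inputs[pi[j]], enc(1, rs[j])) for j in range(3)]

# naive verification: the prover opens (pi, rs), the verifier recomputes the
# shuffle and compares -- sound, but the permutation is no longer secret
recomputed = [mult(inputs[pi[j]], enc(1, rs[j])) for j in range(3)]
assert recomputed == outputs

# a cheating server that replaced a vote is caught by the recomputation
cheat = outputs[:]
cheat[0] = enc(9, 5)   # replaced vote
assert cheat != recomputed
```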
In e-voting we want to have the results as soon as possible after the polling stations close, not days or weeks later. More precisely, we have the prover, the verifier, and the statement, and both parties also know the setup of the group we are using for the ElGamal encryption and a common reference string. The verifier picks his challenges uniformly at random from Z_q, so we are in the public-coin setting, and we are constructing an honest-verifier zero-knowledge argument. That means as long as the verifier is honest, he learns nothing. This is not quite what we want in real life, but it is possible to convert an honest-verifier zero-knowledge argument into a standard zero-knowledge argument with only constant overhead. Our contribution is an honest-verifier zero-knowledge argument for correctness of a shuffle in the common reference string model which takes nine rounds. As for complexity: for N = m times n ciphertexts we have sublinear communication of (m + n) times k bits, so for instance if we pick m equal to the square root of N, we get communication on the order of the square root of N. Our prover takes on the order of N log m exponentiations, and the verifier's number of exponentiations is linear in N. Let's compare this to former work.
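Before the comparison, one building block worth seeing concretely: the argument relies on a length-reducing homomorphic commitment, the generalized Pedersen commitment discussed next. Here is a toy sketch with illustrative (insecure) parameters: a whole vector is committed to in a single group element, and multiplying commitments adds the committed vectors.

```python
# Toy generalized Pedersen commitment to a length-n vector, in the order-11
# subgroup of Z_23^* (illustrative parameters, not secure sizes).
p, q = 23, 11
gs = [4, 16, 18]   # generators g_1..g_n in the subgroup
h = 3              # independent generator hiding the randomness

def commit(vec, r):
    """com(vec; r) = h^r * prod_i g_i^{vec_i}: one element for a whole vector."""
    c = pow(h, r, p)
    for g_i, a_i in zip(gs, vec):
        c = c * pow(g_i, a_i % q, p) % p
    return c

a, b = [1, 2, 3], [4, 5, 6]
ra, rb = 7, 9
# homomorphic: com(a; ra) * com(b; rb) = com(a + b; ra + rb)
lhs = commit(a, ra) * commit(b, rb) % p
rhs = commit([x + y for x, y in zip(a, b)], ra + rb)
assert lhs == rhs
```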
So we see that our verifier takes 4N exponentiations, and this is as low as one can get using ElGamal encryption. If we are not using any special tricks to calculate the exponentiations, our prover takes 2 N log m exponentiations if we keep constant rounds. That is not really optimal, we would like something linear here, but if m is not chosen too big we are still in the same range as former work, and even if m is a little bit bigger, we have a very quick verifier, so the total time of a shuffle argument is in the same range as before. We can also get the cost of the prover down to something linear if we allow more interaction between the verifier and the prover. We also see that our communication cost is the lowest so far; nobody achieved this before, and there are a few reasons why we got it. The first is the kind of commitment we use. We have one vector and we want to commit to it in one single element, and this single element should be smaller than the original vector, so we want a length-reducing commitment scheme. The commitment scheme should also be computationally binding and perfectly hiding, and it should be homomorphic: if we take two commitments to two different openings and multiply them, we get a commitment to the sum of the openings. The commitment scheme we use is the generalized Pedersen commitment: the public key consists of the element H and the generators G_1, ..., G_n in a group, we have a randomness r, and a_1, ..., a_n are the elements we want to commit to. Besides these length-reducing commitments and batch verification, which were used before in shuffle arguments, we use a special kind of challenges: instead of picking many random challenges, the verifier picks just one challenge x, and verifier and prover both construct the Vandermonde vector (x, x^2, ..., x^N). This helps to reduce the cost dramatically and gives us the sublinear communication cost. So where do we use this? This is our situation: both parties know the input and
output ciphertexts, the public key for the encryption, and the public key for the commitments, and the prover also knows the permutation pi and the randomness R used in the re-encryption. The prover now wants to convince the verifier that he has done everything correctly, that he permuted the input ciphertexts and then re-encrypted them. To do this, the prover first commits himself to the permutation by committing to the values 1 to N in permuted order. He then gets a challenge x from the verifier, and then the prover commits himself to the permuted Vandermonde challenges, using the same permutation as before. He then gives an argument that both commitments are constructed using the same permutation and that he knows this permutation, and he also gives an argument that the output is constructed correctly using this permutation and that he knows the re-encryption factors. In more detail, the prover first commits in A to the permuted values 1 to N, which we call a_1 to a_N, and he also commits in B to the permuted Vandermonde challenges x^pi(1), ..., x^pi(N), which we call b_1 to b_N. He first gives a product argument for A and B such that both products are the same: the a_i are the permuted values 1 to N, the b_i are the permuted Vandermonde challenges, and on the other side we have just i and x^i. If the a_i and the b_i are permuted with the same permutation, these two products must be the same, since this is just a polynomial whose roots have been permuted. This part is very inexpensive, so please see the full paper for details. Secondly, the prover gives a multi-exponentiation argument that the product of the output ciphertexts raised to the permuted Vandermonde challenges equals the product of the input ciphertexts raised to the x^i, times a re-encryption factor, where this rho depends on the randomness the prover used. This part is expensive, so I will try to sketch the idea here, and we will focus only on soundness; zero-knowledge is something which is very easy and cheap to add on top
of it. We will assume for a moment that this rho equals zero, so that we don't have the re-encryption factor: we can think of the situation where we only permuted the input ciphertexts to get the output and never re-encrypted them. Then both sides have to be equal, and we will focus on this case. Some notation: our big commitment B consists of commitments B_1 to B_m, where B_1 is a commitment to the first n elements of the permuted Vandermonde challenges, and B_m a commitment to the last n elements. We arrange our ciphertexts in an m times n matrix, so we get m vectors of ciphertexts. Having the exponent vectors b_j and the ciphertext vectors C_i, we can form an inner product: raising a vector C_i to the power of a vector b_j means taking the product of the entrywise exponentiations, and this simplifies the left side of the multi-exponentiation statement to a product which runs only over m elements, which we call C. Now to the main idea. We have our vectors C_1 to C_m and our permuted Vandermonde challenges, and we can construct a matrix out of them where each entry is C_i raised to the vector b_j. If we take the product along the main diagonal, we get our C; this is the statement we want to prove. The prover can also calculate the products along the parallel diagonals, to get the values E_k and E_{-k}, and the prover now sends these values E_k to the verifier and gets a new challenge y back. The prover then opens the product of his commitments to a vector b; this should convince the verifier that the prover knows what is going on, that he knows what is inside the B_j. The verifier then computes the vector C-bar: the verifier knows the output, so he knows the vectors C_i and can calculate this product. Finally, the verifier checks whether this vector C-bar raised to the opened b equals the C which can now be constructed
out of the inputs: that is, the input ciphertexts raised to the Vandermonde challenges, times a product of the E_k. We use Vandermonde challenges in y here and here, and we have the Vandermonde challenges inside the E_k, so a lot of these values cancel out, and one can see with basic algebraic manipulation that the left-hand side must equal the right-hand side. If the two sides are equal, the verifier accepts that the prover has shuffled correctly. And we see this is efficient: the prover sends only 2m ciphertexts here and n elements of Z_q here. The verifier needs N ciphertext exponentiations to construct the vector C-bar, N ciphertext exponentiations to construct this C, and 2m ciphertext exponentiations here. So we get in total the sublinear communication cost, which depends linearly on m and n, and verifier computation of 4N: each ciphertext exponentiation consists of two group exponentiations, so we get 4N plus something sublinear. What about the prover? If we try to construct the matrix naively, entry by entry, each entry costs n exponentiations and we have m squared entries, so we would end up with m squared times n, which is m times N, ciphertext exponentiations. That is quite expensive, but luckily we are only interested in the products along the diagonals, so we don't have to compute the whole matrix. We can use techniques for multiplication of polynomials, applied in the exponents of the ciphertexts. We looked at the fast Fourier transform, which gives us the N log m exponentiations for the prover, and this doesn't add any extra round, because it is a local operation of the prover. Or we can use more interaction, where verifier and prover talk more with each other: then we need log m more rounds, but the number of exponentiations is linear. And we implemented our argument to see how it behaves, how quick it is. We used C++, and we used the
NTL library by Shoup and the GMP library, and we looked at different levels of optimization: multi-exponentiation techniques and the fast Fourier transform. For our range of parameters, the multi-exponentiation techniques work better when m is small; it takes a while until the asymptotics of the fast Fourier transform kick in. That is similar to multiplying polynomials: the polynomials have to be big before the fast Fourier transform pays off. We also looked at the extra interaction, and we used Toom-Cook to multiply along the diagonals of the matrix. Lastly, I want to compare our work. We compare it here with the Verificatum library by Wikström, using modular arithmetic where our group has an order of 160 bits, and we looked at 100,000 ciphertexts. For this result we chose m equal to 64, with two extra rounds of interaction, and we see our argument takes two minutes to verify a shuffle, while the Verificatum library takes around five minutes. This is a special choice of parameters which gives us the best value, so we can't simply say we are two times quicker than Verificatum; it depends on several variables, and Verificatum is written in Java while we use C++. But both arguments are usable in real-life applications; I mean, that is quite quick for 100,000 ciphertexts, and it is not a powerful machine. But when we look at the argument size, our argument beats the Verificatum argument by a factor of 50, so that is much, much better: in a setting where we want a very small argument size, our shuffle argument is more efficient than Verificatum and also the former works before it. That's all I want to say; thank you for your attention.