Hi everyone, my name is Khanh. I'm a PhD student at IBM Research Zurich and ETH Zurich, and today I'm going to talk about SMILE: set membership from ideal lattices with applications to ring signatures and confidential transactions. This is joint work with Vadim Lyubashevsky and Gregor Seiler.

Okay, let's start with what a set membership proof is. It is pretty simple, and we have an interesting example here. Suppose we have a set S which consists of four socks: the green one, the red one, the blue one and the orange one. We want to prove that we have a sock in one of these colours without revealing which colour it is; let's say it's green. So formally, we just want to prove knowledge of an element s which belongs to the public set S, without revealing any information about which element it is.

The applications of set membership proofs can be as follows. A set membership proof can easily be transformed into a one-out-of-many proof, as defined by Groth and Kohlweiss. This is the type of proof where we want to show that one of the elements of the set is a commitment to zero, and there is a straightforward application to ring signatures, because public keys, especially in the lattice setting, can be thought of as commitments to zero. From ring signatures there are then applications to confidential transactions as well as electronic voting.

In terms of lattice-based ring signatures, the best scheme to date which follows this path from the previous slide is the scheme by Esgin et al., which has pretty nice signature sizes: even for large rings of size 2^21 users, the signatures are below 150 kilobytes. More recently, at Asiacrypt 2020, we have the Falafl scheme, which proposes a new logarithmic-size proof; its size for 2^21 users is even less than 40 kilobytes. I just want to mention that asymptotically these two schemes are logarithmic in the size of the ring. We also have the Raptor scheme which, as you can see, is five megabytes even for 2^12 users, which suggests that it scales linearly in the number of users. In this work we propose a lattice-based ring signature which follows the path from the previous slide, and even for a big ring like 2^21 users we achieve around 22 kilobytes.

What I wanted to mention is that this is really an active area of current research, because at this conference, CRYPTO 2021, there are two more lattice-based ring signatures. The first one proposes short signature sizes for small rings of size between four and two thousand; at the time of recording, this paper is not available on ePrint yet, but the abstract suggests that the size scales linearly in the number of users. Then we also have a second construction, a ring signature from plain LWE in the standard model. I just wanted to mention these two other papers.

In terms of our contributions, I have stated them explicitly here. The core technical contribution of this paper is definitely the new lattice-based set membership proof, which can then easily be turned into a logarithmic-size ring signature. Later on we follow the MatRiCT framework defined by Esgin et al. at CCS 2019 and construct a Monero-like confidential transaction system.
In terms of the proof size, we reduce it by a factor of four to ten, but please look at the paper for more details about the confidential transaction system, because in this presentation we will focus on the main technical contribution, the new lattice-based set membership proof.

Okay, so let's start with our approach, and let's go back to our interesting example. Suppose we have the green sock and we want to prove that it belongs to the set S. What we do at the beginning is put all the elements of the set S into a matrix; in this case we get a matrix with four columns. Then we can define the index vector v, which here is (0, 1, 0, 0), such that the matrix multiplied by the vector (0, 1, 0, 0) is equal to the green sock that we have.

More formally, we have a matrix in which each column corresponds to an element of the set, times an index vector which, supposing our element is s_i, is 1 in the i-th position and 0 in all the other entries; the matrix times the index vector is then equal to s_i. For notational purposes, let this index vector be v and call the public matrix P, so equivalently we write P v = s_i.

Right, so the naive solution can be described as follows. I am not saying it is completely bad, because it might be fine for small ring sizes like 32, say, but we will see its disadvantages later on. We first commit to v and to s_i, and then we prove that P v = s_i, which is the previous equation. As you can see, this is just a simple linear relation between committed messages, because we committed to both v and s_i; I will call it a linear relation. But then we also have to prove that v is well defined. So what does that mean?
It means we have to prove that v has 0/1 entries and exactly one 1. We do it as follows. We prove that v has binary coefficients, which can be expressed as v ∘ (v − 1) = 0, where the multiplication is component-wise; proving this equation is just a multiplicative relation on the committed messages, because v is committed. Then, to prove that v has exactly one 1, we prove the simple linear relation that the matrix consisting of one row of ones, times v, equals 1.

Now, to prove all these linear and multiplicative relations we can apply what I will call the LANES framework, where each letter corresponds to an author of a paper which contributes to the framework, starting with the paper by Attema et al. at CRYPTO 2020. This framework allows us to prove linear and multiplicative relations efficiently in terms of concrete sizes.

So what's wrong with this? What are the weaknesses of this approach?
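To make the encoding and the naive checks concrete, here is a small Python sketch. It works with plain integer arithmetic in the clear: the set S is made up for illustration, and the commitments and the zero-knowledge layer are of course omitted.

```python
# The set S = {s_1, ..., s_n} becomes the columns of a public matrix P,
# and the prover's element is selected by a one-hot index vector v.
# The naive proof then reduces to three relations on committed values:
#   1. linear:          P * v == s_i
#   2. multiplicative:  v o (v - 1) == 0    (every entry is 0 or 1)
#   3. linear:          <(1,...,1), v> == 1 (exactly one 1)

def mat_vec(P, v):
    """Multiply a matrix (list of rows) by a vector."""
    return [sum(p * x for p, x in zip(row, v)) for row in P]

S = [[1, 0], [4, 7], [2, 2], [9, 5]]   # four elements, as length-2 vectors
P = [[S[j][r] for j in range(len(S))] for r in range(len(S[0]))]

i = 1                                       # prover's secret index
v = [1 if j == i else 0 for j in range(len(S))]

assert mat_vec(P, v) == S[i]                # relation 1: P v = s_i
assert all(x * (x - 1) == 0 for x in v)     # relation 2: binary entries
assert sum(v) == 1                          # relation 3: exactly one 1
```

Dropping any of the three relations breaks soundness: for example, v = (1, 1, 0, 0) would "open" to the sum of two set elements if relation 3 were not enforced.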
Well, we will need to commit to the whole vector v of length n. If we consider a big set, of size say 2^20, then this method does not really work well in terms of efficiency: by applying the LANES framework we end up with a proof size of O(n), basically linear. So, as I said, for big sets, or later on for ring signatures with 2^20 users, this approach would be quite bad.

So how do we make it more efficient? In order to achieve logarithmic size we apply a trick which was used in many previous works, starting from Groth and Kohlweiss, and Bootle et al. Suppose that n, the size of the set, is equal to l^m. The observation is that the index vector v, which has exactly one 1 and the rest zeros, can be uniquely tensor-decomposed into smaller vectors v_1, v_2, ..., v_m, where each v_j is binary, has exactly one 1, and is of length l, and there are m of them.

First, a remark on notation: we need to fix the bracketing of this operation, so what we mean is v_1 ⊗ (v_2 ⊗ (v_3 ⊗ (... ⊗ (v_{m−1} ⊗ v_m)))). Let's see quick examples. If we have (0, 0, 1, 0), then we can write it as (0, 1) ⊗ (1, 0). A little more complicated example: if we have the vector (0, 0, 1, 0, ..., 0), where the 1 is in the third position, then we can decompose it as (1, 0) ⊗ ((0, 1) ⊗ (1, 0)). The observation is that every index vector which satisfies these conditions can be decomposed in this way.

Okay, so the logarithmic-size solution can be described as follows. We commit to v_1 up to v_m, these smaller decompositions.
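As an aside, the decomposition just described is easy to compute: writing the (0-based) index of the single 1 in base l gives the m one-hot factors. A small Python sketch with illustrative parameters:

```python
# A one-hot vector of length n = l**m factors uniquely as a tensor
# (Kronecker) product of m one-hot vectors of length l: write the index
# of the single 1 in base l, and turn each digit into a one-hot vector.

def kron(u, v):
    """Kronecker product of two vectors: (u_1*v, u_2*v, ...)."""
    return [a * b for a in u for b in v]

def decompose(i, l, m):
    """Split index i (0-based) into m one-hot vectors of length l."""
    digits = []
    for _ in range(m):
        digits.append(i % l)
        i //= l
    digits.reverse()                 # most significant digit first
    return [[1 if j == d else 0 for j in range(l)] for d in digits]

l, m = 2, 3                          # n = 2**3 = 8
i = 5                                # position of the single 1
v = [1 if j == i else 0 for j in range(l**m)]

vs = decompose(i, l, m)              # m one-hot vectors of length l
prod = vs[0]
for vj in vs[1:]:
    prod = kron(prod, vj)
assert prod == v                     # v = v_1 (x) v_2 (x) v_3
```

For the first example from the slide, `decompose(2, 2, 2)` gives `[[0, 1], [1, 0]]`, i.e. (0, 0, 1, 0) = (0, 1) ⊗ (1, 0).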
There are m of them, and we also commit to s_i. Then, instead of proving P v = s_i, we have to prove that P (v_1 ⊗ v_2 ⊗ ... ⊗ v_m) = s_i, and this is definitely not a linear relation. Next we have to prove that each v_j is well defined, which is basically the same as before: we prove that v_j has binary 0/1 coefficients, meaning v_j ∘ (v_j − 1) = 0, which is just a multiplicative relation, and we prove that v_j has exactly one 1 by proving that the matrix consisting of one row of ones, times v_j, equals 1.

As you can see, this solution is much better than the previous one, because we only commit to the smaller vectors v_j and not to the whole v. Hopefully I can convince you that it is the more successful solution. The one thing left to describe is how to actually prove this type of equation, because it is not a linear equation.

Okay, so let's have a look at the intuition. In order to prove this type of equation, the matrix times the tensor product equal to something, we will use the following two facts. First of all, consider the inner product of P v with some vector w.
Well, actually there is nothing to prove here; it is a simple observation. The first fact is that the inner product of P v and w, for vectors v and w, is equal to the inner product of v and P^T w. This is a very simple observation, but it is a key component of proving linear relations in the LANES framework, especially in the paper by Esgin et al. at Asiacrypt 2020.

The second fact is the following: if we have vectors u and v, then the inner product of the tensor product u ⊗ v with w can be rewritten. The first step is just the definition of the tensor product: u ⊗ v is (u_1 v, u_2 v, ..., u_l v). The vector w, which has length l^2 in this particular example, can be split into smaller vectors w_1 of length l, w_2 of length l, up to w_l of length l. This first step is pretty clear. To explain the next equality, we use the simple fact that the inner product of u_1 v and w_1 is u_1 times the inner product of v and w_1, because u_1 is a constant. That is why we can pull u_1, u_2, ..., u_l out to one side, and we end up with the inner product of (u_1, u_2, ..., u_l) with the vector whose entries are the inner product of v and w_1, the inner product of v and w_2, and so on up to the inner product of v and w_l. Actually, there is not much magic going on here.
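Both facts are plain arithmetic identities, so they can be checked numerically. Here is a small sketch with random integer vectors; the dimensions are chosen arbitrarily for illustration.

```python
# Numeric sanity check of the two inner-product facts, using small
# random integer vectors (no commitments or lattices involved).
import random

rng = random.Random(0)

def inner(a, b):
    return sum(x * y for x, y in zip(a, b))

def mat_vec(M, x):
    return [inner(row, x) for row in M]

def kron(u, v):
    return [a * b for a in u for b in v]

def rand_vec(n):
    return [rng.randrange(-5, 6) for _ in range(n)]

# Fact 1: <P v, w> = <v, P^T w>
P = [rand_vec(4) for _ in range(3)]          # a 3 x 4 matrix
v, w = rand_vec(4), rand_vec(3)
Pt = [[P[r][c] for r in range(3)] for c in range(4)]
assert inner(mat_vec(P, v), w) == inner(v, mat_vec(Pt, w))

# Fact 2: <u (x) v, w> = <u, W v>, where W is w split into
# len(u) consecutive rows of length len(v).
u, v = rand_vec(3), rand_vec(4)
w = rand_vec(len(u) * len(v))
W = [w[j * len(v):(j + 1) * len(v)] for j in range(len(u))]
assert inner(kron(u, v), w) == inner(u, mat_vec(W, v))
```

Both assertions hold identically, for any choice of vectors, which is exactly why the verifier can apply these rewrites without any interaction.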
It is just calculation and moving these vectors and constants around. Once we are at this step, I can write the first factor as the vector u, and the second vector as a matrix–vector product, where the matrix has rows w_1, w_2, ..., w_l and is multiplied by the vector v. Let me call this matrix W. In the end we have: the inner product of u ⊗ v with w equals the inner product of u with W v.

Okay, so we will use these two facts. The intuition is as follows. For simplicity, suppose that the set S consists of vectors over Z_q, say of length k. In order to prove the equation P (v_1 ⊗ v_2 ⊗ ... ⊗ v_m) = s_i, we prove that, for a random challenge vector γ picked by the verifier, the inner product of P (v_1 ⊗ ... ⊗ v_m) − s_i with γ is equal to zero.

So how do we prove that this inner product is zero? First of all, we split it: the inner product can be written as the inner product of P times the tensor product with γ, minus the inner product of s_i and γ. Now, using Fact 1, we can move the matrix P to the other side by taking the transpose, so in the end we have the inner product of v_1 ⊗ v_2 ⊗ ... ⊗ v_m with P^T γ, minus the inner product of s_i and γ.
This first part comes from the previous equation. In order to use Fact 2, we notice that the vector v_1 ⊗ ... ⊗ v_m can be written as v_1 tensored with the large vector v_2 ⊗ v_3 ⊗ ... ⊗ v_m. We use Fact 2 here to write this expression as the inner product of v_1 with W times this second vector v_2 ⊗ v_3 ⊗ ... ⊗ v_m, where W is a matrix we will discuss in a moment. So what is W? The vector w here corresponds to P^T γ, and we simply apply Fact 2: W is the matrix whose rows are w_1, w_2, ..., w_l. We also need to take care of the second term, the inner product of s_i and γ.

Okay, I have just moved the equations to the top. Having applied these two facts, the next thing to do is to commit to x, which is equal to W times the tensor product v_2 ⊗ v_3 ⊗ ... ⊗ v_m; that is this term here. Then we prove the following two things. First, we prove that x is well defined, i.e. x = W (v_2 ⊗ v_3 ⊗ ... ⊗ v_m). Second, which is the thing we wanted to prove from the beginning, we prove that the inner product is zero; as you can see, this term here is indeed equal to the inner product that we want to show is zero.

To prove the second part we can simply apply the LANES framework, because v_1 is committed, x is committed, s_i is committed, and these types of equations form a linear/multiplicative relation that can be proven in that framework. The first part is more interesting, because it is still not a linear relation. But it is an equation of the same form as the one we started with. The main difference is that we now have a commitment to x.
We have the public matrix W, and we have the tensor decomposition v_2 up to v_m, but there are only m − 1 vectors here. This suggests that to prove this relation we can run the same reasoning as before. So we have a recursive argument: we run it recursively, and at some point, after m − 1 recursions, we end up in the situation where we commit to x and want to prove that x equals some different W times just v_m. This equation in the end is a linear relation, and then we can apply the LANES framework.

One thing to mention is that in each step we commit to a new x, and one might ask whether committing to one more x in every recursion step is costly. As you can see, even in the first step, the matrix W has exactly l rows, because its rows are w_1, w_2, ..., w_l. This implies that the vector x will only have length l, where l is the small base, just to recall. So it doesn't really cost much, even though we commit to a new x in each step.

There are a few details which are hidden here for presentation purposes. First of all, if the equation doesn't hold, i.e. if P (v_1 ⊗ ... ⊗ v_m) is not equal to s_i, then the inner product that we wanted to prove is zero is still zero with probability 1/q over the verifier's challenge. In our setting we work with q around 2^32, so this soundness error is not negligible.
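Before addressing that, here is an arithmetic-only sketch of the recursive folding just described, over toy parameters (l = 2, m = 3, a made-up set S, and a toy prime q; the challenge γ is a single random vector mod q, which is exactly where the 1/q soundness error comes from). The real protocol commits to each x and runs the LANES framework on top, none of which is modeled here. Each round uses Fact 2 to peel off one v_j against a fresh random challenge, until only a linear relation in v_m remains.

```python
# Sketch of the recursive reduction (arithmetic only, no ZK layer):
# the claim  P (v_1 (x) ... (x) v_m) = s_i  is checked against a random
# challenge gamma, then repeatedly folded with Fact 2.  Each round peels
# off one v_j and produces a short vector x of length l; after m - 1
# rounds only the linear relation in v_m remains.
import random

rng = random.Random(1)
q = 2**31 - 1     # toy prime modulus (the paper's q is around 2**32)

def inner(a, b):
    return sum(x * y for x, y in zip(a, b)) % q

def mat_vec(M, x):
    return [inner(row, x) for row in M]

def kron(u, v):
    return [a * b for a in u for b in v]

l, m = 2, 3                               # set size n = l**m = 8
k = 3                                     # elements are vectors in Z_q^k
S = [[rng.randrange(q) for _ in range(k)] for _ in range(l**m)]
i = 5
vs = [[0, 1], [1, 0], [0, 1]]             # v_1 (x) v_2 (x) v_3 = e_5

P = [[S[j][r] for j in range(len(S))] for r in range(k)]
Pt = [[P[r][c] for r in range(k)] for c in range(len(S))]

gamma = [rng.randrange(q) for _ in range(k)]
w = mat_vec(Pt, gamma)                    # P^T gamma, length l**m
target = inner(S[i], gamma)               # <s_i, gamma>

rest = vs[1:]
for v_front in vs[:-1]:
    chunk = len(w) // l
    W = [w[j * chunk:(j + 1) * chunk] for j in range(l)]
    t = rest[0]
    for r in rest[1:]:
        t = kron(t, r)
    x = mat_vec(W, t)                     # committed in the real protocol
    assert inner(v_front, x) == target    # the claim survives the fold
    # Next round proves x = W * (remaining tensor product): a fresh
    # challenge reduces it to the same shape with one factor fewer.
    gamma2 = [rng.randrange(q) for _ in range(l)]
    Wt = [[W[r][c] for r in range(l)] for c in range(chunk)]
    w = mat_vec(Wt, gamma2)
    target = inner(x, gamma2)
    rest = rest[1:]

# Base case: a plain linear relation <v_m, w> = target.
assert inner(vs[-1], w) == target
```

Note that each x has length l, matching the point above that the extra commitments per round are cheap.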
So we work over extension fields to boost the soundness and achieve a negligible soundness error; that is one thing to mention. The second thing, which is the main drawback of our construction, is that the verifier runtime is linear in the size of the set. This is clear just by looking at the intuition from the previous slide: in order to compute the matrix W, the verifier has to compute the vector P^T γ, and P has exactly n columns, so the verifier runtime is linear. So even though we have small signature sizes for, say, 2^21 users, the runtime might not be that great.

Okay, so thank you very much for watching. This concludes my talk, and here is the link to the full version of the paper. Thanks!