Hi, all. My name is Arantxa Zapico, and today I'm going to present joint work with Carla Ràfols called "An Algebraic Framework for Universal and Updatable SNARKs". Since the introduction of interactive proof systems in 1989, a long line of research has led us to what is nowadays the most efficient construction of a SNARK, due to Groth in 2016. And when I talk about efficiency, I mean the following: we have a proof system where a prover wants to convince a verifier that some relation holds, and we consider the work the prover has to do in order to convince the verifier, the amount of information they have to send back and forth, and the work of the verifier as well. This almost optimal construction has a drawback: in order to achieve such succinctness, the prover and verifier both need to share some information that contains a description of the relation, and this information also hides a secret. And whenever we talk about a secret, we talk about a trusted party. Thinking of the applications that SNARKs have nowadays, this condition is highly undesirable. So the community has been working on solutions, such as multi-party computation, which would be the most natural one; this still has its own drawbacks. In a multi-party computation model, many parties collaborate in order to create a secret that none of them knows completely. They send messages back and forth, and at the end of this interaction they have an SRS, which stands for structured reference string, and is the information I mentioned before that both prover and verifier have access to. But this SRS is specific to one relation, so this quite expensive computation has to be performed every time we need to use the system for a particular relation. In 2018, Groth et al. introduced an alternative to this model: the updatable model. Similarly to multi-party computation, there are many parties that collaborate in order to create this SRS.
But in this case, they don't do it interactively; they act one after the other, and each computes its part of the SRS in a verifiable manner. The output of this computation is an SRS that we are going to call universal. Why? Because it will work for any relation up to some size. And then, from that universal SRS, we can derive, in an untrusted step, a relation-dependent SRS. So we start from this universal SRS that contains the secrets, and then construct descriptions of relations from it. Since the appearance of this seminal work, several constructions of SNARKs that use a universal and updatable SRS have been proposed, and I think all of them share a common principle, which consists in breaking the construction of the SNARK into two steps. First, we build an information-theoretic object and prove its security; then, using cryptographic assumptions, we compile it into a SNARK. The information-theoretic object is what we call a polynomial holographic proof, and I'm going to talk about that in a second. The cryptographic compilation is done using a polynomial commitment. All the constructions that we are aware of in the updatable and universal model follow this blueprint; I hope we are not forgetting any. So what is a holographic proof? Here I'm going to use the notion introduced in Lunar, but similar ones, with different flavors and different names, appear in previous work. We have, as always, the prover, and a verifier. But we also have a third entity, the indexer, and the indexer outputs polynomials that describe the relation. Prover and verifier interact, the prover's messages will include polynomials as well, and the verifier, rather than having to read these polynomials, has oracle access to them.
So it can query them at arbitrary points of its choice and perform degree checks, among others. We want to construct a polynomial holographic proof for proving general relations. What is the motivation of this work? It is meant to break this information-theoretic object down a bit more: first of all, to extract the main ideas of all these very interesting constructions, for two reasons. First, we want to compare them, to see what they have in common and what their differences are, because what they have in common can be made a bit more standard, and their differences may work better or worse for a specific relation, so why not combine them? And of course, the final goal is to improve, to get more efficient constructions. Let's start from the beginning. We want to prove general computations, which we can model as NP problems, and in this talk I'm going to consider circuit satisfiability. Why? Because it has a very nice algebraic representation. In an arithmetic circuit we have three types of gates: multiplicative gates, additive gates, and gates where inputs get multiplied by a constant. The first ones we capture in a set of quadratic constraints: if we label all multiplicative gates from 1 to m, what we are going to require is that the left input of gate i times its right input equals the output of the gate. So we label with a the left inputs, with b the right inputs, and with c the outputs. Then, to capture both additive and multiplication-by-constant gates, we use linear constraints: every input will depend on previous outputs and on some coefficients that describe the circuit itself. If you remember what I said about the SRS, the intuition is that the quadratic constraints are always the same.
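To make the two constraint types concrete, here is a small sketch in Python for a toy circuit computing (x1 * x2) * x3. The circuit, the gate labelling, and the function names are my own illustration, not from the talk.

```python
# Toy circuit: (x1 * x2) * x3, with two multiplicative gates.
# Gate 1: left input x1, right input x2, output x1*x2.
# Gate 2: left input = output of gate 1, right input x3.

def wire_assignment(x1, x2, x3):
    c1 = x1 * x2              # output of gate 1
    c2 = c1 * x3              # output of gate 2
    a = [x1, c1]              # left inputs of the multiplicative gates
    b = [x2, x3]              # right inputs
    c = [c1, c2]              # outputs
    return a, b, c

def quadratic_constraints_hold(a, b, c):
    # The same for EVERY circuit with m multiplicative gates:
    # a_i * b_i = c_i for i = 1..m.
    return all(ai * bi == ci for ai, bi, ci in zip(a, b, c))

def linear_constraints_hold(a, b, c):
    # Circuit-specific: the left input of gate 2 must equal the output
    # of gate 1 (here with coefficient 1; a real circuit description
    # would store such coefficients for every wire).
    return a[1] == c[0]
```

The quadratic check never mentions the circuit's wiring; only the linear check does, which is exactly why the linear constraints need circuit-specific constants.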
Every circuit with m multiplicative gates has the same set of quadratic constraints, but the linear constraints include constants that describe the circuit. This is why we need to derive, from the universal SRS, a specific one that contains a succinct description of the circuit: these constants are not something general. So, algebraically, the prover wants to convince the verifier that there is an assignment that satisfies a specific circuit. What the prover wants to show is that for some vectors a, b, c of size m, and given matrices F and G that describe the circuit, two things happen. The first one is that the Hadamard product of a and b equals c; this entry-wise product captures all the quadratic constraints. The second is that some linear relation holds between the matrices F and G and the witness vectors: basically, the prover wants to prove that (a, b, c) is in the space orthogonal to the row space of a matrix W. Why? Because what this matrix-vector equation is saying is that element i of vector a equals a linear combination of the elements of vector c, with coefficients given by row i of matrix F, and similarly for b and G. So let's look at this a bit more in detail. How can we prove that the vector (a, b, c) is in this space? Well, we could take every row of W, compute its inner product with the witness vector, and check that all of them are zero. But this would require proving 2m relations, and we are pursuing succinctness, so this is far from optimal. Instead of checking one vector against every generator of the subspace, we can sample one random vector in the row space, using random coefficients, and then just check one inner product. So our prover, to convince the verifier that the circuit is satisfied, has to prove the Hadamard product relation.
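The step from checking every row of W to checking one random combination can be sketched as follows. This is a minimal illustration of the probabilistic idea only, with a made-up matrix and field modulus, not the actual protocol.

```python
import random

p = 2**61 - 1  # illustrative prime modulus

def inner(u, v):
    return sum(x * y for x, y in zip(u, v)) % p

def naive_check(W, v):
    # One inner product per row of W: 2m checks in the talk's setting.
    return all(inner(row, v) == 0 for row in W)

def probabilistic_check(W, v, rng):
    # Sample one random vector d in the row space of W and check a
    # single inner product; a vector outside the orthogonal space
    # passes only with probability about 1/p over the coefficients.
    alphas = [rng.randrange(1, p) for _ in W]
    d = [sum(al * row[j] for al, row in zip(alphas, W)) % p
         for j in range(len(v))]
    return inner(d, v) == 0
```

For example, with rows that enforce v0 = v2 and v1 = v2, a vector like (5, 5, 5) passes both checks, while (5, 6, 5) fails the single random check with overwhelming probability.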
It also has to prove that the inner product between (a, b, c) and some random vector d in that row space equals zero. In the world of vectors and matrices, which is nice, the intuition came very fast, but now we have to move to the world of polynomials. For that, we are going to define a set H of size m inside our field, where m is the number of multiplicative gates, and then define the Lagrange polynomials: λ_i is the polynomial of degree m − 1 that vanishes at every point of H except h_i, where it takes value 1, and t(X) is the polynomial that vanishes at every point of H. With these we have a natural encoding: we take a vector of size m and write the linear combination of the elements of this vector with the Lagrange polynomials. The output is a polynomial that, when we evaluate it at h_i, gives us element i of the vector. So, first of all, we have the intuition of how to prove circuit satisfiability in the algebraic world, and we have the tools to move to polynomials. Now let's wrap up a bit: what do we need? We need prover and verifier to sample this vector d, and then compute its encoding as a polynomial. Then, from the encodings of the witness vectors and the encoding of d, two things have to be proven. First, that the Hadamard product of a and b equals c, and there is a pretty standard way to do it when we work with Lagrange polynomials: we can write it as a divisibility problem. So, for example, here the prover will send polynomials A, B, C and H, and the verifier has to check that A times B minus C is divisible by the vanishing polynomial. And then the prover also has to convince the verifier that the inner product between this random vector d and the witness vector is zero.
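The Lagrange encoding and the divisibility check can be sketched over a small prime field. The field size, the domain H, and all helper names are illustrative choices of mine, not the talk's notation.

```python
p = 97            # small illustrative prime field
H = [1, 2, 3]     # evaluation domain of size m = 3

def pmul(f, g):
    out = [0] * (len(f) + len(g) - 1)
    for i, x in enumerate(f):
        for j, y in enumerate(g):
            out[i + j] = (out[i + j] + x * y) % p
    return out

def padd(f, g):
    n = max(len(f), len(g))
    f = f + [0] * (n - len(f)); g = g + [0] * (n - len(g))
    return [(x + y) % p for x, y in zip(f, g)]

def psub(f, g):
    return padd(f, [(-y) % p for y in g])

def pdiv(f, g):
    # polynomial long division over F_p; returns (quotient, remainder)
    f, q = f[:], [0] * max(1, len(f) - len(g) + 1)
    inv = pow(g[-1], -1, p)
    for i in range(len(f) - len(g), -1, -1):
        coef = f[i + len(g) - 1] * inv % p
        q[i] = coef
        for j, y in enumerate(g):
            f[i + j] = (f[i + j] - coef * y) % p
    return q, f

def interpolate(vec):
    # Lagrange encoding: the polynomial taking value vec[i] at H[i]
    res = [0]
    for i, (hi, yi) in enumerate(zip(H, vec)):
        li, denom = [1], 1
        for j, hj in enumerate(H):
            if i != j:
                li = pmul(li, [(-hj) % p, 1])
                denom = denom * (hi - hj) % p
        scale = yi * pow(denom, -1, p) % p
        res = padd(res, [x * scale % p for x in li])
    return res

def vanishing():
    t = [1]
    for h in H:
        t = pmul(t, [(-h) % p, 1])   # t(X) = prod_{h in H} (X - h)
    return t
```

If a_i * b_i = c_i for every i, then A(X)B(X) − C(X) vanishes on all of H, so dividing it by t(X) leaves remainder zero; tampering with a single entry of c breaks the divisibility.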
We give a scheme for this inner product relation in our paper; I'm not going to get into the details, but it has a very similar structure to the Hadamard product scheme. So the prover wants to convince the verifier of this, and we already know how to perform Hadamard products and inner products, that is, how the prover can show the verifier that these two relations, the quadratic one and the linear one, are satisfied. The linear relation is divided into steps: one of them is the core, which is the inner product; the other is how to sample this vector d and compute its encoding. This last step is what we are going to call checkable subspace sampling. The prover has to sample vector d, because we cannot ask the verifier to do it; that would take linear time. And then the prover has to prove to the verifier the correctness of the sampling. To sample, we need a vector of 2m coefficients. And because the prover is trying to convince the verifier, the prover cannot choose these coefficients itself: it cannot be allowed to choose the vector in the row space of W that is going to be checked against the witness. But we don't want the verifier to send 2m field elements either. We solve this problem by including in the description of the relation some vector of polynomials; you can think of it as the monomial basis, or the Lagrange polynomials for a set of size 2m. The prover is going to evaluate this vector of polynomials at one element, and so we will get the 2m coefficients; but the point of evaluation will be sent by the verifier. So the verifier sends just one element, we use it to evaluate 2m polynomials and generate the randomness for the linear combination, and then the prover performs the sampling itself. Now, next step: how do we get the encoding of vector d? We already know how to find vector d.
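A minimal sketch of that compression, assuming the monomial basis as the vector of polynomials (the talk mentions both the monomial and the Lagrange option); the matrix, modulus, and function names are illustrative.

```python
p = 97

def coefficients_from_challenge(y, n):
    # With the monomial basis, the i-th polynomial is X^i, so evaluating
    # the whole vector at the verifier's single challenge y yields the
    # n = 2m coefficients (1, y, y^2, ..., y^(n-1)).
    return [pow(y, i, p) for i in range(n)]

def sample_d(W, y):
    # d = sum_i alpha_i * row_i(W): one vector in the row space of W,
    # fully determined by a single field element from the verifier.
    alphas = coefficients_from_challenge(y, len(W))
    return [sum(al * row[j] for al, row in zip(alphas, W)) % p
            for j in range(len(W[0]))]
```

The verifier's communication is one field element, yet the prover cannot bias the sampled vector, which is the whole point of the construction.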
The encoding of vector d that we want looks like this: because d has size 3m, we need the Lagrange polynomials that interpolate a set of size 3m as well, and our encoding is of this form. But what is vector d? What are its elements? Well, vector d is a linear combination of the rows of W with these alphas. So we batch all the rows into just one vector, and then we use the Lagrange polynomials to batch the columns, that is, the elements of vector d. But at the same time, these alpha coefficients are evaluations of polynomials at some point y. And if you think about it, this is the natural encoding of a matrix: we compress all the rows with one set of polynomials and all the columns with another set, and in order to recover an element of W, we basically evaluate both. So d, in the end, is a partial evaluation of a bivariate polynomial that describes matrix W, a polynomial that naturally encodes it. This may already seem like a solution, but here we have a problem, and here is where we need to focus, because this is the bottleneck of all constructions. This polynomial encoding of W has two variables, each of degree linear in m. The verifier cannot evaluate it: it would take quadratic work, and we don't want the verifier's work even to be linear. And we cannot include it in the relation-dependent SRS, because we would need the universal SRS to be quadratic in size to include all the combinations of powers of X and Y. The goal is to find a way for the verifier to obtain a partial evaluation of this polynomial together with a proof of its correctness; or maybe not a partial evaluation, maybe an evaluation at two points sent by the verifier, but then with a proof of correct evaluation. And in general W is dense: it has a number of non-zero elements quadratic in m. This could get super tricky.
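The "compress rows with one basis, columns with another" picture can be sketched like this. The domains, toy sizes, and names are my own choices, and for simplicity I use Lagrange bases in both directions.

```python
p = 97
ROWS = [1, 2]         # domain indexing the 2m rows of W (toy size 2)
COLS = [1, 2, 3]      # domain indexing the 3m columns (toy size 3)

def lag(dom, i, x):
    # i-th Lagrange basis polynomial of `dom`, evaluated at x
    out = 1
    for j, hj in enumerate(dom):
        if j != i:
            out = out * (x - hj) * pow(dom[i] - hj, -1, p) % p
    return out

def W_encode(W, x, y):
    # Natural bivariate encoding:
    # W(X, Y) = sum_{i,j} W[i][j] * mu_i(Y) * lambda_j(X)
    return sum(W[i][j] * lag(ROWS, i, y) * lag(COLS, j, x)
               for i in range(len(W)) for j in range(len(W[0]))) % p

def d_encode(W, y, x):
    # Partial evaluation at Y = y: first form d = sum_i mu_i(y) * row_i,
    # then encode d with the column basis and evaluate at x.
    alphas = [lag(ROWS, i, y) for i in range(len(W))]
    d = [sum(al * W[i][j] for i, al in enumerate(alphas)) % p
         for j in range(len(W[0]))]
    return sum(dj * lag(COLS, j, x) for j, dj in enumerate(d)) % p
```

By bilinearity the two computations agree everywhere, and evaluating at a pair of domain points recovers a single entry of W, which is the "evaluate both" remark in the talk.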
But there are some assumptions we can make on the shape of W; previous work made them, and they are very fair assumptions. We are going to talk about that in a while. So, to prove circuit satisfiability, we start from a checkable subspace sampling, which has the structure of an algebraic holographic proof; then, recall, we add the inner product relation in order to prove the linear constraints, and we add the Hadamard product relation in order to finally prove circuit satisfiability. This is how we break down the information-theoretic object, but for time constraints I'm going to focus only on the first primitive, the checkable subspace sampling, which is our main contribution. Because it has the structure of a polynomial holographic proof, we have the indexer, which in an offline phase performs some computation to output polynomials that describe matrix W. Then, in an online phase, prover and verifier interact in two steps. The first one is the sampling: the verifier sends a challenge y, and the prover replies with the encoding of the vector d sampled according to y. Then the prover has to convince the verifier that the sampling has been performed correctly. At the end, the verifier accepts or rejects depending on whether d(X) has been correctly computed. Again, this is the first step to construct an updatable and universal SNARK, and it's implicit in all previous constructions; that's the core of this work. In Sonic, it's implicit in the signatures of correct computation: the partial evaluation of their bivariate polynomial S is indeed a sampling in the row space of the matrix that describes the circuit. Then they present two constructions. In the succinct construction, they assume that W can be written as a sum of permutation matrices, which is a reasonable assumption.
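To see why the sum-of-permutation-matrices assumption helps, here is a hedged sketch: if W = P_1 + ... + P_k, then multiplying a row vector of coefficients by W costs about k times m operations instead of m squared. The representation and names are my own illustration, not Sonic's actual algorithm.

```python
p = 97

def route(alphas, perm):
    # alpha^T P for a permutation matrix P: entry alphas[i] just moves
    # to column perm[i] -- no multiplications at all.
    d = [0] * len(perm)
    for i, j in enumerate(perm):
        d[j] = (d[j] + alphas[i]) % p
    return d

def sample_d_structured(alphas, perms):
    # alpha^T (P_1 + ... + P_k) = sum of k permuted copies of alpha.
    d = [0] * len(perms[0])
    for perm in perms:
        d = [(x + y) % p for x, y in zip(d, route(alphas, perm))]
    return d

def dense_from_perms(perms, n):
    # rebuild the dense W, only for comparison against the fast path
    W = [[0] * n for _ in range(n)]
    for perm in perms:
        for i, j in enumerate(perm):
            W[i][j] += 1
    return W
```

The cost of the structured sampling is the number of permutation terms times the vector length, which is exactly why the complexity of that construction depends on how many permutation matrices are needed.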
And the complexity depends on how many permutation matrices we need to compute W. Then they present an amortized checkable subspace sampling that is unrestricted, makes no assumptions on the structure of W, and is super efficient. In Marlin and Lunar, they use a very smart encoding of sparse matrices that is presented in Marlin, which relies on, again, a fair assumption, because what they assume is that the number of non-zero elements is actually linear in the size of the circuit. Their protocols are super efficient, but they have quite a large SRS. In our work, we first start from this algebraic intuition and create a CSS scheme that is not very efficient by itself, but works very well with dense rows. Because this is linear algebra, we want to sample a vector in the row space of a matrix, so we can use one CSS scheme, for example, to handle the dense rows, and then, if the rest of the matrix is sparse, why not use Marlin or Lunar for it. We also present a construction that is inspired by Marlin but is built from simpler building blocks, and turns out to be as efficient as the best construction in Lunar, but with a smaller SRS. And then, in an extended version of our work, while trying to include Plonk in our framework, we somewhat combined the ideas of Lunar and Plonk and came up with our best construction, which considers circuits with limited fan-out; again, a fair assumption in the applications of SNARKs. Why use CSS, why think of this information-theoretic object, this holographic proof, in this way? Because decomposing the construction of a scheme into many steps is always useful. Also, it comes with an algebraic intuition that I think we all feel comfortable working with; it simplifies things.
And the framework itself captures several constructions, I would say all of them, in the extended version of our work. This CSS is the bottleneck; it is what all these constructions differ on, where the smart design ideas come to the table. So isolating it will allow us to, again, compare, combine, and then improve: we know where to focus, and, as I mentioned before, we can then mix these works. So that's all from me. Thank you for listening. I hope this was useful. If you have any doubts about our results, please don't hesitate to contact us.