Thank you, Anca. Can you see my screen? So, as Anca mentioned before, I'm going to talk about the efficiency and flexibility of linear map vector commitments. This is the result of joint work with Matteo Campanelli and Anca from Protocol Labs and Carla Ràfols from Universitat Pompeu Fabra. The motivation of our work is to develop vector commitment schemes for the setting of proofs of space. To start, we need to fix our definition, the framework we are going to work in. For the reasons that Anca explained before, we would like a definition that includes individual openings and subset openings, but we also want to be able to perform computation on the committed data. So we want functional vector commitments: we want to be able to perform computations on the vectors that are inside the commitments. For this, we choose the framework of linear map commitments by Lai and Malavolta, presented in 2019. And, again for reasons the previous speakers have explained, we want to add updatability and aggregation to our linear map commitments. A bit of a roadmap: I'm going to start with the definitions, then explain our constructions, and then some efficiency results we got for the special case of proofs of space. So what is a linear map vector commitment? It's a scheme where we can apply linear maps to our vector. These linear maps take as input our vector of size m and give us a vector of size n. And because we are not working just with vectors but with commitments, and we live in the succinct world, we want to start from a commitment to our vector b and get a commitment to the linear map applied to it. It's easy to see that when n equals one, these linear functions can be represented by vectors.
In this vector f we have each of the coefficients of the function f as elements, and performing an evaluation of f on b is basically performing an inner product between the vector representing f and our vector b. For the case of an arbitrary n, what we do is something similar, but instead of an inner product we do a matrix-vector product, with a matrix whose rows are the independent functions corresponding to each element of the output. The case n = 1 is the one that captures individual openings: we can set f to be the function that, represented as a vector, is the canonical vector e_j, and doing the inner product, that is, evaluating this function on b, gives us position j. For subset openings we need to move to a bigger n, with n the size of the subset: basically, we open each of the positions in our subset independently, and we get the subset opening as a vector. So we already have our definition of linear map vector commitments, which, as I said, allows individual openings, subset openings, and also evaluating functions on the vector. I will go a bit fast over updatability because it has already been explained by Dario, but the idea is that in a proof of space, provers will have to store something: a commitment to the vector b, the digest, but possibly also proofs. In the proof of space case these will be individual proofs, but we want to be a bit more general. So π_f1 here is a proof of the opening of b evaluated at function f1. When we update our vector and it changes, say, in one position by some factor delta, we want to be able to take the previous commitment to b and obtain a commitment to our new vector b′ without having to compute it from scratch.
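To make the definitions concrete, here is a minimal Python sketch (toy field arithmetic and made-up values, not the commitment scheme itself) of how a linear map F: F_q^m → F_q^n is a matrix whose rows are inner-product functions, how the canonical vector e_j recovers an individual position, and how a subset opening stacks several canonical vectors:

```python
# Toy illustration of linear maps on vectors; q, m, b are made up.
q = 97                      # a small prime standing in for the field order
m = 4
b = [3, 1, 4, 1]            # the committed vector

def inner(f, v):            # the n = 1 case: an inner product <f, v>
    return sum(fi * vi for fi, vi in zip(f, v)) % q

def apply_map(M, v):        # general n: matrix-vector product, one inner product per row
    return [inner(row, v) for row in M]

def e(j):                   # canonical vector e_j
    return [1 if i == j else 0 for i in range(m)]

assert inner(e(2), b) == b[2]                    # individual opening of position 2
S = [0, 3]                                       # a subset of positions
assert apply_map([e(j) for j in S], b) == [b[j] for j in S]  # subset opening
```

The real scheme performs these evaluations on committed data, but the functions being opened are exactly these.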
And we want the same for the precomputed openings: we want to update them, not compute them from scratch. For aggregation of proofs, we consider a bigger spectrum. Keep in mind that aggregating proofs of vector commitments, in the setting we work in, means aggregating polynomial equations, because we work with vectors encoded as polynomials. This aggregation can be same-commitment or cross-commitment. Same-commitment aggregation allows us to aggregate openings of different functions evaluated on the same vector; the goal is to have only one proof that proves both openings. In cross-commitment aggregation, we want to aggregate proofs that correspond to different vectors and may or may not correspond to the same function; again, the goal is to have just one proof that convinces a verifier of all these claims. Independently from this, aggregation can be one-hop or unbounded. In one-hop aggregation we can only aggregate fresh proofs, and then we have to stop: once we have aggregated, that's all we can do. In unbounded aggregation we can aggregate fresh proofs, but we can also aggregate already aggregated proofs. Why do we call this unbounded and not incremental? Because in incremental aggregation the order doesn't matter, while in our result the order of the aggregations does matter; we will see that later. On the other hand, we distinguish between two kinds of aggregation when talking about subvector openings or individual position openings, so this is not applicable to functions: aggregation can be either native or non-native. What do we mean? In native aggregation of openings to subsets, the aggregated proof looks exactly like a fresh opening for the union of the subsets.
In non-native aggregation, the aggregated proof still proves the opening of the union, but the proof usually involves some randomness; it doesn't look like a fresh proof, it's different. So this is what we want to achieve: we want our linear map vector commitments to be updatable and aggregatable. What we achieve is same-commitment and cross-commitment unbounded aggregation, but not native. Not native because, although you can apply other techniques on top of our results, our result itself gives non-native unbounded aggregation in both cases, same-commitment and cross-commitment. So, we want all of this, and our starting point is two very nice constructions. We want to work in the pairing setting, because we like constant-size proofs and constant verifier work. When we started analyzing this we found these two very nice works; well, we didn't find them, they are quite popular. One is "Aggregatable Subvector Commitments for Stateless Cryptocurrencies", from now on Tomescu et al., and the other is "Pointproofs: Aggregating Proofs for Multiple Vector Commitments", from now on Pointproofs. They are both very nice constructions, and they consider a setting similar to ours, with similar properties, but with trade-offs. The Tomescu et al. construction considers subvector commitments; it has one-hop and native aggregation, and it also defines a protocol for updatability. In the case of Pointproofs, they consider subvector commitments and describe a one-hop aggregation that works same-commitment and also cross-commitment. But neither of these works considers applying functions to the committed vectors, or unbounded aggregation. So we start from here, and our intuition was that these works could do more, that we could extract more capabilities from them. And the first thing we noticed is: what do they have in common?
They have in common that they are both homomorphic in their commitments and proofs. What does it mean for a vector commitment scheme to be homomorphic in the commitment? It means that when you commit to a linear combination of two or more vectors, what you get is the same linear combination of the commitments to the vectors. Homomorphic proofs is something quite similar: when you open a vector at a linear combination of linear maps, you get the linear combination of the individual openings of the vector at each of the maps, and the same for a linear combination of vectors at the same linear map. So our first result, or the first one I'm going to talk about here, is that whenever you have a linear map vector commitment with homomorphic proofs and commitments, it is updatable and unbounded aggregatable. With our framework, Pointproofs becomes unbounded aggregatable, when giving up native aggregation, and Pointproofs is also updatable. I will explain a bit more about this now. So let's start with our constructions. When building linear map vector commitments, we want to start from the simplest linear map, the one for the case n = 1: an inner product argument. So if you're going to remember one slide, this can be your best candidate. I'm going to explain the inner product arguments; I do think they are quite simple, but it's not necessary for the rest of the talk that you follow the details here. There is one construction that uses Lagrange polynomials as commitment key, and another that uses the monomial basis of the polynomials as commitment key. For the Lagrange basis we start with a set of roots of unity; this is important, it cannot be any set. I commit to our vector using these λ_i, the Lagrange interpolation polynomials.
Recall that when we evaluate λ_i at one of the roots of unity, if the root of unity is ω^i it will be one, and otherwise it will be zero. So we multiply these two polynomials, and what we have are all the cross terms. We separate the ones where b and f share an index from those where they don't. In one group we have products of Lagrange polynomials that do not share an index, and this product is divisible by the vanishing polynomial of the set. In the other one we have this λ_i squared, and we can also get rid of that power by dividing by the vanishing polynomial. So we get something like this, where this is the vanishing polynomial, the product of all the terms X minus the elements of the set. And note that what we have here (I don't know if you can see my mouse) is a polynomial that is almost the inner product; we just need to get rid of these Lagrangians. Because we are working with a set of roots of unity, this is very simple: the Lagrange polynomials evaluated at zero are a constant, the inverse of the size of the set, so we divide by X to show the value at zero. With some arithmetic, we get an argument for proving a claimed value for the inner product between b and f, starting from commitments to them. This inner product argument comes from a previous work with Carla, but the one in the monomial basis is a contribution of this work. We commit to vector b in the monomial basis, the natural one, and we commit to function f in reverse order. Then we multiply these polynomials and get all the cross terms, and we separate those where b and f share an index from those where they don't. Here we have it a bit easier, because the inner product appears multiplied by X^(m-1); this doesn't depend on i, so basically we already have the inner product.
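The monomial-basis identity just described can be checked directly in a few lines of Python (toy field arithmetic, made-up values): if b is encoded as b(X) = Σ b_i X^i and f in reversed order as f(X) = Σ f_i X^(m-1-i), then the coefficient of X^(m-1) in the product b(X)·f(X) is exactly the inner product ⟨b, f⟩:

```python
# Toy check of the monomial-basis inner product identity.
q = 97
m = 4
b = [3, 1, 4, 1]
f = [2, 7, 1, 8]

b_poly = b[:]                       # coefficient of X^i is b[i]
f_poly = list(reversed(f))          # coefficient of X^(m-1-i) is f[i]

prod = [0] * (2 * m - 1)            # schoolbook polynomial multiplication
for i, bi in enumerate(b_poly):
    for j, fj in enumerate(f_poly):
        prod[i + j] = (prod[i + j] + bi * fj) % q

inner = sum(bi * fi for bi, fi in zip(b, f)) % q
assert prod[m - 1] == inner         # middle coefficient equals <b, f>
```

The argument then only has to show that nothing else contributes to that middle coefficient, which is the next step in the talk.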
So now we just need to make sure that there is nothing else in the coefficient corresponding to the power m-1. We separate the rest into two parts: everything of degree m or bigger, and everything of degree smaller than m-1. Again some arithmetic, and we have our argument. Note that the Tomescu et al. protocol in the Lagrange basis is similar to our inner product argument; the only thing we add is the functional property. And for the monomial basis, again, it's quite similar to Pointproofs when you consider f as a function opening one element. But the best thing about our construction is that we do not make assumptions on the SRS. In Pointproofs, one power needs to be missing from the SRS, which can be a bit tricky when you want to recycle existing SRSs. We get rid of that assumption and, again, we add functionality. So, starting from inner product arguments, we are still with linear maps that map to only one element, not to vectors. Let's add updatability. Given an inner product argument with homomorphic commitments, when our committed vector changes we can update the commitment basically by taking the previous commitment plus delta times a commitment to the canonical vector corresponding to the position we're updating. Our commitment scheme is homomorphic, so this works. For updating the proofs we need some help: updatable keys. These are going to be the openings of all the canonical vectors at all the functions represented by canonical vectors; because the scheme is homomorphic, from these canonical vectors we can reconstruct any function. At first sight this may look quadratic to you, but for the case of the Lagrange basis, Tomescu et al. have a result showing you can compute all these proofs from an SRS, or an updatable key, of size 2m. So that's covered.
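The commitment-update step can be sketched in a toy discrete-log-style commitment (illustrative parameters only, nothing here is secure, and the real scheme lives in a pairing group; the homomorphic update works the same way): updating position j by delta needs only the old commitment and a commitment to the canonical vector e_j.

```python
# Toy homomorphic commitment: Com(b) = g^{b(s)} for a "trapdoor" point s.
# Com(b + delta*e_j) = Com(b) * Com(e_j)^delta, no recomputation needed.
p = 2**61 - 1                # prime modulus of the toy group (not secure)
g, s = 3, 123456789          # generator and evaluation point (made up)

def commit(vec):             # g^{b(s)} with b(X) = sum vec[i] * X^i
    exponent = sum(v * pow(s, i, p - 1) for i, v in enumerate(vec)) % (p - 1)
    return pow(g, exponent, p)

b = [3, 1, 4, 1]
j, delta = 2, 5
e_j = [1 if i == j else 0 for i in range(len(b))]

updated = (commit(b) * pow(commit(e_j), delta, p)) % p
b_prime = b[:]
b_prime[j] += delta
assert updated == commit(b_prime)    # same as recommitting from scratch
```

The same homomorphism is what makes the precomputed proofs updatable from the canonical-vector openings in the updatable key.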
And for the monomial basis, these proofs are actually powers of x that are already in the SRS, so our monomial construction has the advantage that updatability is basically keyless: we don't need any help beyond the SRS. Once we have all these openings (and again, this is general for any inner product argument with homomorphic proofs), we can go from our old proof that f evaluated at b equals some element to the new one: we add a linear combination of the openings at the updated position against the canonical vectors needed to reconstruct f. So this depends on how we represent f as a vector. And this works because we are working with homomorphic proofs. As for aggregation, I think the intuition is quite simple. We have two proofs that we want to aggregate, and homomorphic proofs, so we just aggregate them by asking the verifier to give us a challenge. Our new proof is a linear combination of the old proofs under the verifier's challenge, and it has the size of just one proof. Essentially, to check this, the verifier needs access to this randomness; for fresh proofs, she has chosen it herself. But in the case of aggregating already aggregated proofs, we can again ask the verifier for some randomness, and the problem is that now, in order to verify, she needs access not only to the fresh randomness she has just chosen but also to the previous randomness. This is where order comes into play. To keep all this information, we build a tree of aggregation challenges. These are elements in the field, so it is not a big overhead for the proofs: roughly, you get unbounded aggregation, but you need to add this extra tree of field elements to your proof. Now we have inner product arguments that are updatable and unbounded aggregatable.
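The aggregation step can be illustrated at the level of field elements (the real scheme takes the same linear combinations over group elements; vectors and challenges here are made up): fresh proofs combine under one challenge, and an already aggregated claim combines again under a second challenge, which is why the verifier must learn the challenges in order.

```python
# Toy view of aggregation by random linear combination.
import random

q = 2**61 - 1
b = [3, 1, 4, 1]

def inner(f, v):
    return sum(x * y for x, y in zip(f, v)) % q

f1, f2, f3 = [2, 7, 1, 8], [5, 0, 9, 2], [6, 6, 2, 6]
y1, y2, y3 = inner(f1, b), inner(f2, b), inner(f3, b)

c1 = random.randrange(q)                        # first (fresh) challenge
f12 = [(a + c1 * c) % q for a, c in zip(f1, f2)]
y12 = (y1 + c1 * y2) % q
assert inner(f12, b) == y12                     # one-hop aggregation

c2 = random.randrange(q)                        # second-level challenge
f123 = [(a + c2 * c) % q for a, c in zip(f12, f3)]
assert inner(f123, b) == (y12 + c2 * y3) % q    # unbounded: aggregate again
```

Verifying the nested claim needs both c1 and c2, in order; the tree of challenges in the talk is exactly this bookkeeping.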
What happens when we aggregate many inner products? We have all these proofs that elements y_i equal the inner products of the vectors representing the functions f_i with the vector b. Once we aggregate them in our interactive, non-native way, what we get is actually a proof of all these claims together: a linear map vector commitment that takes vectors of size m and gives vectors of size n. What I'm saying here is that whenever you have an inner product argument with homomorphic proofs and commitments, you already have a linear map vector commitment for linear maps that take vectors of size m to vectors of size n, for arbitrary n. And importantly, we have unbounded aggregation. Updatability here is tricky, because we lose the homomorphic property of our proofs once we start aggregating. It is possible to update already aggregated proofs, but in the setting we are working in it doesn't make much sense: updates are for precomputed material, and aggregation is for responding to subset-opening challenges, so we didn't explore that path. Our intuition is that you can do it, but it may not pay off. Okay, so we have our framework for constructing linear map vector commitments from inner product arguments. Now I'm going to talk about efficiency results: several simple but non-trivial observations to help the verifier in proof of space settings. As was mentioned before, in the proof of space setting the verifier challenges the prover with openings, in order to check that the prover is storing what it claims to store. The shape of the subsets is not something we care about. In the Lagrange basis, we have a native subset opening due to Tomescu et al.; don't worry if you get lost here. We have a commitment to the vector and a commitment to a subvector.
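The step from many inner products to a full linear map can be sketched like this (toy values; rho plays the role of the verifier's aggregation randomness): a claim y = M·b reduces to a single inner product by taking a random combination of the rows of M.

```python
# Toy reduction: a linear map claim y = M*b collapses to one inner product.
import random

q = 2**61 - 1
m, n = 4, 3
b = [3, 1, 4, 1]
M = [[2, 7, 1, 8], [5, 0, 9, 2], [6, 6, 2, 6]]   # rows are the functions f_i
y = [sum(M[i][j] * b[j] for j in range(m)) % q for i in range(n)]

rho = [random.randrange(q) for _ in range(n)]     # aggregation randomness
f = [sum(rho[i] * M[i][j] for i in range(n)) % q for j in range(m)]  # rho^T M
lhs = sum(f[j] * b[j] for j in range(m)) % q      # <rho^T M, b>
rhs = sum(rho[i] * y[i] for i in range(n)) % q    # <rho, y>
assert lhs == rhs
```

A cheating prover with a wrong y is caught with overwhelming probability over rho, which is what makes the aggregated inner-product argument prove the whole map.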
Basically, what we do is compare them at the positions they are supposed to share. For the monomial basis, a subset opening is just a random linear combination of individual openings. The important thing is that in both these proofs we have: a commitment to b; the openings, which are H or Ĥ depending on the setting; something that represents the opened values, either the commitment to the subvector or the linear combination of the openings, b_I in both equations; and, the important part, these two polynomials. These two polynomials are the ones that describe the set, and computing them is something the verifier cannot delegate, unless it has some trusted third party, because it is picking the challenge itself. But again, in proof of space we do not really care what the subsets are, as long as they are challenging for the prover. Naively, these polynomials cost the verifier a number of group operations linear in the size of the subset. So the question is: can we do better? Since we don't care about the subsets being anything specific, can we find special subsets that are easy to open for the verifier, where these polynomials are easy to compute? The answer is equivalence classes. In subset I we include only the positions whose indices are congruent to each other modulo some number. For example, I can choose all the positions congruent to 0 modulo 4, or to 3 modulo 7, or all the positions congruent to 1. These are equivalence classes, and the vanishing polynomial of these equivalence classes in the Lagrange setting is very easy to compute: if the size of the subset is k, this is the vanishing polynomial, and it can be computed in constant time by the verifier. Equivalence classes, when we are working with roots of unity, are cosets.
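The equivalence-class observation can be checked numerically in a small prime field (toy parameters p = 17, omega = 2, made up for illustration): the vanishing polynomial of a coset of the roots of unity collapses to a two-term polynomial X^k minus a constant, computable in constant time.

```python
# Toy check: the vanishing polynomial of an equivalence class of roots
# of unity (a coset) is X^k - omega^(r*k).
p = 17
omega, m = 2, 8              # 2 has order 8 mod 17: the m-th roots of unity
t, r = 2, 1                  # take indices congruent to r modulo t
k = m // t                   # size of the subset
subset = [pow(omega, j, p) for j in range(r, m, t)]

def vanish(x):               # naive product over the subset: prod (x - s)
    out = 1
    for s_el in subset:
        out = out * (x - s_el) % p
    return out

def closed(x):               # the constant-time closed form
    return (pow(x, k, p) - pow(omega, r * k, p)) % p

assert all(vanish(x) == closed(x) for x in range(p))   # same polynomial
```

Both sides have degree k < p, and they agree on every field element, so they are the same polynomial; the verifier only ever needs the closed form.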
In case you are familiar with these, vanishing polynomials for cosets are very efficient to compute. So this is a very simple trick. Then for the case of the monomial basis: if we cheat a bit with the randomness, not too much, just reordering the powers of the randomness, we get a geometric series. Again, the series can be computed in constant time by the verifier. We may not have the right powers of x, so what we actually do is multiply the polynomial equation the verifier has to check by 1 - γX^k. The takeaway here is that there are subsets whose openings can be checked in constant time in the group by the verifier. So this is one thing: we have very efficient subset openings. Now, and this is something that extends to polynomial equations in general, we can have a very efficient verifier when aggregating proofs if we are willing to give up the SRS; well, not actually give it up, but grow it. Say we have three vectors b1, b2, b3, all living in F^m. Naturally, if we can now have vectors of size 3m instead of m, we can just aggregate them one next to the other. Believe it or not, we can also do it by intercalating; for those familiar with PLONK, this is the way it is done there, but for obvious reasons I will stick to the concatenated case. The encoding of this new vector is a polynomial that now has size 3m: the first m positions encode vector b1, the positions from m to 2m-1 encode b2, and the positions from 2m to 3m-1 encode b3. This lives, again, in a vector of size 3m, so we need an SRS of size 3m to commit to it; but then we can ask the verifier for a challenge α and replace the power X^m with it.
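The substitution of α for X^m can be sanity-checked in a few lines of Python (a toy with made-up values, not the pairing-based protocol): substituting α into B(X) = b1(X) + X^m·b2(X) + X^2m·b3(X) yields the random combination b1 + α·b2 + α²·b3 back in F^m, and the substitution is consistent with honest evaluation at any point x with x^m = α.

```python
# Toy check of the concatenation trick.
q = 2**61 - 1
m = 4
b1, b2, b3 = [3, 1, 4, 1], [5, 9, 2, 6], [8, 7, 5, 3]
concat = b1 + b2 + b3          # coefficients of B(X), low degree first

def ev(coeffs, x):             # Horner evaluation mod q
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % q
    return acc

x = 7
alpha = pow(x, m, q)           # pick alpha consistent with x, i.e. x^m
folded = [(b1[i] + alpha * b2[i] + alpha * alpha * b3[i]) % q
          for i in range(m)]   # b1 + alpha*b2 + alpha^2*b3
assert ev(folded, x) == ev(concat, x)
```

In the protocol the verifier chooses α at random and the prover justifies the substitution with a partial opening of the committed polynomial.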
This is like a partial opening of the polynomial, which takes constant time for the verifier to check, and what we get is an aggregation, as in our normal interactive aggregation, of the polynomials encoding b1, b2, and b3 in F^m. So what do we have? We need a bigger SRS, but after some constant work by the verifier, we can go back to F^m, to polynomials of degree m. These are two things of independent interest, opening subsets efficiently with a constant-time verifier and aggregating polynomials, but we can merge them together and get multi-openings with a constant-time verifier. Again, this is a case of interest in the proof of space setting: we may have many vectors, so why not challenge all of them together, all the digests together? How would this work? I have an aggregation of vectors, a bigger vector that includes my individual ones, and say I open the special subset of elements congruent to 5 modulo m. What I'm opening is position 5 of every one of those vectors, with a constant-time verifier. We can also do this with subsets: instead of opening congruences modulo m, I use a smaller modulus, and I get subset openings of many vectors with a constant-time verifier. I think that's almost it; I'm leaving out details. To summarize: we give guarantees for updatability and unbounded aggregation starting from homomorphic proofs and commitments; we give a framework for constructing linear map vector commitments from inner product arguments; we provide two linear map vector commitment constructions starting from inner products; and we get a constant-time verifier for subset challenges, aggregation of proofs, and multi-openings. Thank you very much, I hope to see you soon on ePrint. Hey, thank you, Arantxa, for the great talk.
Yeah, that was a nice balance between the theoretical contribution and some practical aspects. So, questions, comments? I have a question: what is the running time of the unbounded aggregation when you want to apply it to, say, n proofs in a tree fashion? In a what, sorry? In a tree fashion, the tree fashion you explained. The running time of the aggregation for the prover is a linear number of exponentiations in the group, linear in the number of proofs you're aggregating; for the verifier as well. What varies is the proof size; otherwise it's not different from normal interactive aggregation. Does this answer the question? Yes, yes. I have another question. You can ask it, so everybody hears. Yes. My question is: what's the difference between the inner product argument for the monomial basis that you give, and the one from the Libert et al. functional commitments? I don't know if I remember the Libert et al. construction right now. I can try to answer that one, because I was recently comparing with that construction. That one uses another assumption, a subgroup decision problem in composite-order groups, which is less standard than the pairing-based assumptions we have. And because of that, as I remember, the concrete parameters were larger in their case for the inner product. Also, though I didn't check, homomorphic proofs and commitments are not claimed as a contribution in their paper, and neither is updatability or aggregation, so we are not sure those properties also apply to their scheme.
Well, I mean, that scheme can also be ported to prime-order groups, and it's actually the scheme by Lai and Malavolta: if you drop the multiple inner product part, it's that scheme in prime-order groups. Okay, then if we compare with Lai and Malavolta, we have better public parameters; I think that's where we perform better. They have a subvector opening in their paper which requires ℓ² public parameters, where ℓ is the size of the vector, while their linear map construction requires n times m, where those are the dimensions of the two spaces. If nobody else has questions, I have one. One thing that I missed: when you started talking about the equivalence classes, if I understand correctly, you want to use these to sort of optimize the random positions that you get in proof of space and have to open; I mean, if you're going to ask the prover to open the vector, you challenge him with subsets that you can check fast. If I got it correctly, then my question is: why is this random enough for security? The question would actually be whether it is better than individual positions. Right, I mean, it's as good as individual positions; that's easy to see. But our question, and this is something we haven't settled yet, is whether this is better than just challenging individual positions. I think that's even broader: what is the trade-off between subsets and individual positions, and is it actually better to challenge with subsets? Asymptotically maybe it's the same. Maybe we get a bigger spectrum of random challenges, but it's not clear to us yet whether it makes a difference.