Thank you, Rosario. I will be presenting our work with Carla Ràfols, my supervisor, on techniques and trade-offs for vector commitments in the discrete logarithm setting. First, let me try to motivate why we care about the discrete logarithm setting. For one thing, it is a very well-understood, well-studied and widely used setting, and it is more efficient than other settings, notably the pairing setting. Also, improvements here tend to generalize easily: there are very few tools available in this setting, and you normally have the same tools in richer settings too, so it makes sense to see what you can do here, and then you can only get better things if you have richer structure. But what is the problem? We have some limits. In this setting we have very little structure, so essentially we are only allowed to do linear operations; quadratic operations cannot be done efficiently. Because of that, to do fancier things such as proof aggregation, we kind of need to resort to the random oracle model, which limits the algebraic structure even further, because it essentially takes the structure away. So it makes things hard, and it is difficult to have constructions here. In fact we will use weaker requirements than the standard ones that Dario presented: we only require the size of the opening to be sublinear, essentially polylogarithmic, in the number of positions opened. Because of these limitations we also consider trade-offs that we would not otherwise see, and we try to be as abstract as possible, so that all these techniques can also be used, more efficiently, in other settings.

So our goal is to use only clean algebraic and combinatorial properties and see where this can go. Specifically, in this talk I will focus on two parts. First I will present some generic constructions, using only these combinatorial and algebraic properties, and we will discuss some instantiations and trade-offs in the discrete logarithm setting; you should bear in mind that, while we discuss the discrete log setting, we are not limited to it, so you can easily instantiate the constructions in the pairing setting, for example. Then we will discuss aggregation, but we will look at aggregation through a different lens than what is usually done: we define a new notion that we call aggregation with selective verification, we will argue why this can improve efficiency, and we will demonstrate it for vector commitment constructions.

There are very few preliminaries. We will go fast through this slide, because Russell already explained it in the pairing setting. We just use a bit of notation for groups: we denote the group element g to the x as x inside brackets, so, for example, this is a Diffie-Hellman tuple written in this notation, because it is equivalent to this. And, as Russell also noted, in the discrete logarithm setting we can write a multi-exponentiation using inner-product notation, which makes things cleaner and easier to work with. Then we will talk about commitments. For the whole talk you can think of Pedersen commitments, but I want to stress that what all the constructions require is what is called an algebraic commitment, which is a kind of generalization.
This means that whenever you have such a commitment, in whatever setting, you can apply everything that we will see. So essentially you have a Pedersen commitment: you sample some random elements, or in general some elements from a hard distribution, and you encode them in the group; committing is then just a multi-exponentiation or, as we saw, an inner product of the vector you are committing to with the commitment key; and verification is canonical, so if you are given an opening x of C, you just recompute the commitment and check that it matches what you were given. At this point I want to note that, for the sake of simplicity, I omit any privacy property in this talk, so we do not require the commitments to be hiding, although everything can naturally be converted to be hiding; it is very easy, so I am just not presenting it here.

The final thing we will need is a proof of knowledge of an opening of an algebraic commitment. What does this mean? Given a commitment key and some commitment, somebody convinces us that they know an opening of this commitment with respect to this key. Here is the language a bit more formally: the language contains all the commitments, and the witness is the opening consistent with the commitment. You can see this as membership in linear spaces: essentially, the prover shows us that it knows coefficients x such that the linear combination of the commitment key with these coefficients gives the commitment. This is an equivalent way to see it. We have two efficient constructions in the discrete log setting, where by efficient I mean with sublinear proof size, which is the minimum requirement we would want. We have the folding technique of Bootle et al., improved in Bulletproofs, and we also have hash proof systems. I will not go into the details of how they work; I will just give their efficiency properties. First, the folding technique is publicly verifiable, and it is also transparent, which is why the double check mark here: you do not need a structured reference string or secrets to instantiate it. The linear quantities are the commitment key, the prover time and the verifier time, while the proof is logarithmic. The verifier is the big issue with this scheme: it is really nice to have a transparent construction, but you have to live with a linear verifier. For hash proof systems we have a designated-verifier proof of knowledge. Again, the SRS and the prover are linear, but now we have very fast verification: the verification time and the size of the proof are constant, a constant number of group elements. And I should stress that you can generalize it: there is a standard way to translate a hash proof system into a quasi-adaptive NIZK in the pairing setting, so you keep the same properties but now you have public verifiability, albeit with a trusted setup.

Okay, so let's see what we can do for vector commitments. Let me emphasize that the only things we require, the only properties, are a standard commitment that is algebraic and a proof of knowledge of opening for such a commitment. Let me demonstrate how you can use them to open some position. Here we have a committed vector of five elements, and we just want to open the fourth one; as we said, the commitment can be seen as an inner product.
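Before going into the opening, here is a minimal toy sketch, in Python, of such an algebraic (Pedersen-style) commitment as an inner product with the commitment key, together with the canonical verification. The group parameters are deliberately tiny and insecure, and the names are mine, purely to illustrate the algebra described above.

    import random

    # Toy discrete-log group: the subgroup of quadratic residues mod p = 23,
    # which has prime order q = 11 and generator g = 4.  NOT secure; only
    # the algebra of the commitment is illustrated.
    p, q, g = 23, 11, 4

    def commit(ck, x):
        """Commitment as a multi-exponentiation, i.e. the 'inner product'
        of the vector x with the commitment key ck: C = prod_i ck[i]^x[i]."""
        C = 1
        for g_i, x_i in zip(ck, x):
            C = (C * pow(g_i, x_i, p)) % p
        return C

    def verify_opening(ck, C, x):
        """Canonical verification: recompute the commitment and compare."""
        return commit(ck, x) == C

    # Commitment key: group elements sampled at random
    # (in a real scheme, from a hard distribution).
    n = 5
    ck = [pow(g, random.randrange(1, q), p) for _ in range(n)]

    x = [3, 1, 4, 1, 5]          # the committed vector, entries in Z_q
    C = commit(ck, x)
    assert verify_opening(ck, C, x)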
So what do we do? We kind of split the commitment. The green part is the part of the commitment that the verifier knows and can compute on its own: the commitment to x_4 alone, using only the corresponding element of the key. The red part is all of the rest. What do I mean by that? I mean the commitment to all the other variables, in the same order, but under a commitment key that does not contain the fourth element. It is easy to see that if you combine these two parts, if you add them, you get back your initial commitment. To prove that the fourth coordinate is correct, we ask the prover to give a proof of knowledge of an opening of the red part. That is just it, and intuitively this works because, if we have a proof of knowledge for this part, we can extract all the other coordinates, and we also have the one coordinate that is exposed, so we have an opening for the whole commitment. If the prover could convince us of two different values, we would essentially extract two different openings of the same commitment, contradicting the binding property.

So the construction is quite simple. We call it a proof of non-membership, because if you take the commitment and remove the part that you open, what remains should not involve the opened coordinate; and this is actually equivalent to a membership statement, in the linear space spanned by the rest of the commitment key, for the rest of the values. So we can use the constructions for membership in linear spaces we discussed, or any other construction for that matter. That is the protocol: the prover computes the commitment to the other coordinates with the rest of the commitment key and just proves knowledge of an opening of it, and the verifier verifies; it can compute the same commitment by subtracting the claimed part. This works, and I gave the intuition for soundness. You can easily see that this generalizes to every subset, so you do not have to restrict yourself to subsets of size one as in the previous slide. If you want to open a bigger subset, to open more positions, you do essentially the same thing, but you subtract all the coordinates in S.
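Continuing the toy sketch above, this is roughly what the single-position opening by splitting looks like. The succinct proof of knowledge of the remaining part is treated as a black box; here the prover simply reveals the rest of the opening, which is what the proof of knowledge would let us extract, so this only illustrates the algebra, not the actual argument system.

    def modinv(a):
        return pow(a, p - 2, p)        # p is prime, Fermat inversion

    def prove_position(ck, x, i):
        """Prover: commit to all coordinates except i under the punctured key."""
        rest_key = ck[:i] + ck[i+1:]
        rest_val = x[:i] + x[i+1:]
        C_rest = commit(rest_key, rest_val)
        # In the real construction, return C_rest plus a succinct proof of
        # knowledge of rest_val; revealing rest_val stands in for that here.
        return C_rest, rest_val

    def verify_position(ck, C, i, x_i, C_rest, pok):
        # "Green" part: the verifier computes ck[i]^{x_i} itself and strips it off.
        expected_rest = (C * modinv(pow(ck[i], x_i, p))) % p
        if expected_rest != C_rest:
            return False
        # "Red" part: check the (black-boxed) proof of knowledge of an opening
        # of C_rest under the key with position i removed.
        return commit(ck[:i] + ck[i+1:], pok) == C_rest

    C_rest, pok = prove_position(ck, x, 3)        # open the fourth coordinate
    assert verify_position(ck, C, 3, x[3], C_rest, pok)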
This is a reminder of the properties we had for these constructions. As I said, we can use them, but there are some issues. The first one involves only the hash-proof-system construction: because we cannot reuse the SRS and we should be able to open every variable, we end up with one SRS per variable, so a quadratic number of elements in total, which is bad. It is also not very flexible to use this construction directly, in either case, because, for example, you cannot precompute proofs and get all the trade-offs that this implies: your prover would need quadratic work, which is not feasible. So what we do is try to improve on that, and we do it by applying the non-membership technique recursively. For those of you who are familiar with Hyperproofs, this is kind of an abstraction of Hyperproofs: we use only generic components, algebraic commitments and proofs of knowledge, and the same combinatorial structure, to derive the construction. So let's see; we consider opening only one coordinate.

Now the statement is a commitment, the commitment key and the claimed opening, and the prover has the witness x, which is the whole opening of C, and wants to convince us about this one part. What we do is give many such proofs of non-membership, and we work as follows. This is our initial commitment. We consider the first half of the variables and the second half, and we give the commitment to each with respect to the corresponding key, so here we only use the first half of the elements of the key, and here we use the rest. Because we do not want to open only these two sets, we continue recursively, splitting each half in half, and so on and so forth, until we reach the leaves of the tree, where each leaf contains just one element.

Okay, let's see how this would work. Let's focus on the i-th element that we want to prove for. We just ask the prover to give the siblings, in a Merkle-tree fashion: we ask for the siblings and for the proofs of opening of the siblings. Why does this work? It is quite intuitive. Essentially, when you start down here, you can extract this node's opening, you can extract the sibling as well, so you can extract the parent's opening, and so on and so forth, until you reach your claim; and if it agrees with your claim, the prover has not cheated. Two different openings here would imply that somewhere in the tree you get two different openings of the same sub-commitment, which cannot happen because we require the commitment to be binding. That is the idea. So I described the intuition for soundness: essentially you use the soundness of the non-membership proof and a hybrid argument.
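Continuing the same toy sketch, here is one way the recursive tree of sub-commitments could look. The per-sibling proofs of knowledge are again black-boxed, and only the multiplicative consistency of the tree is checked, which is my simplification of the construction just described.

    def build_tree(ck, x):
        """Each node stores the sub-commitment to its range of coordinates."""
        if len(x) == 1:
            return {"com": commit(ck, x)}
        m = len(x) // 2
        left, right = build_tree(ck[:m], x[:m]), build_tree(ck[m:], x[m:])
        # Multiplicativity of the commitment: parent = left * right.
        return {"com": (left["com"] * right["com"]) % p, "L": left, "R": right}

    def open_path(node, i, n):
        """Collect the sibling sub-commitments along the path to leaf i."""
        sibs = []
        lo, hi = 0, n
        while hi - lo > 1:
            m = (lo + hi) // 2
            if i < m:
                sibs.append(node["R"]["com"]); node, hi = node["L"], m
            else:
                sibs.append(node["L"]["com"]); node, lo = node["R"], m
        return sibs          # plus, in the real scheme, a PoK for each sibling

    def verify_path(ck, C, i, x_i, sibs):
        acc = pow(ck[i], x_i, p)       # the leaf the verifier computes itself
        for s in reversed(sibs):       # multiply the siblings back in, level by level
            acc = (acc * s) % p
        return acc == C                # must reconstruct the root commitment

    tree = build_tree(ck, x)
    assert tree["com"] == C
    sibs = open_path(tree, 3, len(x))
    assert verify_path(ck, C, 3, x[3], sibs)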
Why do we care, and why do we complicate things? Because we get some nice trade-offs. First of all, as we will see, it is cheaper to compute all the proofs in the tree than the n squared work we had before. And we get time/memory trade-offs: once you have a commitment, this tree is fully defined, so if you want you can compute the whole tree beforehand, or compute part of it and save it, and each time you are asked something you compute only the rest; you can do this kind of thing. Also, with the hash proof system you reduce the size of the SRS, but you pay a price for this: you get bigger proofs, because you need to take a logarithmic number of nodes in the tree, so you get roughly this logarithmic overhead. So let's compare. Here we first consider only the folding technique, and this is the recursive construction. What is the difference? To compute all proofs we only need quasi-linear time, n log n, instead of n squared. For this we pay with a proof size that is now log squared n instead of log n. And I stress that this is a transparent construction. Also, to update all proofs you need linear time instead of quadratic, but I should make a note here: specifically in the case of the folding technique you can barely call it an update, because you can update in linear time but you need to know the whole opening; knowing some positions or some proofs is not enough, so it is a bit of an abuse. But still, essentially what this column says is that you can recompute the tree after a change faster than the trivial way.

Let's see the hash proof system. Now we also get a better SRS: an n log n SRS instead of a quadratic one. We have the same improvement in proving, that is, in constructing the whole tree. We now pay the logarithmic overhead in proof size and verification time, but we again improve in updating: now we need logarithmic time instead of linear time, and this time it is the classical notion of updating, because this construction has some extra properties. As far as keys are concerned, I note here that these constructions need n of them, so we need a lot of designated-verifier keys, which is undesirable. Why? Because normally you would not want to use a designated-verifier construction naively; to improve trust you would use a committee that produces the proofs, make some assumption about a threshold of them being honest, and so on, and you would want the keys to be easy to manage, maybe with committee members rotating, members leaving and others joining, and so on. So it is bad that we have this many keys. The good news is that we have an alternative to the hash-proof-system construction, which I will not have time to present in this talk, that has exactly the same properties but only a logarithmic number of keys. As I said, this can be important in these committee settings where you have to handle a lot of keys.

That concludes the first part, and now I want to talk about aggregation. We define a new notion which we call aggregation with selective verification, but before going into that, let me give you the big picture. What is aggregation? Typically you have two statements and two proofs for those statements, here x1 and x2, and you take the proofs and construct a new proof for both statements. This is good because, as I mentioned, it can improve efficiency: you reduce the proof size and probably reduce verification. In this work we take a different approach: instead of aggregating proofs, we simply aggregate statements. What is the difference? Instead of combining two proofs and constructing a new one, we take two statements, for which we have no proofs yet, we first aggregate the statements into a new statement, and only then prove that statement. Why do that? Because, in general, producing proofs is expensive, but as we will see, statement aggregation can be extremely efficient, so we can save a lot of work for the prover. So let me give some motivation.

A quick question: is this like the difference between proving a subvector directly and aggregating existing proofs? Sorry, I could not hear; can you repeat? So, when you say statements, do you just mean, for example, that I know position one is x1 and position two is x2, and I am just building a subvector proof directly; is that what you mean? Yes, you combine the statements. Yes, exactly, and then you just prove the final statement. Thank you. But the statements, and this will become clear later, the statements and the aggregated statement should be of exactly the same form, and as we will see we can exploit that. Thank you. You're welcome.

Okay, so let me give some motivation for the way we want to use this. The motivation is delegation of computation as a service.
So what is the scenario here? We have a prover, denoted here with a wizard, and the prover offers its computational resources to verifiers, so people can ask "can you do this computation for me?", and the prover does the computation; but of course we assume no trust between the parties, so the prover should also convince the parties about correctness. So this is the scenario: some party asks a question, gets the result and some proof, and so on; this continues for many parties, doing different computations on different inputs, and the prover is just a machine that does this for them. In concrete terms, this is too much for the prover. Let's boil it down and see what the prover does. It is clearly the prover's job to carry out the computations it is asked for; this first part, doing the computation, has to be there. The problem is that for each of these computations it needs to produce a proof and, as I mentioned, this is the bottleneck: it is more expensive than the computation itself and can be very heavy in practice.

What we try to do to improve on that is to have the prover prove everything at once. The new scenario goes as follows. A party asks the prover something and gets the response, but now it gets no proof; at this moment the first party acts only on good faith that the prover is not cheating, it has no guarantees. This continues, and when enough parties have made their statements and received their results, it is time for the prover to stand up for itself and say, okay, I did everything honestly. What does it do? The prover creates just one proof and convinces all of the parties. Essentially, the prover aggregates all the statements it received and produces one new statement, and then it simply gives a proof for this statement, which is the same for all parties. It also needs to give each party a small, individualized piece of evidence, different for each party, which essentially certifies that its statement was taken into account. And the important thing here is that we want to do this without the parties having to communicate with each other; they should only communicate with the prover.

As I said, producing the proof, the final pi proof here, with a NIZK or something like that, is the expensive part, so we want to do it as rarely as possible, ideally once for many statements. And for this to make sense, we should also require that the individualized pieces are cheap, in every sense, because otherwise the prover could just give the initial proofs, not aggregate, and not need any of this. So we want small individual proofs. If these two requirements are satisfied, what happens? The more statements are aggregated, the cheaper it is for the prover, because, as we said, this part is done only once and it is the bottleneck. And there is another nice thing: a heavy part of the verifiers' job is checking the very same proof, so you can probably exploit that fact and gain something from it as well.

Let's do some definitions and then see the construction. First, we need an aggregation scheme, which is very similar to what is presented in the recent paper Nova. We have a language and the corresponding relation.
We just need two algorithms here: the aggregation proving algorithm and the corresponding verification algorithm. The former takes two statement/witness pairs and produces a new statement/witness pair together with a proof of aggregation. The verification algorithm takes the initial statements that were supposedly aggregated, the aggregated statement and the proof, and says either "I believe everything was done correctly" or "I do not", so it outputs true or false. The requirements are quite natural. First, we want completeness: if we take two pairs that are in the relation, we produce a pair that is also in the relation, together with a proof that verifies, so if everyone is honest, everything works. And we have a notion of knowledge soundness, which states that if this proof verifies, that is, if aggregation was done correctly, and if we are given a witness for the aggregated statement, then we can safely conclude that the prover knows witnesses for both statements x1 and x2.

Now let's reinforce that and add the selective verification property. An aggregation scheme with selective verification is, first of all, an aggregation scheme, so it has the same algorithms. Oh, and sorry, I should mention that here we aggregate two things, but you can generalize it and aggregate more statements at once; two is only for simplicity. An aggregation scheme with selective verification should have, apart from the two general algorithms we presented, two new ones. The proving algorithm takes the statement/witness pairs and produces one proof for each statement, and essentially this proof says: if you see the aggregated statement, I have taken statement xi into account to produce it. So it proves that some statement is encoded inside the aggregated statement. Maybe it helps to think of it, in some sense, as a vector commitment, but instead of committing to values we somehow commit to statements: this proof says, here is the commitment, and I assure you that at the i-th position of this commitment there is the statement xi. What are the properties we want? We want selective completeness, which again is natural: if things are done honestly, these proofs verify. And we have selective knowledge soundness: if we have a proof that verifies, and we have a witness for the aggregated statement, then we have a witness for the i-th statement. Finally, we want an efficiency requirement, because otherwise there are trivial, uninteresting constructions: each of these proofs should be sublinear in the number of aggregated statements. This makes sense because, note, here we do not even see the other statements, and we do not want to see them, for efficiency; we only want to be able to focus on one statement and care about that, so requiring sublinearity in the number of statements is natural.
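As a rough sketch of the syntax just described, assuming Python-style interfaces and names of my own choosing (not the paper's), an aggregation scheme with selective verification could be written down like this:

    from typing import Any, List, Tuple

    Statement, Witness, Proof = Any, Any, Any

    class AggregationScheme:
        def agg_prove(self, pairs: List[Tuple[Statement, Witness]]
                      ) -> Tuple[Statement, Witness, Proof]:
            """Aggregate statement/witness pairs into one new pair, plus a
            proof that the aggregation was performed correctly."""
            ...

        def agg_verify(self, stmts: List[Statement], agg_stmt: Statement,
                       proof: Proof) -> bool:
            """Check that agg_stmt is a correct aggregation of stmts."""
            ...

    class SelectiveAggregationScheme(AggregationScheme):
        def prove_i(self, i: int,
                    pairs: List[Tuple[Statement, Witness]]) -> Proof:
            """Per-party proof that statement i is encoded in the aggregate;
            required to be sublinear in the number of aggregated statements."""
            ...

        def verify_i(self, i: int, stmt_i: Statement, agg_stmt: Statement,
                     proof_i: Proof) -> bool:
            """Party i checks using only its own statement and the aggregate."""
            ...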
Using only combinatorial techniques, we can get an aggregation scheme with selective verification from any standard aggregation scheme, and the construction is quite natural. You have a tree structure: in the leaves you put all the statements you want to aggregate, and then you apply aggregation to pairs. You take the first two statements and aggregate them, recalling that you get a statement of exactly the same form, and you do that for every pair, so you end up with m over two statements, and then you continue recursively. So you produce the tree, and at the root you have this statement, which is the final aggregated statement. Again, about soundness, let's focus on the i-th statement. How would you be convinced? The prover gives you the path that reaches your element, together with the siblings. Again you can use a hybrid argument, and the property you use is: if you have a witness for the parent node, then you can get witnesses for the children. So you can apply that starting from the witness of the root node, move down the tree, and get the witness for the statement of interest. What are the properties? By a simple argument, the prover does a linear number of aggregations and the verifier only a logarithmic number of aggregation verifications which, as I will argue, is in general quite efficient. The heavy job, which is a SNARK, is done only for the final statement, this one here. But also, note that as a verifier you might receive the final statement, yet you can easily postpone the proof and aggregate even further; you can use this kind of trick to improve on the verifier's side as well.

So we have everything; let's try to apply it to vector commitments. What are our statements in vector commitments? The statement is a commitment, a commitment Ci for the i-th party that interacts with the prover, together with the positions i1 through ik it wants to open, and the witness is the opening. Now, there are many ways to do this; the one I use here is a clean one, probably not the most efficient, but it shows the underlying ideas. What we do is reduce it to an inner product. An inner-product statement is a pair of commitments and some value, and the claim is that you know openings for these two commitments such that the inner product of these openings is z. How do you do this reduction? If this is your commitment, so you have all these values and you want to open the green ones, then you construct another commitment B where you put zero everywhere except for some random values at the opened positions. Then you can construct the claimed inner product: because you know that everything else is multiplied by zero, the inner product can be computed by the verifier from the claimed values, and the prover just needs to convince us about the inner-product relation. So we can attack the problem of opening vector commitments by aggregating and proving inner-product relations. The next step is to show a simple aggregation scheme for inner products, and then we can use the generic compiler to get selective verification.
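Continuing the earlier toy group, here is a minimal sketch of this reduction to an inner product. Having the verifier pick the random weights (or deriving them via Fiat-Shamir) is my assumption for the sketch; the slide leaves that detail implicit.

    import random

    def reduce_to_inner_product(ck, x, S, claimed):
        """Reduce 'positions S of the vector committed in C open to claimed'
        to an inner-product statement (C, C_b, z)."""
        beta = {i: random.randrange(1, q) for i in S}   # random weights at S
        b = [beta.get(i, 0) for i in range(len(x))]     # zero everywhere else
        C_b = commit(ck, b)                             # commitment to b
        # The verifier can compute z itself from the claimed opened values,
        # since every other coordinate of b is zero.
        z = sum(claimed[i] * beta[i] for i in S) % q
        # Honest-prover sanity check: the real inner product matches.
        assert sum(a_i * b_i for a_i, b_i in zip(x, b)) % q == z
        return C_b, z      # statement: (C, C_b, z); witness: (x, b)

    # Open positions 2 and 3 of the committed vector x from the earlier sketch.
    C_b, z = reduce_to_inner_product(ck, x, {2, 3}, {2: x[2], 3: x[3]})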
So, can I ask a question? Is this really aggregation, or is it batching? For aggregation I would expect to see two proofs, for position three and position four, that you combine into a proof for both of them, but what I see here is that you start with the full vector, so it feels like you build a batch proof for positions three and four. Well, yes, you can probably see it like that, but what I am trying to express in this slide is just that it is very easy to express the statement as an inner-product statement, so that we can then simply see how to aggregate inner products. Does that make sense? Okay, let's keep going, maybe it will make sense there. Thanks. Yes, thank you. So, here in step one, because I do not want to aggregate this type of statement directly, I just showed that it is actually equivalent to considering only an inner product; this just demonstrates that it is easy to talk about these statements in this way. Assuming this is fine, we can simply construct an aggregation scheme for inner products and forget about vector commitments, and then we can use the statement tree, the tree I showed in the previous slide, to get the selective verification property.

So let me demonstrate very quickly how this is done. The details are not very important and, as I said, there are probably more efficient ways to do it. You have statements in inner-product form, so you have two statements and two witnesses, one for each, and you want to construct a new statement along with a new witness. Basically, you are interested in the inner products of a1 with b1 and of a2 with b2, so as the prover you send the cross terms, a1 with b2 and a2 with b1. Then you receive a challenge x from the verifier, and you aggregate the two witnesses using this challenge x. The verifier does the same thing, but on the commitments: it combines the commitments and, using these values, computes the supposed new inner product. This works; I do not want to go into the details. But I want to stress an important point: it is notable that this is extremely fast. Essentially, the cost for the prover is a linear number of field operations, linear in the size of the witness. So you can think of the cost of doing this as comparable to just reading the witness. It should be much faster than producing proofs or doing anything fancier, and this is the whole point, this is how we improve efficiency: instead of aggregating proofs you aggregate statements, and you get this kind of proving complexity, very close to just reading the statement.

Okay, let's see an example of how this translates to vector commitments. Now we are proving n different statements, so we have n different commitments and some openings of them at different positions. You use just one non-interactive proof, only for the aggregated statement, and you can use whichever proof system you want for that. The actual aggregation involves a linear number of hash function evaluations for the prover and a linear number of field operations, essentially inner products in the field. The hash functions come in because we make the public-coin construction of the previous slide non-interactive using the Fiat-Shamir transform. And what is the verification overhead? A logarithmic number of group operations and hash computations. Basically, we only get heuristic security here, because we inherently need the aggregation scheme to be non-interactive and we are using the random oracle.
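To illustrate the cost claim, here is a sketch of one aggregation step for two inner-product statements over the same toy commitment key. The exact linear combination used below is one standard choice and may differ in details from the protocol on the slide.

    def inner(u, v):
        return sum(ui * vi for ui, vi in zip(u, v)) % q

    def prover_cross_terms(a1, b1, a2, b2):
        # Sent by the prover before seeing the challenge.
        return inner(a1, b2), inner(a2, b1)

    def aggregate_witness(a1, b1, a2, b2, chi):
        # Prover side: combine the witnesses with the challenge chi.
        a = [(u + chi * v) % q for u, v in zip(a1, a2)]
        b = [(u + chi * v) % q for u, v in zip(b1, b2)]
        return a, b

    def aggregate_statement(Ca1, Cb1, z1, Ca2, Cb2, z2, z12, z21, chi):
        # Verifier side: combine commitments homomorphically and compute
        # the supposed new inner product; only field and group operations.
        Ca = (Ca1 * pow(Ca2, chi, p)) % p
        Cb = (Cb1 * pow(Cb2, chi, p)) % p
        z = (z1 + chi * (z12 + z21) + chi * chi * z2) % q
        return Ca, Cb, z

    # Tiny self-check with random witnesses under the same commitment key ck.
    a1, b1 = [random.randrange(q) for _ in ck], [random.randrange(q) for _ in ck]
    a2, b2 = [random.randrange(q) for _ in ck], [random.randrange(q) for _ in ck]
    z12, z21 = prover_cross_terms(a1, b1, a2, b2)
    chi = random.randrange(1, q)
    a, b = aggregate_witness(a1, b1, a2, b2, chi)
    Ca, Cb, z = aggregate_statement(commit(ck, a1), commit(ck, b1), inner(a1, b1),
                                    commit(ck, a2), commit(ck, b2), inner(a2, b2),
                                    z12, z21, chi)
    assert commit(ck, a) == Ca and commit(ck, b) == Cb and inner(a, b) == z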
So the takeaway from this is that, for an arbitrary number of statements, we can make the prover's life much easier while incurring some cost for the verifier. This was an application specifically to vector commitments, but the technique is not restricted to that; you can use it for much more general things. You can aggregate inner products, you can aggregate polynomial commitment openings, or the relaxed rank-1 constraint system (R1CS) statements of Nova that I mentioned before. And, as I said, the whole point is that this is much faster than doing the NIZK itself. There are other applications too: you can use this idea to aggregate SNARKs themselves, so if you want to do many SNARK computations you can aggregate them. You can also compare with the aggregation style of Nova where, well, the difference is that Nova is extremely efficient but every statement must be about the same computation, whereas here you can even aggregate many SNARKs for different languages, and so on. And probably many others, because the technique seems quite generic. So that's it, thank you, and I'm happy to answer any questions.

Thank you. Are there any questions? Yeah, can I say something? It is maybe more of a comment, because I'm not sure if Aline had questions first. My comment is that this can be thought of as delayed proving, you are postponing the proof, so to say. So maybe that is the way to think about it, perhaps similar to the approach for doing recursive proof composition in the discrete log setting. Maybe this clarifies where the paper stands, where the results stand. Right, and the other important difference is that when you aggregate statements you do not care about all of them: you only care about your own statement, you do not care where the other statements come from, you do not even need to read them. This is the difference with the classical proof composition approaches. Yes, thanks for the comment.

Could you go back to the slide where you had this Hyperproofs-like tree? Yes, here, right? Yeah. So, when you said you give a proof, the proof consisted of the sibling inner-product arguments, basically, right, inner-product proofs of knowledge? Right. And so the proof is log squared n, because each inner-product argument is log n and you have log n of them, I suppose, and the verification time will be linear, right? Yes, exactly, for that instantiation. Could you use something else to make it sublinear, to make it logarithmic? Well, the hash proof system; I think I have the slide. If you use the hash proof system, or, in pairings, if you use the quasi-adaptive NIZK, you get a logarithmic verifier. I see. I didn't exactly understand how the hash proof stuff would work, but that's really cool; is there a reference for this? Yes; if you are familiar with proofs of membership in linear spaces, you can think of a hash proof system as the same thing but designated-verifier. Otherwise, it is a very classical construction. Cool. And the aggregation that you mentioned, this new model of aggregation, if you go back to that tree image. This one? No, no, just the original Hyperproofs-style one. So I couldn't understand if what you are proposing would allow you to take, let's say, two paths in this tree and aggregate them into a concise proof for the two positions.
That's what I was having difficulty understanding. Well, the two parts of the talk are different: in the second part, with aggregation, I do not consider this way of doing things. In the second part we only have the statements; we do not care how they are proven, we only care about how to aggregate them. So the answer is that you cannot use that technique directly to aggregate two paths in the tree. Think of the aggregation part as if no tree of proofs exists: you try to aggregate before producing any proof, because producing the proofs is the expensive part, so you only have the statements. If you have the final statement, here it is an inner product, but in any case, if you have a final statement and that statement is about opening a vector commitment, you can prove it using this construction or whatever else. What I am trying to say is that the two parts are connected and have a similar structure, but they are not the same thing. I see. I think maybe with more work you could potentially aggregate proofs in your first construction, but it's not clear that this technique helps you directly. Right, as far as I can tell. That's interesting, actually. Cool, really cool, thank you. Thank you.

So I have two questions. My first question relates to what Aline asked before, and I want to make sure that I understand it correctly: what you call aggregation is more like a way to prove a lot of statements at the same time, and it is not about having already existing proofs? Yes. To put it in the application to vector commitments, it is not that if you have existing proofs for positions one, two and five you can now obtain a proof for their aggregation, right? Yes, that's accurate; you do not actually want to compute those proofs, the whole motivation is to avoid computing these proofs in the first place, and only later, when you have a lot of vector commitments that you want to open, do you give one proof. Okay, okay. But then, if we think of the application to proofs of space, the cost is linear in the size of the vector, right, because of the opening? Which construction are you referring to? Even if you do this aggregation for many proofs, you have to run linearly in the size of the original vector, but now you sort of amortize over the different proofs. Not necessarily: this construction does not care about how you open the commitment, so you can use whatever vector commitment, inner-product argument or polynomial commitment, it depends on the language, and you can prove the final statement in whichever way you like and inherit all its properties. For the aggregation part you only care about how you aggregate; there are no proofs at this point in the construction, so you can have, for example, constant cost: if you use polynomial commitments in the pairing setting, sorry, you can have constant time for this. Does that make sense? No, no, it is much clearer now.

And a very quick question: can you clarify the "update all" property that you mentioned in the table, and how it can be log n? Yes. Essentially, in the hash proof systems the proofs are homomorphic, so if you change something you can update the proofs; this is the notion of updating you had in your talk.
But, well, I kind of abused things to show something here: for the folding technique the proofs are not homomorphic, so you need to compute them from scratch, and you need the whole witness to change the proofs; what I am trying to say is that even in this case you still have some efficiency gain. Okay, but "update all" means updating all the proofs there are? Right, exactly, yes. And in that case, does log n mean log n per proof, or log n in total? No, in the second case, for the hash proof system, it is log n in total: you only need to update the proofs along the path. Okay, nice. Thank you. Thanks a lot.