So this is a compiler for efficient vector commitments: I want to build a new vector commitment scheme from an existing one. The motivation comes from ASVC, where I wanted to improve the update-all-proofs algorithm, but that turned out to be very difficult. So instead I build another vector commitment on top of an existing one to improve both updates and aggregation.

Now suppose we have a vector commitment scheme with an O(n log n)-time algorithm to open all proofs. Actually, O(n log n) can be relaxed to anything n·polylog(n). We also need O(1) time to update each single proof after receiving an update request, which is feasible in many schemes, and O(1) time to update the commitment, which is also very easy to achieve. Most importantly, the scheme needs an aggregation algorithm, but we don't care how the aggregation is done; we don't rely on the internals of the existing aggregation algorithm.

Now, for our compiler. At the beginning we have a vector; we compute the commitment and open all n proofs. This is the initial state. When we receive an update request, meaning "update this position with the data I want", we just put the request into an update record list, use the O(1) commitment-update algorithm to update the commitment, and update the vector, which is just a plain list. As we keep receiving update requests, we keep updating the commitment and the vector, and we keep appending the requests to the record list. When the number of records reaches √n, that is, the record list holds √n records in total, we spend O(n log n) time to open all proofs: all the single proofs are updated at once by the open-all algorithm. Now we can clear the record list, and all the single proofs are up to date.
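The buffering loop above can be sketched in Python. The black-box scheme is replaced by a toy additively homomorphic stand-in (commitment = sum of the vector, proof for position i = sum of the other entries); all class and method names here are illustrative assumptions, not part of the talk:

```python
import math

class ToyVC:
    """Toy stand-in for the assumed black-box scheme: com = sum(vec),
    proof_i = sum of the other entries, so verify checks value + proof == com.
    A real instantiation would be a scheme with O(n log n) open-all and
    O(1) proof/commitment updates."""
    def commit(self, vec):
        return sum(vec)
    def open_all(self, vec):                       # the O(n log n) step
        s = sum(vec)
        return [s - v for v in vec]
    def update_commitment(self, com, i, delta):    # O(1)
        return com + delta
    def verify(self, com, value, proof):
        return value + proof == com

class Compiler:
    """Amortized compiler: buffer update records, refresh every proof
    at once after each sqrt(n) updates."""
    def __init__(self, vc, vec):
        self.vc, self.vec = vc, list(vec)
        self.r = max(1, math.isqrt(len(vec)))      # sqrt(n) threshold
        self.com = vc.commit(self.vec)
        self.proofs = vc.open_all(self.vec)        # initial state
        self.records = []                          # pending update records

    def update(self, i, delta):
        # O(1) per call, except every r-th call which pays n log n.
        self.records.append((i, delta))
        self.com = self.vc.update_commitment(self.com, i, delta)
        self.vec[i] += delta
        if len(self.records) >= self.r:
            self.proofs = self.vc.open_all(self.vec)   # refresh all proofs
            self.records.clear()                       # all proofs fresh again
```

For a vector of length 4 the threshold is √4 = 2, so every second update triggers the open-all and empties the record list.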
And if we need some single proof, we just extract the stored proof and apply every update record in the list, using the O(1) single-proof-update algorithm, which yields the newest single proof π′_j. Since the record list holds at most √n records, the get-one-single-proof algorithm needs at most O(√n) time. Putting it together, the update part needs amortized O(√n log n) time, namely the O(n log n) open-all cost spread over √n updates, and getting any single proof takes at most O(√n) time, since each single-proof update is O(1). So for any index set I we need O(|I|·√n) time to get the single proofs, and then we can do the aggregation; we don't care how the aggregation is done, but we can do it since we have the single proofs. And there's another simple optimization: if |I|·√n is above n log n, which means the set is very large, we can just choose to open all proofs instead of extracting each single proof one by one.

So now we can try a de-amortization technique to improve the worst case. Since I want to amortize the O(n log n) computation, I want to do it in separate pieces. The method is to extend the size of the record list to 2√n. Once we have √n records, we divide the O(n log n) computation into √n parts and do the computation over the next √n updates. And once we have 2√n records, the first O(n log n) open-all is finished, so we can clear the first √n records. In this case the record list holds at most 2√n records, and in general at least √n.
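The proof extraction and the large-set optimization described above can be sketched as follows, again with the toy homomorphic scheme standing in for the black box (the callback names are illustrative assumptions):

```python
import math

def get_single_proof(base_proof, i, records):
    """Replay the pending update records on the snapshot proof pi_i.
    Toy homomorphic scheme assumed: an update (j, delta) with j != i
    adds delta to proof_i. O(1) per record, so O(sqrt(n)) overall
    since the list holds at most sqrt(n) records."""
    pi = base_proof
    for j, delta in records:
        if j != i:              # the black box's O(1) update-proof call
            pi += delta
    return pi

def proofs_for_set(indices, n, get_one, open_all_now):
    """Large-set optimization: if |I| * sqrt(n) exceeds ~n log n, it is
    cheaper to refresh every proof at once than to extract each proof
    one by one. get_one / open_all_now are assumed callbacks into the
    compiler state."""
    if len(indices) * math.isqrt(n) > n * max(1, n.bit_length()):
        all_proofs = open_all_now()            # the n log n open-all
        return [all_proofs[i] for i in indices]
    return [get_one(i) for i in indices]
```

For example, with snapshot vector [1, 2, 3, 4] (so snapshot proofs [9, 8, 7, 6]) and one pending record (0, +5), the new commitment is 15 and the fresh proof for position 1 is 13.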
So in this variant, when the update record list reaches √n, we still want to spend O(n log n) time to open all proofs, but we don't run the whole algorithm right away; we want to do it over the next √n updates. We leave those update records in the list; we cannot clear them, since we haven't updated the single proofs yet. When we then get an update request, we put it in the record list, update the commitment, update the vector, and do the first part of the open-all algorithm. As we receive further requests, we keep doing the remaining parts of the O(n log n) open-all. By the time the record list reaches 2√n, we have actually finished the whole algorithm, so we can clear the first √n updates in the record list: the single proofs now reflect the first √n updates. At this point we should also prepare the open-all for the next √n updates. And at any time we want a single proof, we just extract the stored proof, apply the update requests in the current list, and get the proof. In this way the amortized bound becomes a worst-case bound. We can also set a parameter to deal with the situation where the aggregation set is always very large or always very small; it's just a trade-off between update cost and aggregation cost. Looking at the table from Hyperproofs, many vector commitment schemes with O(n log n) or O(n log² n) time to open all proofs can be plugged into our compiler to get a new scheme, as long as the scheme has n·polylog(n) open-all time and an aggregation algorithm. So that's all for this compiler.
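The de-amortized variant above can be sketched as follows, with the same toy scheme (com = sum of the vector, proof_i = sum of the other entries) standing in for the black box; the staged open-all is a Python generator that does one chunk of work per incoming update, so the record list stays around √n to 2√n entries. All names are illustrative:

```python
import math

class DeamortizedCompiler:
    """De-amortized compiler sketch: the open-all computation is split
    into sqrt(n) chunks and one chunk is executed per incoming update."""

    def __init__(self, vec):
        self.vec = list(vec)
        self.n = len(vec)
        self.r = max(1, math.isqrt(self.n))        # sqrt(n)
        self.com = sum(vec)
        self.proofs = [self.com - v for v in vec]  # snapshot proofs
        self.records = []                          # updates not yet in proofs
        self.job = None                            # staged open-all in progress
        self.k = 0                                 # records this job absorbs

    def _chunks(self, snapshot):
        # Staged open-all: compute the proofs in r pieces, yielding between them.
        s = sum(snapshot)
        new_proofs = [0] * self.n
        step = -(-self.n // self.r)                # ceil(n / r)
        for start in range(0, self.n, step):
            for i in range(start, min(start + step, self.n)):
                new_proofs[i] = s - snapshot[i]
            yield True                             # one chunk of work done
        self.proofs = new_proofs                   # swap in the fresh proofs

    def update(self, i, delta):
        self.records.append((i, delta))
        self.com += delta                          # O(1) commitment update
        self.vec[i] += delta
        if self.job is not None and next(self.job, None) is None:
            # Staged open-all finished: drop the records it absorbed.
            self.records = self.records[self.k:]
            self.job = None
        if self.job is None and len(self.records) >= self.r:
            self.k = len(self.records)             # absorbed by this snapshot
            self.job = self._chunks(list(self.vec))
            next(self.job)                         # first chunk, right now

    def get_proof(self, i):
        # Replay at most ~2*sqrt(n) pending records on the snapshot proof.
        pi = self.proofs[i]
        for j, delta in self.records:
            if j != i:                             # O(1) per record
                pi += delta
        return pi
```

Each `update` call does O(√n · log n) work in the worst case (one chunk of the n log n open-all), and `get_proof` stays O(√n) because the record list never grows past roughly 2√n before the oldest √n records are absorbed.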
And actually, under the discrete-log assumption, I somewhat believe that if we can improve this compiler, we can actually improve the FFT algorithm. The FFT is something like a convolution computation: the update part corresponds to arbitrarily updating some entries of one function, and getting one single proof corresponds to arbitrarily reading some entry of the result. So if we can improve our compiler, we can actually improve the method for arbitrarily updating a list and reading any item of the result. It's just an intuitive guess; I cannot prove it right now. So that's all for my presentation. Any questions?

Q: Have you implemented a de-amortized version of this that actually has the worst-case time you claim?

A: The de-amortized algorithm is now being implemented by an undergraduate student, and we keep supervising that work.

Q: Actually, it just needs to separate the FFT algorithm into √n parts, right?

A: Right, yeah.

Q: So I have a question. Can you please clarify again the requirement on the running time of the aggregation? I mean this page, right — the last line. Do you need full aggregation, or some weaker form of aggregation, and what running time are you assuming for the aggregation?

A: Actually, this compiler doesn't care about the aggregation algorithm. Our goal is to get the single proofs, so that the aggregation can be run on those single proofs. But if the aggregation is very efficient, say much less than √n time, it's better not to use our compiler.