So thank you for the very nice introduction. Let me first start sharing my screen. Okay, so I guess everybody can see my screen. So it's very exciting, thanks for inviting me to give this talk, and it's very exciting that there is a vector commitment day today, definitely something that I could not imagine like 10 years ago when we wrote this paper. So, yeah, my goal today is to take you on a journey to discover what vector commitments are, in which planets they live, and what they can do. Okay, so let me start. I don't know exactly who the audience is, unfortunately, so my talk will start from the basics; hopefully it's going to be useful to everyone. So let's start with commitments, with what commitments are. A commitment is this fundamental cryptographic primitive that allows a user, here in the slide it's a user Alice, to commit to a message. This is like the digital equivalent of a sealed envelope: Alice puts the message in the envelope and gives this envelope to Bob, and then at a later point there is the opening phase, which essentially is like opening the envelope, and Bob will see this message. So what do we want from commitments? There are two very basic properties that we wish. One is hiding: essentially this says that, look, the envelope is opaque, you cannot see the message until the point you open it. And the second property is binding, which says that once Alice puts the message in the envelope she cannot change her mind about what is there; so it's impossible that at some point she makes a magic trick and shows that inside the envelope there is both M and M′, right, so only one message is inside. All right, so now what are vector commitments? This is a notion that was implicit in some previous work by Libert and Yung from 2010, and also in some even earlier work, as I will detail later, by Dario Catalano, myself, and Mariagrazia Messina from 2008.
So this is an extension of commitment schemes in which what we now commit to are vectors, right, so it's not just any message but a vector, an ordered sequence of elements v_1, v_2, ..., v_n. We want to put the vector in the commitment, in this envelope, and give it to Bob. But then the crucial property we want is that we would like to open the commitment only at selected positions. So, instead of opening the envelope in full, we would like to open just a little hole in this envelope and show: look, at position i there is value v_i. And on the other hand, we would like Bob, the verifier, to check that this is true. So, given the commitment, the index i, the value v_i, and some opening proof π_i, Bob should be convinced that really, inside the commitment, at position i, there is v_i. All right, so what are the key properties of vector commitments? Because until now we could just say, look, these are just commitments for a larger message space. The first key property of vector commitments is position binding. Position binding essentially says that you cannot open the same position at two different values. If you want to be a bit more formal, we are saying that, given the parameters of the commitment, it is computationally hard to come up with a commitment and two opening proofs for the same position and two different values. In particular, it means that you cannot find a commitment C, a position i, two different values v_i and v′_i, and two opening proofs that are both accepted. Okay, so that's essentially the basic security property, and one thing I want to remark is that for vector commitments, at least in this talk, I will not focus on any hiding property, okay, as in particular it's maybe not so crucial in many applications. The other property that makes vector commitments a non-trivial and interesting primitive is conciseness.
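For reference, position binding can be written as the following experiment (a standard formalization consistent with what I just said; λ is the security parameter, A is any efficient adversary, and Ver/Setup are the verification and setup algorithms):

```latex
\Pr\left[
\begin{array}{l}
\mathsf{Ver}(pp, C, i, v, \pi) = 1 \;\wedge\\
\mathsf{Ver}(pp, C, i, v', \pi') = 1 \;\wedge\; v \neq v'
\end{array}
\;:\;
\begin{array}{l}
pp \leftarrow \mathsf{Setup}(1^\lambda, n)\\
(C, i, v, v', \pi, \pi') \leftarrow \mathcal{A}(pp)
\end{array}
\right] = \mathsf{negl}(\lambda)
```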
So what does it mean? It means that the size of the commitment and the size of the openings are independent of the vector length. More precisely, this means that there is some fixed polynomial in the security parameter that bounds the commitment value and each opening; you can think of this size as p(λ). Okay, so really concretely, it's going to be, let's say, one or two group elements for these values, regardless of the length of the vector. And, all right, this is the main difference with Merkle trees. Merkle trees, this very popular cryptographic construction by Merkle, can be seen as vector commitments, but with the crucial difference that in that case the size of the openings is logarithmic in the length of the vector. Okay, so when we came up with this idea of vector commitments we wanted to get something more ambitious, in which openings can be of constant size, essentially. So this is a summary, a more formal summary of the notion of vector commitment. It consists of four algorithms: there is a setup algorithm that, given the security parameter and a bound on the vector length, produces some public parameters; with these parameters you can commit to a vector using the commitment algorithm; you can create an opening for position i; and you can verify this opening. Of course we want these to satisfy correctness: if you create an opening and a commitment honestly, the verification should accept. And then we want conciseness, which is what I defined, and position binding as well. All right, so now that I have introduced at least what vector commitments are, my journey starts with applications. Okay, so I want to mention some basic applications that motivate why vector commitments are useful. Also, one thing I did not say before:
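To make the four-algorithm syntax concrete, here is a minimal Merkle-tree instantiation in Python (my own illustrative sketch, not a scheme from the papers I mention). Its openings are logarithmic, not constant, so it is not concise in the sense above, but it matches the commit/open/verify interface exactly; setup is trivial here since the only public parameter is the hash function.

```python
import hashlib

def _h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def commit(vector):
    """Commit: build a Merkle tree over the vector; the root is the commitment.
    Returns (commitment, levels) where levels is auxiliary data for opening."""
    n = 1
    while n < len(vector):  # pad to a power of two with empty leaves
        n *= 2
    leaves = [_h(v.encode()) for v in vector] + [_h(b"")] * (n - len(vector))
    levels = [leaves]
    while len(levels[-1]) > 1:
        prev = levels[-1]
        levels.append([_h(prev[j] + prev[j + 1]) for j in range(0, len(prev), 2)])
    return levels[-1][0], levels

def open_at(levels, i):
    """Open position i: the proof is the sibling hash on each tree level."""
    proof = []
    for level in levels[:-1]:
        proof.append(level[i ^ 1])  # sibling of the current node
        i //= 2
    return proof

def verify(commitment, i, value, proof):
    """Recompute the root from the claimed value and the sibling path."""
    node = _h(value.encode())
    for sibling in proof:
        node = _h(node + sibling) if i % 2 == 0 else _h(sibling + node)
        i //= 2
    return node == commitment
```

Note how the proof length is the tree height, i.e. log of the vector length; the constant-size schemes discussed later remove exactly this factor.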
If you have questions during the talk, please ask them and write them in the chat, and I think they will be redirected to me; in particular, if there are questions about something that is not clear, I'd be happy to take them live, because we don't have to wait until the end of the talk to clarify something. Okay, so the first application of vector commitments that I want to present is something that is not very popular, but I want to mention it because it's actually the application for which we came up with this notion. I'm not going to give you a lot of details, but I think it's an interesting story, even if we don't go too deep into it. This application is the notion of zero-knowledge sets. That is a cryptographic primitive proposed by Micali, Rabin, and Kilian in 2003 that allows a prover, or a party in general, to commit to a set S and to prove in zero knowledge that some element x is in the set, or that some element is not in the set. If you wonder how different this is from accumulators, for example, the main difference is that in zero-knowledge sets you want to hide even the size of the set, and even an upper bound on the size of the set. Okay, so, when we looked at this problem, we looked at the state of the art, and the state of the art was this construction by Micali, Rabin, and Kilian, and then some generalizations by Chase et al., in which the works constructed zero-knowledge sets by using a specific construction of sparse Merkle trees. In particular, these were Merkle trees of commitments, in which the commitments were mercurial commitments. I'm not going to say what mercurial commitments are, but roughly they are commitments that come in two modes, hard commitments and soft commitments.
The idea was to place the set at the leaves of the tree, and then to build the Merkle tree bottom-up, but in such a way that where there were elements that were not in the set, so where some part of the universe was completely empty, you were building only the root of the subtree of this missing part. This is, for example, the C_11 soft commitment that you have there. Now, when you had to create a membership or non-membership proof, the problem was that these opening proofs were linear in the height of the tree, which essentially was some integer k. So, okay. Right, so in this paper with Dario Catalano and Mariagrazia Messina from EUROCRYPT 2008, when we looked at this problem, we asked the following question: can we make these zero-knowledge set trees shallow? You know, can we reduce the proof length by using, instead of a two-to-one commitment, a q-to-one commitment? Because if that was possible, we could have reduced the size of opening proofs from k, which was essentially the log in base 2 of the universe size, to the log in base q, by essentially increasing the branching factor. Maybe it seems like an odd question; it originally came from Dario Catalano, who was finding some connection between these problems, you know, zero-knowledge sets and the problem of tree-based digital signatures. So, right, what we did in our paper was to define this notion of what we called a trapdoor q-mercurial commitment, which was a mercurial commitment for vectors. And it should have had this property of having short openings, but in our first paper from 2008 we only realized constant-size soft openings, while we had linear-size hard openings. So it was not a full solution. Later, Libert and Yung in 2010 came up with the first realization of a q-mercurial commitment with constant-size openings, based on a q-type Diffie-Hellman assumption over pairings.
So essentially this was the very first construction that implicitly was a vector commitment. And later, we formalized this notion; we kind of recognized that there was something more fundamental in this idea of committing to vectors and having short proofs, and in particular something that went much beyond the idea of doing mercurial commitments. In this paper that we published in 2013, we formalized the notion of vector commitment and we proposed two constructions based on standard assumptions, such as RSA and CDH over pairings. And it's quite interesting that shallow Merkle trees are today also known as Verkle trees, and it's becoming a popular idea. So with this application there was a kind of interesting lesson for me about the importance of theoretical, foundational research, because the idea of coming up with this notion of vector commitment came from a purely theoretical question that we asked ourselves. Okay, so let's now go and talk about another application of vector commitments, something much more relevant to practice, for example to Filecoin and Protocol Labs perhaps: outsourced storage. So let's consider a user Bob that has a large data set and wants to outsource the storage of the data set to a cloud server, right. Bob doesn't have space for it, so he's going to delete the data, but at some point he wants to know if the cloud is still storing his data. How can this be solved? A naive solution would be that Bob stores the hash of the data, and then the cloud, to convince Bob that it still has the data, would send the data back to him. But that's of course too much communication: if every time you want to make this check you have to transfer the whole data, you know, it cannot scale. So, this problem was actually addressed by the notion of proofs of retrievability, introduced by Juels and Kaliski in 2007.
And they came up with a construction for proofs of retrievability using Merkle trees, and this construction was actually generalized to vector commitments by Fisch in 2018. That's what I'm going to present now. So, the idea is that the data can be seen as a vector in which each entry is a block; think of a file that you divide into blocks of a fixed length. Now, using the vector commitment, Bob can commit to this vector, send the vector to the cloud, and delete it from his memory. When Bob wants to check that the cloud is still storing the data, he selects some subset of positions, each chosen at random. Let's say he chooses K random positions and asks the cloud: can you please send me the data at these positions? Now the cloud would send the data, so v_{r_1}, ..., v_{r_K}, and for each of these data blocks it would also give the opening proof. The idea is that by selecting this integer K to be roughly the security parameter, in some appropriate way that I'm not going to detail in this talk, this is a secure way to check that the cloud is not deleting the data, or at least that it is storing a quite large fraction of it, with high probability. What is crucial, and why this construction achieves the goal, is that now the communication complexity of this protocol is linear in the number K of positions that we ask, and in particular it only requires sending K times the size of a proof. If the opening proof is short, for example log n in the case of a Merkle tree, or even of fixed size, we are happy, because now the communication complexity of this protocol is very short.
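As a rough sketch of this audit logic, here is my own toy code, with a deliberately trivial commitment where the "commitment" is just the list of block hashes and openings are empty (so it is not concise; the point is only the challenge-response flow). Intuitively, if the cloud deleted a δ fraction of blocks, each random position catches it with probability δ, so K independent positions miss with probability about (1-δ)^K.

```python
import hashlib
import secrets

def commit(blocks):
    # toy vector commitment: the "commitment" is simply the list of block
    # hashes; a real scheme compresses this to a constant-size value
    return [hashlib.sha256(b).digest() for b in blocks]

def challenge(n, k):
    # verifier samples k distinct random positions to audit
    positions = set()
    while len(positions) < k:
        positions.add(secrets.randbelow(n))
    return sorted(positions)

def respond(blocks, positions):
    # the cloud returns the requested blocks (openings are trivial here;
    # in a real scheme it would attach an opening proof per block)
    return [blocks[i] for i in positions]

def audit(commitment, positions, answers):
    # verifier checks each returned block against the commitment
    return all(hashlib.sha256(b).digest() == commitment[i]
               for i, b in zip(positions, answers))
```

With a real vector commitment, `audit` would call the scheme's verify algorithm instead of recomputing hashes, but the protocol shape is the same.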
All right, so let me summarize. I described two applications so far that motivate vector commitments. The first is quite theoretical, and it's actually the application that motivated why vector commitments were born, so why we came up with this idea. It's quite interesting that nowadays it's gaining traction also as a practical application, if you think of using vector commitments as a way to make shallow Merkle trees; it's something that is being considered by Ethereum for future updates. In terms of proofs of storage, this is a practical application, it's motivated by storage, and it's something that is, for example, in the interest of Filecoin, because it's a core part of their protocol, with some differences, it's not exactly what I presented. There are more applications of vector commitments, for example verifiable databases; you can use them to construct accumulators; stateless blockchains, which is a more modern application that maybe some of the talks today will mention; and succinct arguments are another application. So, the take-home messages from this part of the talk, at least a lesson for me: on one hand, the importance of theory-motivated research questions, because sometimes you have to think blue-sky, without having in mind very concrete things, and you can come up with very original ideas. On the other hand, the importance of practice-motivated research questions, because as I'm going to show in the next slides, applications further motivate the exploration of new ideas. Okay, so that's where we are in our journey. Let's discuss some additional properties of vector commitments, such as updatability, subvector openings, and aggregation; all these properties are actually motivated by concrete applications. And then I'll move on to discuss the state of the art. So, actually, if you have questions on this part, it's a good moment to interrupt. Okay.
All right, so let's look back at the outsourced storage application now. So we made our verifier happy, right: he can now outsource his file and he can check that the cloud is storing it; communication is very short, his storage is very short. The verifier is fine, exactly. But in outsourced storage it can actually happen that you want to modify the data you outsourced, right? So then what do you do? Because if Bob wants to modify the data, he does not have the data anymore. In order to continue with this protocol he should recompute the commitment to the new version of the data, but he cannot, because he doesn't have the vector V. So, this application and similar ones motivate the notion of updatable vector commitments. This is something that we introduced already in the CF13 paper. An updatable vector commitment is a vector commitment, so it has the same algorithms as the ones I defined, that however admits the following additional algorithms. First, it should have an update-commitment algorithm that is given the public parameters, the commitment, a position i, the old and the new value at this position, so v_i is the value that is supposed to be committed at position i and v′_i is the value that we would like to write in that position, and possibly the opening proof for position i. This update-commitment algorithm produces a new commitment C′, and C′ should essentially be a commitment to the vector where at position i you now have v′_i. And then you also want an update-opening algorithm. This is an algorithm that can update an opening proof at position j, given an update at position i. Now, j and i could be the same, but they may not be. The idea is that the update-opening algorithm should produce an updated opening proof π′_j, and this should now be valid for C′.
The problem, if you see it, is that if C changes and you had a pre-existing opening proof for the old commitment, this opening proof would no longer be valid for the new commitment. So this update-opening algorithm essentially allows you to update the opening proof. Notice that what makes these algorithms interesting and non-trivial is the fact that they can be executed without knowing the entire vector, right: you don't have to compute things from scratch, you can just use this very local information about the updated position i in order to produce the updated values. And based on whether this π value is needed or not, we can actually recognize two updatability models. If π is not needed, we simply say that the vector commitment is updatable; this is actually the notion that we introduced in our paper in 2013. Some other more recent works also consider the case in which, when there is an update at position i and you want to update the commitment or another proof, you may need to know the opening, right. In this case, we say that the vector commitment is hint-updatable; at least this is a name that I came up with, some other people use a different name for it. So, in some sense, the first model is a bit more powerful, because it requires less information and possibly less interaction. So, it's very easy to see that with updatable vector commitments we can now support updates in outsourced storage: when Bob wants to change a value at position i, he would just run the update-commitment algorithm locally to obtain the new commitment C′, and then he would simply tell the cloud what the update information is, and the cloud would also run the update-commitment algorithm on its side.
Okay, in this application updating openings is maybe not crucial, but I want to keep it simple; there are other applications, for example decentralized storage, in which other users may be holding opening proofs for different positions, and they may be interested in updating their openings. Okay, so that was about updatable vector commitments. Again, let's look at this outsourced storage application, and we're going to ask another question: whether the protocol that we currently have is really optimal. In fact, right now we have seen that the communication complexity for each check is of the order of K times the size of a proof. This is due to the fact that we have to send, in addition to the data, one proof for every position. One interesting question is whether it's really needed to send all these proofs, and whether we can avoid sending all of them. So, this idea motivates the notion of vector commitments with subvector openings. This is a notion that was introduced by Boneh, Bünz, and Fisch in 2019, and also by Lai and Malavolta in the same year, motivated by slightly different applications but with a similar motivation. The idea of subvector openings is that instead of opening the vector at single positions, you can directly open the vector at a set of m positions. So now the input of the opening algorithm is a set of indices i_1, ..., i_m. The output is a proof π_I, capital I for this set, and analogously the verification algorithm now works with the subvector v_I and with the set of indices I. But of course what makes this extended notion useful is the fact that the size of the proof is independent not only of the length of the vector, but also of the size of the subset of positions you are opening. So it's still essentially fixed.
Now, if we have vector commitments with subvector openings in our hands, then it's quite easy to see how to use them to optimize the communication complexity of this proof of retrievability protocol. Because now the cloud, instead of coming up with a proof for every position that it is requested to open, can simply create this batched proof for all the K positions, and what it needs to send is something like K times the size of a block, plus one proof, right, the length of one proof. What is interesting is that the security parameter now only appears inside the length of the proof, and not, say, multiplied by the size of the blocks. Okay, so that was the motivation for some basic properties, actually a little bit more than basic properties, of vector commitments, such as updatability and subvector openings. Now I want to discuss another property, which is called aggregation. To understand why we need aggregation, let's again look at the proof of retrievability protocol. In particular, let's look at efficiency. So far, we have worked to make our verifier's life very easy, right: short storage, short communication; we kind of optimized it to be K blocks plus the size of a single proof. But how is life for the prover? I mean, we always talk about succinctness and we make our verifiers happy, but the prover is also concerned with efficiency. And unfortunately, he has a tough life, especially if we look at the vector commitment constructions that have constant-size openings. There is a huge problem there. The problem is that when the cloud, in this application, has to execute the opening algorithm, the running time of this opening algorithm is strictly linear in the length of the vector, and more critically, it requires a linear number of public-key operations, like group exponentiations, for example.
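In symbols, writing |b| for the block size and |π| for a proof size, the saving from subvector openings in the protocol above is:

```latex
\underbrace{K\cdot\big(|b| + |\pi|\big)}_{\text{one proof per position}}
\quad\longrightarrow\quad
\underbrace{K\cdot|b| + |\pi|}_{\text{one batched subvector proof}}
```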
And this is very expensive in practice if you have to run it on very large data. So, motivated by this problem, we came up with this idea of aggregating proofs, in particular to use aggregation to solve this problem. First of all, what is aggregation? Aggregation is a property that says that, as soon as you have one opening for each position, so you have a vector and you have the openings π_1, π_2, ..., π_n for every element of the vector, what you can do is publicly take these opening proofs and aggregate them: you can create a subvector opening proof, for example for positions one and two, by only knowing the opening proofs for position one and position two, and also the opening values. In particular, you get something more general: it's not only about merging proofs for two single positions, but also about merging subvector openings for different subsets I and J. The key properties are that this can be computed without knowing the rest of the vector, so you only know these local openings, and that the size of the opening proofs that you start from and that you obtain from aggregation is still fixed. Okay, so what can we do with aggregation? In this paper that we published at ASIACRYPT 2020, together with Matteo Campanelli, Dimitris Kolonelos, Nicola Greco, and Luca Nizzardo, we came up with this idea of using it to speed up the opening. The idea is that the prover could execute a precomputation phase in which it computes an opening proof for every position and stores all these openings. Now, whenever it is asked to create an opening for, for example, a random subset of positions, say positions two and i in this example, it would use the aggregation algorithm to aggregate the proofs, and then send the aggregated proof to the verifier.
The nice thing is that, after the precomputation, the online opening cost can be constant, or, let's say, linear in the number of positions that it is asked to open. Actually, we also have to consider the cost of the precomputation, and here the observation is that if we can manage to do the complete precomputation of all these openings in, for example, quasilinear time, like n log n, then we can say that the amortized cost per opening of the fast opening is logarithmic. What I am maybe not stressing enough is that this requires auxiliary storage, right, because in order to produce these fast openings the prover needs these n proofs to be stored. Now, there are two questions that we can ask at this point. First, is it guaranteed that we can compute these opening proofs in quasilinear time? And that's not trivial, because if you have to compute them using the traditional opening algorithm, which requires O(n) time each, then this might really require O(n²), which would not give us any amortized saving. The second question is whether we can do better in terms of auxiliary storage, because storing n proofs can be a lot, especially in constructions where each proof is, say, a hidden-order group element. To address these two questions, in this paper we actually went further and introduced the notion of incremental aggregation. Incremental aggregation is what the name says: essentially, it's the idea that you can keep aggregating things an unbounded number of times. You can start from proofs for single elements, you can aggregate pairs, and you can then keep aggregating as long as you want. At the same time, we also want this incremental aggregation to work in the opposite sense, in the sense that we should be able to disaggregate proofs.
So if we have a proof for, for example, positions two, three, five, six, eight, we may want to extract a subvector opening for a subset of those positions. And again, we want to be able to do this an unbounded number of times, down to reaching proofs for single values. Okay, so with incremental aggregation, what can we do? We improve this application of speeding up the online opening time. How can we do this? The idea is that in the precomputation phase, instead of computing one proof per element, we can compute a proof for every chunk of elements: we divide the vector into chunks instead of single positions. For example, we can precompute a proof for every pair of elements, one for positions one and two, one for three and four, one for five and six, and so on and so forth. Then, when you are requested to create the opening for some positions, say positions two and i, you would first use disaggregation to extract the relevant single positions that you need, so you want to extract π_2 from π_{1,2} and π_i from π_{i,i+1}, and then you use aggregation in order to put them together and send a single proof. This is a case where you would need to use aggregation and disaggregation in an interchangeable way. The nice thing is that now the size of the storage that you need is something like n/B proofs, for some integer B that you can flexibly choose, which tells you how large the chunks are, and the online opening time depends on B. So now you can achieve a lot of tradeoffs, according to how you want to choose B, depending on whether you want to minimize storage or you prefer faster opening time. But there is something that I have not discussed so far, which is, again, that the online opening cost is profitable only if we can amortize the precomputation. And what's the guarantee that we can precompute the proofs in n log n time?
And a very interesting result that we came up with is that if you have incremental aggregation, this property alone allows us to argue that you can obtain a quasilinear precomputation. I will not enter into the details of this result, but the idea is that if this aggregation is actually incremental, you can compute these proofs in a divide-and-conquer manner, in a tree fashion. And this allows you to obtain an n log n precomputation time. So, if you want to ask a question, it's a good moment. Yeah, there is a question in the chat: don't you need to compute the π_{i,i+1} proof in O(n) for the disaggregation? Could you clarify the question? Okay, so the question is about disaggregation: for the disaggregation, what's the computation effort? The idea, and it's actually what is in the statement of the theorem, is that the running time of disaggregation is strictly linear in the size of the subvector opening you are starting to disaggregate: for a subvector opening for a set of size K, the disaggregate algorithm should take order of K time. So, π_{i,i+1} is an opening for a set of size two. But maybe I can also give more intuition about this theorem. The idea is that you can compute, once, a subvector opening for the entire vector, right, it's a legitimate subvector opening, sort of a π_{1..n}, and this costs you linear time; you compute this one. Then you can disaggregate it in a divide-and-conquer manner, by essentially extracting all the single values from this large subvector opening. That's the idea of this quasilinear precomputation.
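Here is a small cost model of this divide-and-conquer precomputation (my own sketch; `disaggregate_cost` abstracts the theorem's assumption that splitting a size-k subvector proof costs O(k)). Each level of the recursion tree does about n total work, over log n levels, giving the n log n bound:

```python
def precompute_all_openings(n, disaggregate_cost=lambda k: k):
    """Model the divide-and-conquer precomputation: start from one subvector
    opening for all n positions, then recursively split it in two until we
    reach single-position proofs.  Returns the total modeled cost, assuming
    disaggregating a size-k proof costs O(k)."""
    def split(k):
        if k <= 1:
            return 0  # a single-position proof needs no further splitting
        cost = disaggregate_cost(k)      # split the size-k proof in two
        left, right = k // 2, k - k // 2
        return cost + split(left) + split(right)
    initial = n                          # computing the full opening: O(n)
    return initial + split(n)
```

For n a power of two this evaluates to exactly n(log2(n) + 1), matching the n log n claim.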
The idea is that you start from something that costs O(n) to disaggregate, then the next step costs two times O(n/2), then four times O(n/4), and so on. If you're interested, I really advise you to look at our paper. Okay, so now it's time to look at the state of the art. We have looked at the applications, so now I want to tell you what we know about realizations of vector commitments with all these properties. So we are going to look at the planets of vector commitments; in our case, the planets are different worlds where different assumptions hold. And I'll tell you a bit about how we can classify the existing constructions. Okay, I have a disclaimer, which is that in my state of the art I'm only considering vector commitments with constant-size opening proofs, and not with logarithmic openings, except for lattice-based ones and Merkle trees, because I think it's interesting to mention them. So, we have the planet of collision-resistant hash functions, and we know that Merkle trees are there, and we can see them as vector commitments with logarithmic-size openings. Then, I already mentioned that even before we formalized this notion, there were two constructions. One is the one by Libert and Yung, the trapdoor mercurial commitment which implicitly was a vector commitment. The other one is actually the by-now very celebrated paper of the Kate, Zaverucha, and Goldberg polynomial commitment, a polynomial commitment based on pairings. Using polynomial commitments you can easily construct vector commitments, by essentially creating the polynomial that interpolates the vector. Both these schemes existed prior to the formalization of this notion and would yield constant-size openings, and they were both based on q-type assumptions over pairings.
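The vector-to-polynomial encoding just mentioned is simple to spell out. Below is my own sketch over a toy prime field: we build the polynomial p with p(i) = v_i by Lagrange interpolation; in a KZG-style scheme one would then commit to this polynomial and open position i with an evaluation proof that p(i) = v_i.

```python
Q = 2**61 - 1  # toy prime field modulus (illustrative, not a real parameter)

def _mul_linear(poly, a, q):
    # multiply a polynomial (coefficient list, low degree first) by (x - a)
    out = [0] * (len(poly) + 1)
    for k, c in enumerate(poly):
        out[k] = (out[k] - a * c) % q
        out[k + 1] = (out[k + 1] + c) % q
    return out

def interpolate(values, q=Q):
    """Coefficients of the polynomial with p(i) = values[i] for i = 0..n-1."""
    n = len(values)
    coeffs = [0] * n
    for i, vi in enumerate(values):
        # build the i-th Lagrange basis polynomial and its denominator
        basis, denom = [1], 1
        for j in range(n):
            if j != i:
                basis = _mul_linear(basis, j, q)
                denom = denom * (i - j) % q
        scale = vi * pow(denom, q - 2, q) % q  # Fermat inverse of denominator
        for k, c in enumerate(basis):
            coeffs[k] = (coeffs[k] + scale * c) % q
    return coeffs

def evaluate(coeffs, x, q=Q):
    acc = 0
    for c in reversed(coeffs):  # Horner's rule
        acc = (acc * x + c) % q
    return acc
```

Conciseness then comes for free from the polynomial commitment: one group element commits to all coefficients, and one evaluation proof opens any position.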
So with our work, besides the formalization, we actually came up with the first constructions based on standard assumptions: one based on the RSA assumption and the other based on CDH over pairing groups. This shows that we also know vector commitments over the planet of groups of unknown order. Okay, in 2013 there is also another paper, by Papamanthou, Shi, Tamassia, and Yi (apologies to the last author for the pronunciation), that proposes a vector commitment based on lattices, under the SIS assumption. This commitment has polylogarithmic-size openings, but I think it's interesting to mention it because one interesting property, compared to Merkle trees, is that it's fully updatable. So it's a non-trivial construction, because otherwise we could of course instantiate Merkle trees with lattice-based hash functions. Okay, then for a number of years there were not many results on vector commitments, but in 2019 Lai and Malavolta, and also Boneh, Bünz, and Fisch, came up with this idea of vector commitments with subvector openings. In particular, Lai and Malavolta proposed a construction that essentially extends our CF13 scheme based on RSA, and also another construction that extends our CDH-based scheme, with the interesting property that you can obtain subvector openings. I'm drawing these borders to show that they are essentially in the same family; in particular, all the schemes we are going to see fall within a small number of families, so they're not very diverse. So that's subvector openings. In 2020 we published a paper where we proposed two constructions, both based on groups of unknown order, and both with this incremental aggregation property. One of them, however, is only hint-updatable, and the other one is not updatable at all.
Again, what I dub the number-two scheme in our paper essentially extends the Lai–Malavolta construction to obtain incremental aggregation but also constant-size public parameters, which was an open problem. The first scheme instead is in another family that we call accumulator-based constructions, or RSA-accumulator-based constructions, and it is somehow similar to the BBF19 scheme, even if not exactly; it shares some similarity with it. Okay, in the same year there were other works extending these pairing-based constructions to achieve aggregation and subvector openings. There is a paper by Tomescu et al., published at SCN 2020, that shows how to obtain aggregation for KZG polynomial commitments, and another paper by Gorbunov et al., also known as Pointproofs, that shows how to endow the Libert–Yung vector commitment with subvector openings and also one-hop aggregation. A more recent paper is the one by Agrawal et al. from 2020, also known as KVaC; it is a key-value map commitment that is updatable and has subvector openings, and it also has only one-hop aggregation. What is interesting there is that it's compatible with constant-size parameters, so in a sense it improves over our second construction. Another interesting thing is that they obtain this property by using together the two approaches, the one based on CF13 and the accumulator-based one. Probably the last work in this research line is the very recent work by Peikert et al. that showed a lattice-based vector commitment. This has logarithmic-size openings and is based on the SIS assumption, and what is interesting about this construction is that it's updatable.
Okay, so I could be more precise about the state of the art; I'm going to skip this huge table for now, but it's interesting also to see the different trade-offs of parameters. One thing I want to say about this table, also to motivate the last five minutes of my talk, concerns the idea of constructing vector commitments in groups of unknown order. Right now those are the vector commitments that achieve the best combination of properties: constant-size parameters, constant-size openings, constant-size commitments, and also incremental aggregation and subvector openings. So I think this construction turned out to be quite powerful. "Before you go into the construction, we have a question, and maybe your table answers it: do all these constructions require a trusted setup, such as a CRS or a group generated by a trusted party? Is a trusted setup necessary, especially in the post-quantum setting, if one wishes for constant-size openings?" Okay, it's a very good question, also because it's something I was not mentioning in my slides. Essentially, most of these constructions require a trusted setup. The trusted setup can be avoided, that is, made transparent (transparent meaning the parameters are essentially a random string), in the constructions based on unknown-order groups if they are instantiated using class groups. So the planet of unknown-order groups can yield constructions with a transparent setup. And of course Merkle trees, and the construction by Papamanthou et al. from 2013, can be instantiated with a transparent setup, because I think the public parameters can be a random matrix. Whereas the lattice-based scheme by Peikert et al. requires a trusted setup. Does this answer the question? Yes, and there is another question.
"Where do FRI-based polynomial commitments fit in here?" Right, that's a good question, because that's something I'm not considering in this table, and I apologize, because the space of constructions is very vast, very broad. There is an intuition behind the question: any polynomial commitment is a vector commitment, because you can interpolate the vector and then use evaluation opening proofs to create an opening proof. Now, we know a lot of polynomial commitments, but some of them require random oracles, and I'm excluding those from here. In particular, one reason I'm excluding them from this table is that they don't have constant-size openings. Another reason is that, in general, I'm excluding constructions that you could obtain based on non-falsifiable assumptions by using SNARKs, because using SNARKs you can also construct these. "There is a related question: we know that from any polynomial commitment we can get a vector commitment. Is it known whether vector commitments also imply polynomial commitments?" Okay, so there are some vector commitments that have so-called linear, or inner-product, openings, that is, functional commitments for linear functions, and those yield polynomial commitments. I don't know about a direct construction from any vector commitment, even though, thinking about it, maybe the right protocol could be seen as something like that, but I can't put my finger on it. Thank you. Maybe I can use a few more minutes to conclude my talk, because I will show this table again in my last slide to discuss open problems. For the sake of time I'll go very fast on this part, because I think people are more interested in the state of the art and future work.
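The folklore reduction mentioned in the answer can be sketched numerically. Assuming evaluation points 1..n over a prime field (the field and the points are illustrative choices), committing to a vector means committing to the unique polynomial f of degree less than n with f(j) = v_j, and opening position i becomes an evaluation proof that f(i) = v_i. The sketch below shows only the interpolation and evaluation step, not the commitment itself.

```python
# Lagrange interpolation over a prime field: the vector (v_1, ..., v_n)
# is encoded as the degree < n polynomial f with f(j) = v_j.
P = 2**61 - 1  # a Mersenne prime, chosen here just for illustration

def interpolate_eval(vs, x):
    """Evaluate at x the degree < n polynomial with f(j) = vs[j-1], j = 1..n."""
    n = len(vs)
    total = 0
    for j in range(1, n + 1):
        # Lagrange basis L_j(x) = prod_{k != j} (x - k) / (j - k)  (mod P)
        num, den = 1, 1
        for k in range(1, n + 1):
            if k != j:
                num = num * (x - k) % P
                den = den * (j - k) % P
        total = (total + vs[j - 1] * num * pow(den, -1, P)) % P
    return total

v = [10, 20, 30, 40]
# f agrees with the vector at every position j = 1..n:
assert all(interpolate_eval(v, j) == v[j - 1] for j in range(1, 5))
print(interpolate_eval(v, 2))  # 20
```

With a KZG-style polynomial commitment, the opening proof at i would then be the usual witness for the evaluation f(i) = v_i; the interpolation above is all that is needed to bridge the two notions.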
So, very fast, what's the idea of constructing vector commitments in an unknown-order group? The idea is to create the commitment to the vector as a sort of Pedersen commitment with bases S_j. But instead of taking the S_j to be random group elements, we create each S_j as g, some fixed generator, raised to the power of a product of primes. Sorry, I didn't say this first: you have to select n primes e_1, ..., e_n, which have to be sufficiently large, like more than 2^n, and then you associate each prime e_j with position j; then S_j is g to the power of the product of all the primes except the j-th one. This means that when you want to commit to the vector, the product actually becomes a sum in the exponent; it's a sort of inner product in the exponent. And you can see that each column here should be seen as the product of all these primes, where in the j-th column the j-th prime is missing. Then you commit, and this encoding is what makes the opening work. When you want to open at position i, notice that if you take all the exponents, all the E_j except the i-th one, then e_i is a common factor of all of them. So if you take the linear combination of all these E_1, E_2, ..., E_n except E_i (where i is the position you want to open), then the prime e_i is a divisor of the sum. And the basic idea of opening is to create a proof that e_i is a divisor of this sum. To make such a proof, you essentially show that if you take C and remove S_i^{v_i}, you can compute an e_i-th root of this value. I'm trying to go fast because we are late, but that's the basic idea. And now the proof is a single group element, and the same holds for the commitment.
The idea of verification, very fast, is that if this is the exponent of the opening element, raising it to the power e_i kind of places e_i back in each column, and then we can complete with the missing element of the inner product; the result is supposed to be the commitment. Okay, I'll skip the rest for the sake of time. I want to conclude by discussing some open problems and the state of the art. The state of the art is more or less what I showed with my picture of the families, the planets of vector commitments. For open problems, it's interesting to also look at this table where we can see more of the parameters. Some interesting problems are about assumptions. Until recently we did not have many lattice-based constructions, and still today all the lattice-based constructions require both commitments and opening proofs to be logarithmic or polylogarithmic in the length of the vector. In my table I'm showing only one instantiation of the lattice-based vector commitment; they actually have a framework in which you can play with the parameters, and this is the instantiation where you use binary decomposition. But I think it's a very intriguing open problem whether it's possible to achieve constant-size openings with lattices, if it's possible at all. Another interesting problem is whether we could construct vector commitments from groups of prime order without pairings. We know that it's possible if you use random oracles and if you go with logarithmic-size proofs, and Bulletproofs is an example of this, but in the standard model we are not aware of any construction. I think we are going to hear a talk by Alex about how this problem can be attacked in some ways, but for me it's a very intriguing question.
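Putting the commit, open, and verify steps just described together, here is a toy numeric sketch of the construction. All parameters are illustrative: N is a tiny RSA modulus with known factorization and the primes e_j are small, whereas in the real scheme N must have unknown factorization and the e_j must exceed 2^n. Note that the honest prover computes the e_i-th root directly in the exponent, which it can do without knowing the group order because it knows the exponent as an integer.

```python
# Toy unknown-order-group vector commitment:
#   S_j = g^{prod_{k != j} e_k},  C = prod_j S_j^{v_j} = g^{sum_j v_j E(j)}.
from math import prod

N = 29 * 97          # toy RSA modulus; really its factorization is unknown
g = 2                # fixed generator
es = [3, 5, 7, 11]   # one prime e_j per position; really they must exceed 2^n

def E(j):
    """Exponent of the base S_j: the product of all primes except e_j."""
    return prod(es[k] for k in range(len(es)) if k != j)

def commit(v):
    return pow(g, sum(v[j] * E(j) for j in range(len(v))), N)

def open_at(v, i):
    # Exponent of C / S_i^{v_i} is sum_{j != i} v_j * E(j); every term
    # contains e_i as a factor, so the sum is divisible by e_i.
    exp = sum(v[j] * E(j) for j in range(len(v)) if j != i)
    assert exp % es[i] == 0
    return pow(g, exp // es[i], N)   # the e_i-th root: a single group element

def verify(C, i, vi, proof):
    # Raising the proof to e_i "places e_i back", then S_i^{v_i} completes
    # the inner product: accept iff proof^{e_i} * S_i^{v_i} == C.
    return (pow(proof, es[i], N) * pow(pow(g, E(i), N), vi, N)) % N == C

v = [1, 2, 0, 3]
C = commit(v)
print(all(verify(C, i, v[i], open_at(v, i)) for i in range(4)))  # True
```

In the real scheme, position binding follows because producing two valid openings at position i would let you extract an e_i-th root of a fixed group element, contradicting the strong RSA-style assumption in the unknown-order group.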
And then there are more specific questions, like: what do we really need to obtain incremental aggregation? Right now the only schemes that have incremental aggregation are over groups of unknown order. So one question is whether it's possible to obtain incremental aggregation in prime-order groups, and another open problem in prime-order groups is to achieve constant-size public parameters. All the schemes that have constant-size parameters essentially require groups of unknown order or lattices; as you can see, you can achieve polylogarithmic-size parameters when you also go with O(log n) proofs. Another problem that seems interesting to me, and that we have started to look into recently, is whether we could obtain structure-preserving vector commitments. What would be interesting about them is that they would enable an algebraic version of these Verkle trees, where you can commit to commitments without needing to switch between different algebraic structures. However, this tends to be very hard, because there is an impossibility result by Abe, Haralambiev, and Ohkubo from 2012 that says it's impossible to have group-to-group commitments that are shrinking. But there may be ways to bypass it, because they only considered the case where you start from one group and end up in the same group; maybe one could investigate whether it's possible to switch groups, for example. And with that, I'd like to conclude. I hope I gave you a useful survey of vector commitments. There is a lot more to be understood about vector commitments, and there is also an exciting emerging area of functional vector commitments that I think is going to be discussed much more on this vector commitment day; the other speakers will talk about it. So thanks a lot for your attention. Very happy to take questions.
Hey, thank you so much, that was a really nice, complete, and detailed overview of the area. I think now we have the necessary background to understand the new constructions and new results that will be presented today. Any questions or comments? Okay, I think there is nothing new in the chat, so maybe, since we are a little bit behind schedule, we will just move on. Thanks very much again. Yeah, happy to have been here, and happy for this invitation. Thanks.