Hi all, I'm Arancia, and in this video I will summarize the main results of our work on linear-map vector commitments and their practical applications.

Introduced by Catalano and Fiore in 2013, vector commitment schemes are cryptographic primitives where an entity called the prover can commit to a potentially huge vector of data in a succinct way. In a later step, the prover can convince other entities, called verifiers, about certain features of the vector. As introduced in that seminal work, vector commitments aimed at position openings: the prover can convince a verifier that a specific position in the vector stores a specific value.

The more we use vector commitments these days, the more properties we want from them. In particular, our work starts from the concept of linear-map vector commitments, introduced by Lai and Malavolta in 2019. In these schemes, instead of opening individual positions, the prover opens the vector to evaluations of linear maps on the vector.

One observation we make is that if a linear-map vector commitment has homomorphic commitments, proofs, and openings, then it also satisfies unbounded aggregation and updatability. Aggregation is the ability of the prover to take proofs that have already been computed and combine them into a new one that, while having the same size, can convince any verifier of all the statements together. Unbounded aggregation, as we define it, is the ability of the prover to keep this process going, adding new proofs and aggregating in a potentially endless process; it is important here that both prover and verifier can keep track of, or at least access, the aggregation history. Updatability captures the fact that we want the prover to be able to change one or more positions in the vector.
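Before moving on to updatability, here is a toy numerical sketch of why homomorphic openings give aggregation. This is my own illustration over plain modular arithmetic, not the actual pairing-based scheme: two linear-map claims (f1, y1) and (f2, y2) about the same vector can be folded, using a random challenge rho, into the single claim (f1 + rho·f2, y1 + rho·y2), and homomorphic proofs fold the same way, so the proof size stays constant.

```python
import random

P = 2**61 - 1  # toy prime modulus; the real scheme lives in pairing groups

def inner(f, v):
    """Evaluate the linear map f on the vector v: <f, v> mod P."""
    return sum(a * b for a, b in zip(f, v)) % P

def fold(claim1, claim2, rho):
    """Fold two linear-map claims (f, y) into a single claim of the same size."""
    (f1, y1), (f2, y2) = claim1, claim2
    f = [(a + rho * b) % P for a, b in zip(f1, f2)]
    return f, (y1 + rho * y2) % P

v = [3, 1, 4, 1, 5, 9, 2, 6]                      # the committed vector
f1 = [1, 0, 2, 0, 0, 0, 0, 0]                     # first linear map
f2 = [0, 5, 0, 0, 1, 0, 0, 0]                     # second linear map
claim1, claim2 = (f1, inner(f1, v)), (f2, inner(f2, v))

rho = random.randrange(1, P)                      # verifier's random challenge
f, y = fold(claim1, claim2, rho)
assert inner(f, v) == y                           # the folded claim still holds
```

Repeating the fold with fresh challenges gives the unbounded flavour: the aggregated object never grows, as long as both parties keep track of the maps and challenges used along the way.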
Then, instead of having to create a new commitment and a new opening proof for this vector, or new proofs of all the statements the prover has already proven about it, we want the prover to be able to update both things, the commitment and any proof involving that position, in a process that is much cheaper than computing them from scratch.

This is nice because we have two constructions, one in the monomial basis and one in the Lagrange basis, both linear-map vector commitments with all the homomorphic properties we defined before. They are in the pairing-based setting, they have short proofs and fast verification, and, as I said before, they satisfy all the homomorphic properties and thus are updatable and unboundedly aggregatable. The downside is that the prover is very slow: its cost is linear in the size of the vector.

So what we do is give a bunch of results that offer different prover-time and memory-space trade-offs, depending on the vector commitment scheme in use. The rough intuition is that the prover stores the vector in a tree. More precisely, it stores the vector at the root of a tree, and then every node's children consist of two vectors of half the size, containing the odd and the even positions respectively. The prover can choose where to stop this process: it does not have to reach a tree with only single elements in the leaves; it can stop at smaller vectors, which are chunks of the original one. When the prover wants to make a proof of opening, say of an individual position, it first opens the position in the chunk that contains it. This is a process that is linear in the size of the chunk, and not in the size of the vector anymore. It then has to prove that this chunk is indeed the correct subvector of the original vector, but those subvector-opening proofs can be pre-computed. With this, we offer the prover the following trade-offs.
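The tree splitting just described can be sketched in a few lines of Python; the function name and the explicit chunk-size parameter are mine, purely for illustration:

```python
def chunks(v, chunk_size):
    """Recursively split a vector into its even- and odd-indexed halves,
    stopping once the pieces reach chunk_size (the prover may stop before
    reaching single-element leaves)."""
    if len(v) <= chunk_size:
        return [v]
    even, odd = v[0::2], v[1::2]      # each child has half the size
    return chunks(even, chunk_size) + chunks(odd, chunk_size)

# A vector of size m = 8 split down to chunks of size 2:
print(chunks([10, 11, 12, 13, 14, 15, 16, 17], 2))
# → [[10, 14], [12, 16], [11, 15], [13, 17]]
```

An individual opening then costs time linear in the chunk size rather than in the full vector, at the price of pre-computing the subvector proofs that link each chunk back to the root.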
Take m a power of two, and choose ν and κ such that 2^(ν+κ) = m. If you build your tree with depth ν, as a prover you have to pre-compute and store 2^ν proofs that relate every level to the previous one. Then, when you have to perform a proof of opening, you just go through a process that is linear in the size of your small chunks, in this case 2^κ. That's all from me. Thank you very much.