So thanks for the invitation; it's a pleasure to speak about fantastic trees. There were some fantastic trees presented already at Vector Commitment Day, by Chris and in the earlier talk on the dlog-based VCs. My name is Alin; I just joined Aptos Labs as a research scientist. We're building an L1 for smart contracts, and we're looking to hire, so feel free to talk to me if you do research or engineering and want to join us. The goal of this talk is twofold. First of all, we're not going to go in depth on any of the constructions; that would take too much time, and there are too many constructions to talk about. Instead, I just want to paint a landscape of what's out there in terms of trees. I can't even paint a full landscape, since some of the constructions were actually presented today, but I think it'll still be interesting for you. And really, the goal of this talk is to spark curiosity and incite new research in this space, so I hope you'll come out of it inspired to work on tree-based VCs. Okay, so we're going to start by dismissing the VCs with constant-size proofs, as amazing as they are. Then we're going to talk about Merkle trees and dismiss them too, because they have some problems. And then we're going to talk about the remaining families of constructions: lattice-based constructions, polynomial-based constructions, and Verkle trees, which Chris briefly touched upon. So let me start by dismissing constant-size VCs. There's a long line of work on constant-size VCs, started by Catalano, Fiore and Messina, which I believe is the earliest reference, and then Libert-Yung and KZG. They're great: you have a vector of, let's say, eight files; you can commit to them using your favorite constant-size VC scheme, and then you can compute proofs for each individual position. So, for example, you have π8, which convinces anybody that f8 is the eighth file under the commitment c.
And they are nice, but it takes you at least linear time to compute all the proofs: each proof is constant-size, so you have to output n things, meaning you spend at least Ω(n) time. And unfortunately, when something changes, let's say the fourth file in the vector, you not only have to update the commitment, you have to update all of these proofs. Since these proofs are constant-size, they don't intersect at all; they don't share any information. So this really implies an Ω(n) lower bound on the time to update all proofs. So I can basically claim that constant-size VCs are not maintainable because of this, and this is quite inherent. There are some things you can do about it, and we do want to talk about this to some extent, like time-space tradeoffs where you defer the updates and only apply them when you serve a proof. That can be nice for some applications, but not for others where you just want to serve proofs on the fly instead of spending computation time. Okay, so this is why we don't like constant-size VCs and we like tree-based VCs better. What trees? Well, of course, Merkle trees are the first trees we think about: we have a bunch of leaves, we hash them, then we repeat hierarchically (you probably all know this, I'm not going to bore you), and we get a Merkle root. This was seminal work by Ralph Merkle, and everybody loves this stuff. And how do you prove something in a Merkle tree? You give a path of sibling nodes, and the verifier takes the leaf it's verifying, recomputes the hashes along the parents using the siblings, and eventually checks that the hash it got is the same as the root hash of the Merkle tree. That's the ground truth.
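To make the Merkle mechanics concrete, here is a minimal Python sketch (function names like `build_tree` and `prove` are mine, not from any particular library): committing hashes the leaves level by level, a proof is the sibling hash at each level, and the verifier rehashes up the path.

```python
import hashlib

def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build_tree(leaves):
    """Hash the leaves, then hash sibling pairs hierarchically up to the root.
    Returns all levels, from the leaf hashes (level 0) up to [root]."""
    level = [H(leaf) for leaf in leaves]
    levels = [level]
    while len(level) > 1:
        level = [H(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        levels.append(level)
    return levels

def prove(levels, i):
    """A Merkle proof for leaf i is the sibling hash at every level."""
    proof = []
    for level in levels[:-1]:
        proof.append(level[i ^ 1])  # XOR with 1 flips to the sibling index
        i //= 2
    return proof

def verify(root, leaf, i, proof):
    """Recompute hashes up the path using the siblings; compare to the root."""
    h = H(leaf)
    for sibling in proof:
        h = H(h + sibling) if i % 2 == 0 else H(sibling + h)
        i //= 2
    return h == root

files = [bytes([x]) * 8 for x in range(8)]   # a vector of 8 "files"
levels = build_tree(files)
root = levels[-1][0]
pi = prove(levels, 4)                        # log2(8) = 3 sibling hashes
assert verify(root, files[4], 4, pi)
assert not verify(root, b"forged file", 4, pi)
```

This is only the plain hash tree; the VC constructions discussed later replace the hash function while keeping this overall shape.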
So a log-size proof is very small: you're sending 32 bytes per sibling node, so a proof in a tree of a billion things will take you something like 960 bytes. It can be very small. The time to update all proofs is also really nice in Merkle trees, because, for example, if h4 changes, I only need to update the path from h4 up to the root, so I'm only doing logarithmic work. That's why Merkle trees can be better than constant-size VCs: the time to maintain proofs is much, much more efficient; they only do logarithmic work, not linear work. All right, but Merkle trees are not perfect; they present several challenges, and in this talk I want to talk about three of them. Specifically: the lack of stateless updates, which Chris's talk touched upon; the lack of proof aggregation, or I should say efficient proof aggregation; and the lack of database friendliness when you actually want to store Merkle trees on disk, as is the case in many cryptocurrencies like Ethereum, and so on. So first, what do I mean by stateless updates? Chris touched upon this, but imagine a setting where a verifier maintains just the Merkle root: the digest, the commitment of your vector, in other words. And you have people who tell this verifier, "hey, the third file in the vector changed by Δ3, can you please update your root hash?" If you use Merkle trees, this verifier has only the root hash, not the tree, and can do nothing to update it. In fact, in order to update it, the verifier must be given the Merkle proof for the old f3. What the verifier will do is take the new f3, hash up to obtain the new root, and thereby update its digest. Right, so this is not stateless: you need to be given the proof for the modified file in order to update the vector commitment.
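The logarithmic maintenance cost is easy to see in code. A minimal sketch (my own helper names, same SHA-256 tree as before): after a leaf changes, only the hashes on its root path are recomputed, and the result matches a full rebuild.

```python
import hashlib

def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build_tree(leaves):
    level = [H(leaf) for leaf in leaves]
    levels = [level]
    while len(level) > 1:
        level = [H(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        levels.append(level)
    return levels

def update_leaf(levels, i, new_leaf):
    """Rehash only the path from leaf i to the root: O(log n) hashes,
    versus the Omega(n) proof-update cost of a constant-size VC."""
    levels[0][i] = H(new_leaf)
    for d in range(1, len(levels)):
        i //= 2
        levels[d][i] = H(levels[d - 1][2 * i] + levels[d - 1][2 * i + 1])

leaves = [bytes([x]) * 8 for x in range(8)]
levels = build_tree(leaves)
leaves[3] = b"new file"
update_leaf(levels, 3, leaves[3])     # touches only 3 internal hashes here
assert levels[-1][0] == build_tree(leaves)[-1][0]   # same root as full rebuild
```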
Right, and in some applications this is problematic; Anca touched upon this in her talk when talking about stateless cryptocurrencies. So that's one problem with Merkle trees. Another problem is proof aggregation. If you're proving one thing, like f3, this is nice, but if you're proving two things, well, you kind of have to give two proofs. You do save a little: you don't need to send this hash anymore, because you will compute it as you hash up from f5. But not a lot: asymptotically, if you're proving k things, you still have to give k Merkle proofs, so the aggregated proof size is k·log n. And there's not much you can do about this in Merkle trees beyond throwing a SNARK at the problem and aggregating these proofs inside a SNARK, which can be quite inefficient, or might require you to use these new SNARK-friendly hash functions that perhaps need more cryptanalysis before we get comfortable with them. And lastly, this is something I was thinking about early on during my PhD but never got to work on: Merkle trees are not very easy to store on disk, it turns out. Imagine I have a Merkle tree of n leaves and I want to store it on disk. At minimum, I have to store the n − 1 internal nodes of the Merkle tree on disk, so I have a linear overhead in terms of disk. And it's a bit worse than that, because a database has its own overhead: usually you put this stuff in a key-value store, where the key is the position of the node and the value is the hash of the node, and databases use their own trees internally, like LSM trees or B-trees or whatever, and those have their own overhead. So you're storing a tree inside of a tree, and you're doing a lot of reads whenever you want to read a single hash. There's a lot of write and read amplification.
This is really a problem in practice, and if you don't believe me, just ask anybody who's implementing a cryptocurrency; they'll tell you that writing and reading Merkle trees from disk becomes a big slowdown during validation. In fact, it's even worse than that, because often you want a persistent Merkle tree where you can keep track of versions. For example, you want to store both the path for h8 and the path for an updated h8 using path copying, and that implies a logarithmic storage overhead per update. So it's a bit worse than that. Okay, so now that I've told you about all of these things that Merkle trees are not so good at, I want to go over a few constructions that address them. Of course, there's no single construction that addresses all of them; that's a bit of future work, and I'll touch upon it at the end of the talk. And I want to start with lattice-based VCs. As Chris pointed out, the first work on lattice-based VCs is by Papamanthou et al. The key ingredient in their construction is an algebraic hash function. Normally we use SHA-256: we input two hashes into SHA and we get a SHA hash out. In the lattice-based VC by Papamanthou et al., the hash function uses two matrices L and R and outputs a linear combination of the two inputs. Now, this is an extreme oversimplification. I'm kind of pretending that you can take the output of the hash function and feed it back in, but in this construction you can't actually do that: the codomain of the hash function is not the same as the domain, and there's a layer of complication that I'm going to completely ignore due to lack of time. But I still think the key ideas will come through, as we'll see in a second. Again, the goal of this talk is to inspire you and stir up your curiosity so that you actually read the paper afterwards.
So, how does the lattice-based construction by Papamanthou et al. work? You have your leaves in the tree and you apply the hash function to them hierarchically: you hash this last level, you get a new level, and you continue on that level too. So, for example, you take these two children and compute L times the left child plus R times the right child, and you can distribute the matrix product. You do the same thing here on these nodes, and finally at the root you again take L times the left child plus R times the right child and distribute the matrix product, so it kind of looks like this. And it's pretty nice, because the root is a linear combination of the leaves, as you can see. But of course, because of the oversimplification I made, you might notice that, for example, h6 and h7 would seem to collapse together into the same product, since these matrix products look identical. In the actual construction that doesn't happen; it's just an artifact of my oversimplification, so let's pretend it doesn't, and please read the paper if you want to learn how this actually works. At a high level, the idea is that the root digest is just a linear combination of the leaves. And this is very nice, because now, if the second leaf changes by Δ2, you can really easily update the root: you add L·L·R·Δ2 to this part of the digest. And even better, if you have a proof that you want to update, you can do that on the proof too: you can update the proof here with L·R·Δ2, and the proof here with R·Δ2. So this is what we call a homomorphic VC.
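Here is a deliberately toy Python sketch of this idea, using tiny integer matrices mod a small prime instead of real lattice parameters, and ignoring the domain/codomain subtlety flagged above. It only illustrates the one structural point: the digest is linear in the leaves, so both the Δ-update rule and the homomorphism fall out.

```python
q = 97   # toy modulus; a real scheme uses carefully chosen lattice parameters

# two fixed "hash" matrices L and R over Z_q (values chosen arbitrarily)
L = [[3, 1], [4, 1]]
R = [[5, 9], [2, 6]]

def matvec(M, v):
    return [(M[0][0] * v[0] + M[0][1] * v[1]) % q,
            (M[1][0] * v[0] + M[1][1] * v[1]) % q]

def vadd(a, b):
    return [(a[0] + b[0]) % q, (a[1] + b[1]) % q]

def hash_node(left, right):
    """h(a, b) = L*a + R*b (mod q): a linear 'hash' of two children."""
    return vadd(matvec(L, left), matvec(R, right))

def root(leaves):
    """Digest of a 4-leaf tree, hashing hierarchically."""
    return hash_node(hash_node(leaves[0], leaves[1]),
                     hash_node(leaves[2], leaves[3]))

h = [[7, 7], [1, 2], [3, 4], [5, 6]]
d = root(h)

# if leaf 2 changes by delta, the digest changes by L*R*delta:
# the product of the matrices along the root-to-leaf path
delta = [10, 20]
h2_new = vadd(h[1], delta)
assert root([h[0], h2_new, h[2], h[3]]) == vadd(d, matvec(L, matvec(R, delta)))

# homomorphism: the tree of u + v is the sum of the trees of u and v
u = [[1, 0], [0, 1], [2, 2], [3, 3]]
assert vadd(root(h), root(u)) == root([vadd(a, b) for a, b in zip(h, u)])
```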
I'm not showing it on the slide, but what I mean by homomorphic is that the property is literally a homomorphism: if you had two vectors and you built a tree over each of them, you could add up the trees and you would get a tree for the sum of the vectors. That's what this construction supports, which is really neat and actually has a lot of applications that have not been explored; I encourage all of you to think about that. This is kind of all I want to say about lattice-based VCs; of course, there's much more to say. So let's go over the advantages and disadvantages of this construction. One advantage is that it's homomorphic, but a subtlety here is that it's homomorphic only for a bounded number of updates: you cannot apply too many updates to the tree, only a number that is polynomial in some security parameter or something like that. It has a public setup, which is really nice: anybody can just pick the hash function, just like in a Merkle tree. It's somewhat SNARK-friendly; there's some previous work that explores using SNARKs on this PSTY13 construction, so unlike Merkle trees, for example. And it's post-quantum secure. As for disadvantages, the proof size can be quite large: one paper estimates 70 kilobytes using, I want to say, polynomial-ring lattices, which are supposed to be more efficient. And it's slow in practice too: updating the digest, verifying a proof, or updating a proof all take hundreds of milliseconds. Nonetheless, what was really cool about the PSTY13 work is that it showed for the first time that you can have a homomorphic vector commitment, or a homomorphic Merkle tree. That's the key thing you should take from this, and you should think about ways to leverage this homomorphism.
And as open problems here, of course, improving the efficiency is crucial; getting sub-vector proofs, including proof aggregation like Chris mentioned, is also extremely interesting. And the subsequent work by Chris Peikert et al. that was just presented shows how to get smaller proofs using a trusted setup, and also shows how to get a homomorphic Verkle tree from lattices, so I encourage you to check out that paper as well. I'll have references at the end of the talk so you can see what these papers actually are. Okay, so that was lattice-based VCs. Now I want to talk about polynomial-based VCs. Again, lattice-based VCs kind of address the stateless updates; with polynomial-based VCs, we're hoping to address proof aggregation as well as stateless updates. Here I'm going to talk about two lines of work: the Edrax line of work [CPZ18] and the Hyperproofs line of work [SCP+22]. The key ingredient in both is a multilinear polynomial commitment. I will not spend time talking about what that is, but I encourage you to check out the references; instead, I'm going to give you an idea of how the Edrax and Hyperproofs tree works. Hyperproofs followed Edrax and was inspired by it; in fact, Hyperproofs added the tree construction that I'm about to present to you. Edrax didn't have it. So what's the idea? Let's say you have a vector of eight elements, h1 through h8. You can pick a multivariate polynomial of three variables that encodes the vector as follows: it takes the index i, splits it into bits (the index ranges from one to eight, so it has three bits), and using the bits as the variables of the polynomial, you get the value of the vector back. We all know how to do this — well, maybe not all of us, but it's not complicated.
So let's say we have this polynomial. Then you can actually build a tree, and unlike Merkle trees, you build this tree from the root down to the bottom, rather than from the bottom up, as follows. You start with this multivariate polynomial in the root, and you do a division: you divide it by x3 − 0, where x3 is the first variable of the polynomial. You get a quotient and a remainder, as you always do when you divide a polynomial by something. You put the quotient in the root, in the parent, and commit to it; and the remainder, you remember. Now, on the right side, you divide by something slightly different: x3 − 1. You also get a quotient and a remainder. The quotient will actually be the same as the quotient you got here, and you've already committed to it in the root, the reason being that these divisors just differ by a constant. But the remainder will be different. So you have a remainder here and a remainder here, and you are ready to recurse. You take the remainder from this node and divide it by x2 − 0 and x2 − 1. That again gives you a quotient that is the same on both sides; you commit to it here using the multilinear commitment scheme, you get a remainder, and you recurse. Same here: you had a remainder, you recurse on it, you get quotients to put here, and you do this until you exhaust your variables. So here I had x3, here I have x2, here I have x1. Once I've handled x1, I'll actually be done, and the remainders I get will be the evaluations themselves; they will be the vector elements. Why is that? Well, intuitively, you're computing f mod (x3 − 0) mod (x2 − 0) mod (x1 − 0), which you can prove is just f evaluated at (0, 0, 0), which is just h1 as defined here. And the same holds for this one and all the others.
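A small Python sketch of this root-to-leaf division process (my own encoding; no commitments, just the polynomial arithmetic). Multilinear polynomials are dicts from exponent tuples to coefficients; dividing f by (x_j − c) splits it as f = x_j·A + B, with quotient A and remainder c·A + B; and walking one root-to-leaf path leaves exactly the vector element as the final remainder.

```python
# multilinear polynomials as {exponent tuple: coefficient}; index j = 0, 1, 2
# corresponds to the variables x3, x2, x1 in the talk's notation

def mle(values):
    """Multilinear extension of a length-8 vector: f(b3, b2, b1) = values[i]
    where (b3, b2, b1) are the bits of the index i."""
    f = {}
    for i, v in enumerate(values):
        bits = [(i >> k) & 1 for k in (2, 1, 0)]          # b3, b2, b1
        term = {(0, 0, 0): v}
        for j, b in enumerate(bits):   # multiply by x_j if b=1, else (1 - x_j)
            new = {}
            for e, c in term.items():
                up = list(e); up[j] += 1; up = tuple(up)
                if b:
                    new[up] = new.get(up, 0) + c
                else:
                    new[e] = new.get(e, 0) + c
                    new[up] = new.get(up, 0) - c
            term = new
        for e, c in term.items():
            f[e] = f.get(e, 0) + c
    return f

def divmod_linear(f, j, c):
    """Divide multilinear f by (x_j - c).  Writing f = x_j*A + B with A, B
    free of x_j, we have f = A*(x_j - c) + (c*A + B):
    quotient A, remainder c*A + B."""
    A, B = {}, {}
    for e, coef in f.items():
        if e[j]:
            down = list(e); down[j] -= 1; down = tuple(down)
            A[down] = A.get(down, 0) + coef
        else:
            B[e] = B.get(e, 0) + coef
    r = dict(B)
    for e, coef in A.items():
        r[e] = r.get(e, 0) + c * coef
    return A, r

values = [10, 20, 30, 40, 50, 60, 70, 80]
f = mle(values)

# the quotient is the same for both children, since A doesn't depend on c
assert divmod_linear(f, 0, 0)[0] == divmod_linear(f, 0, 1)[0]

# walk the root-to-leaf path for index i = 6 (bits b3, b2, b1 = 1, 1, 0):
# divide by (x3 - 1), then (x2 - 1), then (x1 - 0), recursing on remainders
r = f
for j, b in [(0, 1), (1, 1), (2, 0)]:
    _, r = divmod_linear(r, j, b)
assert r == {(0, 0, 0): values[6]}   # final remainder is the vector element
```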
So, this is all I want to say about this construction. The idea is that you divide and conquer: repeated divisions, recursing on the remainders, committing to the quotients, which go inside the nodes. And a proof, unlike in a Merkle tree, consists of the quotient commitments along the path to the leaf you're proving. For h1, for example, the proof would be the quotient commitment here, here, and here. In a Merkle tree, if you recall, you gave siblings, so this is quite different in some sense. And another thing that's not visible here is that this construction is also homomorphic, like the lattice-based one, and that's really what I want to touch upon next. The advantages of this construction are that it's homomorphic, and actually for an unbounded number of updates, unlike the lattice-based one. It has the same proof size as Merkle trees, the reason being that the quotient commitments you put in these nodes can be just 32 bytes. So it's quite efficient in practice in terms of proof size, and it's efficient in practice in terms of computation; now, it depends who you ask. If you ask a lattice-based researcher, it's definitely much faster than lattices, probably 100 times faster. But if you ask a blockchain developer, it's too slow. And trust me, I work with blockchain developers these days, and they think it's too slow. So what a researcher thinks is reasonably fast in practice doesn't hold for engineers. The other advantage is that it's friendly to inner product arguments, for those of you who know what those are; they were mentioned earlier in the discrete log talk. And in fact, in Hyperproofs we leverage this IPA-friendliness to do proof aggregation, and also to build sub-vector proofs.
And additionally, because of the unbounded homomorphism, you can get a property called unstealability, which is again something we explore in Hyperproofs. One of the things you can do with unstealability is this: when you compute one of these trees, which, again, if you ask blockchain engineers, they'll tell you is very expensive to do, then once you've done all of that expensive computation, you can actually watermark your tree with your identity to make the proofs unstealable. Every time you serve a proof from the tree you computed, it's watermarked with your identity; everybody can see that you computed it, and nobody can swap in their own identity. Right. I believe this kind of unstealability could have interesting applications in cryptocurrencies. Of course, there are disadvantages: it's not post-quantum, unlike the lattice-based construction. It does require a trusted setup, unlike the lattice-based construction, and even worse, it has linear-size public parameters: if you want to commit to n leaves, you need n public parameters generated by this trusted setup. And another disadvantage is that the sub-vector proofs are quite large, 54 kilobytes, so aggregation only starts making sense once you aggregate about 100 proofs or so. I think what's cool about the Edrax line of work is that it added an unbounded homomorphism and showed that the lattice-based construction's ideas can be efficiently instantiated in practice, with a bunch of caveats of course, like the trusted setup and the public parameters. And then what the Hyperproofs line of work did is show, first of all, that you can build this tree: you don't have to compute proofs individually, you can build the tree efficiently. That you can do proof aggregation using IPAs. And then the unstealability.
And here, as open problems, I think it's very interesting to try to reduce the proof size, either by improving the IPA or by coming up with new strategies, and to make the aggregation much, much faster, again whether by improving the IPA or with new strategies. And we have a bunch of in-depth talks on Hyperproofs; I have links to a video presentation in the slides, and I recommend you all watch Shravan's talk on Hyperproofs at USENIX Security in August.

[Audience] Alin, can I? Just out of curiosity, can you give at least an intuition of how you do sub-vector opening proofs in the tree? Because I guess you aggregate, right?

Yeah, I think I can, but I have to go to a different presentation; I don't want to... no, no, it's all good. So the idea is that proof verification looks kind of like this: when you verify a proof, you're just verifying that a bunch of pairing equations hold. There's a pairing here; this is the vector commitment; these are the quotient commitments that I mentioned earlier; v_i is the value you're proving. So you just have a bunch of pairing equations. When you prove a bunch of different positions, using an inner product argument you can prove that all of these equations hold very, very succinctly, without giving all of the quotient commitments: you just prove knowledge of quotient commitments that satisfy these verification equations. Does that make sense?

[Audience] Yeah, that's pretty good.

Yeah, and what we use is the BMM19 inner product argument that generalizes Bulletproofs to pairings. Okay, thanks for the question. Now I want to tell you a little bit about another construction that came in between Edrax and Hyperproofs; this work occurred between the two. It's called an authenticated multipoint evaluation tree (AMT). This is work I did with my high school students.
That was a few years ago, while I was a graduate student. It uses a univariate polynomial commitment, like KZG10, rather than a multivariate one. And again, this work was done after Edrax, and in some sense it was very inspired by the Edrax construction; then Hyperproofs was inspired by both of these works. So the idea of the tree from Hyperproofs that I just presented was really kind of inspired by the tree in this AMT work, just applying it to the Edrax construction, and of course everything is rooted in the lattice-based construction by Papamanthou et al. So what's the idea here? It's very similar to Hyperproofs or Edrax: you encode the vector in a polynomial, but the polynomial is univariate, so you use the index directly to access the vector elements inside the polynomial. And you're also going to do repeated divisions, just dividing by different things. So let's say you have a vector of eight things; you have your interpolated polynomial that encodes the vector. You divide it by an accumulator over all of the positions, where an accumulator is a polynomial that has roots at all of the positions. You get a quotient and a remainder, as you always do when you divide two polynomials; you commit to the quotient in this node using KZG or any univariate polynomial commitment scheme, and you recurse on the remainder. You have one remainder from here; you're going to divide it by the left half of this accumulator and by the right half of this accumulator. You'll get a quotient here and a quotient here; they're going to be different; you commit to them, and you just repeat. The remainder you got from this division, you divide by the left and right halves; the remainder you got from that division, you divide by its left and right halves. You keep doing this until you run out of your accumulator, and in the leaves, you get the evaluations themselves.
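Here is a small Python sketch of this division process over a toy prime field (my own helper names; a real AMT would commit to each quotient with KZG rather than discard it): divide by the full accumulator at the root, recurse on the remainder with the left and right halves of the positions, and the leaf remainders come out as the evaluations, by the polynomial remainder theorem.

```python
p = 101   # toy prime field; a real AMT works over a pairing-friendly field

def polymul(f, g):
    out = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            out[i + j] = (out[i + j] + a * b) % p
    return out

def polydiv(f, g):
    """Division with remainder over F_p (g monic); coeffs low-degree first."""
    f = f[:]
    quo = [0] * max(1, len(f) - len(g) + 1)
    for i in range(len(f) - len(g), -1, -1):
        quo[i] = f[i + len(g) - 1]
        for j, c in enumerate(g):
            f[i + j] = (f[i + j] - quo[i] * c) % p
    return quo, f[:len(g) - 1]

def accumulator(points):
    """A(X) = prod of (X - i) over the points: roots at every position."""
    acc = [1]
    for i in points:
        acc = polymul(acc, [-i % p, 1])
    return acc

def remainder_tree(f, points):
    """Divide by this node's accumulator, keep the remainder, and recurse on
    the left and right halves of the positions.  (The real AMT commits to
    each quotient with KZG; here we just discard it.)"""
    _, r = polydiv(f, accumulator(points))
    if len(points) == 1:
        return [r[0]]    # f mod (X - i) = f(i): polynomial remainder theorem
    mid = len(points) // 2
    return remainder_tree(r, points[:mid]) + remainder_tree(r, points[mid:])

def evaluate(f, x):
    return sum(c * pow(x, k, p) for k, c in enumerate(f)) % p

# a toy degree-7 polynomial encoding a vector of 8 things at positions 1..8
f = [3, 1, 4, 1, 5, 9, 2, 6]
points = list(range(1, 9))
assert remainder_tree(f, points) == [evaluate(f, i) for i in points]
```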
Again, the idea is that you're computing f mod this huge accumulator, which contains (X − 1) as a factor, then f mod this other huge one, and so on, so that in the end you're just doing f mod (X − 1), which by the polynomial remainder theorem is f(1), which we know equals h1. And that's the idea for why this works. Again, it's a different approach than Merkle, because you're starting from the top and going all the way to the bottom. And again, this is homomorphic too. So it actually has the same advantages and disadvantages as Edrax and Hyperproofs, except that this construction, as far as we can tell, has a quadratic-time trusted setup if you want to be able to support homomorphic updates, and it has n log n-size public parameters rather than n-size public parameters. So the open problems here are to fix the setup and the public parameters, and one of the speakers who just gave a talk earlier is, I think, doing some ongoing work on fixing this n²-time trusted setup; I think he already fixed the n log n-size public parameters, though I'm not sure, so feel free to discuss with him. I think both of these are very addressable. And once you address all of these, if you think about AMTs and Edrax, they're kind of dual constructions to one another: one uses multivariate polynomials, the other univariate. But I think the real holy grail for the AMT construction would be to figure out a way to take the AMT proof, which is just the quotient commitments along the path to the leaf you're proving, and compress it into a KZG proof — to actually do that algebraically. Because if you could, then, since KZG proofs are aggregatable, that would imply aggregation for AMTs. And I haven't been able to do this, and, embarrassingly, it could be something very simple; maybe it's just a bunch of arithmetic tricks that I'm not seeing.
So if this is possible, that would be amazing. If it's impossible, then somebody should prove it; I tried proving that it's impossible, but I couldn't do that either. So I think this is a really nice open problem to work on. We have an in-depth blog post on AMTs that I encourage you to read; we have slides, and we have a bunch of video presentations too that you can look at. Good. So that kind of covers polynomial-based VCs, which address stateless updates and proof aggregation, have other properties like unstealability, and can be reasonably efficient in practice, depending on who you ask. Lastly, let's see what we can do about database friendliness. And here I want to point to the Verkle line of work. So, in the literature, there is actually a long history of the idea of a high-arity tree, with arity k for arbitrary k: you can find it as early as the work of Naor in '98, which used it for a stateless digital signature scheme; then in Catalano, Fiore and Messina 2008; in Libert-Yung; in Papamanthou, Tamassia, and Triandopoulos; and then my high school student John Kuszmaul and I, in 2018, started investigating this under the name Verkle trees. So the idea, as Chris Peikert explained earlier, is that you use a higher branching factor in your tree: rather than binary, you go k-ary. This implies a smaller height, and, interestingly, it also implies fewer internal nodes in your tree, which implies a lower overhead on the database you store the tree in. And if, in addition to using a higher branching factor, you commit to the children using a constant-size VC, then you can actually get log_k(n)-size proofs, because you no longer have to give the siblings along the path. So that's very nice, and I'll show you in a second how all of that works. But first, why wouldn't you just use a high-arity Merkle tree rather than a Verkle tree? Why is that a bad idea? Again, suppose you want to prove something like h4.
This is what the proof would look like: you would be giving all of these siblings at every level. You have log_k(n) levels and k − 1 siblings per level, so the proof size is (k − 1)·log_k(n), which is always greater than the log2(n) you would have in a binary tree. So if your goal is to optimize for proof size, high-arity Merkle doesn't make sense. But if your goal is, for example, to be database-efficient, then high-arity Merkle might make sense: if you use, say, k = 16, you get a proof size that's not that much bigger, about four times bigger or so. So it could make sense in practice. Now, if instead you use a high-arity Verkle tree, which I'm depicting here (I obviously can't depict all of the subtrees in the tree, so these are meant to be subtrees), it would look kind of like this, and you would commit to each node's children using a vector commitment scheme, in this fashion. So what would be the advantages of this? Of course, you get log_k(n) depth, as with Merkle. But additionally, because you've used a vector commitment, you can now compute these proofs for every child with respect to its parent. So, for example, this proof argues that y7 is the seventh child of z. There's an efficient API you can use for this, an efficient algorithm in a lot of vector commitment schemes: a "prove-all" call that, in k·log k time, gives you all k proofs for the k children. So this can be very efficient; that's the case in a lot of VCs, at least. And you do this for every node in the tree: you compute proofs for every node, so even here, here, here, and here. As a result, in addition to committing to the tree using the vector commitment scheme, which takes you order-n time, you're doing this extra n·log k computation to get these proofs. So you're doing extra computation in Verkle trees.
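The proof-size arithmetic above can be sanity-checked in a few lines (numbers are illustrative, assuming 2^30 leaves and k = 16):

```python
import math

def kary_merkle_siblings(n, k):
    """Proof size of a k-ary Merkle tree, counted in sibling hashes:
    (k - 1) siblings per level, log_k(n) levels."""
    return (k - 1) * math.log(n, k)

n = 2 ** 30
binary = kary_merkle_siblings(n, 2)    # log2(n) = 30 siblings
kary16 = kary_merkle_siblings(n, 16)   # 15 * 7.5 = 112.5 siblings
verkle = math.log(n, 16)               # ~7.5 commitments on the path, no siblings

assert kary16 > binary                     # high-arity Merkle proofs are bigger,
assert round(kary16 / binary, 2) == 3.75   # about 4x bigger at k = 16,
assert verkle < binary                     # while Verkle paths are log_k(n)-size
```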
And this is all concretely much more expensive too: the vector commitments themselves are much more expensive than computing hash functions. So you're giving up a lot of computation just to get a smaller proof size and database friendliness. And what does a Verkle proof look like? For example, if you want to prove position h4, you no longer have to give the siblings; you just give the path with the proofs. So the verifier can check, for example, that h4 is the fourth child of parent w3 using this proof π_h4 that you gave; then it can check that w3 is the child of y6 at position three using this proof; and the same for y6 with respect to z. And the proof is log_k(n) in size, which is much smaller than the (k − 1)·log_k(n) in a Merkle tree. And then Vitalik Buterin from Ethereum pointed out that these proofs in a Verkle tree can be cross-aggregated into a single proof. So then what you would actually have to give are just these commitments along the path, and one group element for the cross-aggregated proof, in schemes like KZG or Libert-Yung or Pointproofs. So it can be quite efficient in practice: the proof size can be, concretely, log_k(n) plus one group elements. So it's much smaller than Merkle trees. Of course, there's no free lunch: if you want to update a Verkle tree, you're going to be doing a lot of work. You have to update the commitments along the changed path. So here you have to update w3 to w3′ by accounting for the change at position four, in h4; here you have to update y6 by accounting for a change at position three, in w3; and you have to change the root by accounting for the change at y6. But you're not really done: you also have to update the proofs, because these proofs are no longer valid; all the proofs along the path need to change.
They really need to change: for example, this proof here at position eight needs to be updated to account for the change at position six in y6, and you need to do this for all of the proofs at this level. So that would be all the proofs of this node here, and all of the proofs of that node. That actually implies K * log_K n time for the proof updates, because you have log_K n levels and at each level you're updating K proofs. It's the same asymptotically as in a K-ary Merkle tree, where you need to rehash each parent, doing order-K work to account for all of the children, across log_K n levels; but it's concretely slower in a Verkle tree, because the proof updates are much more expensive. So if you look at Verkle trees, you have a bunch of advantages: the proof size is log_2 K times smaller than in a Merkle tree; they're reasonably fast in practice; they're database-friendly; they have a public setup if you use the right vector commitment, like a Bulletproofs-based one; they're updatable; and their proofs can be aggregated. But they have some disadvantages: they're much slower than Merkle trees, of course, because internally you use not a hash function but, say, a KZG vector commitment; they're not homomorphic, unless you use these new techniques that Chris Peikert et al. proposed; they're not friendly to SNARKs or inner-product arguments either; and they even require a trusted setup with certain VCs like KZG and so on. So the things here that are worth working on are efficiency, in terms of updating the Verkle tree and serving proofs fast if you're not going to be maintaining proofs. Coming up with a subvector proof for Verkle trees is a super interesting open problem, and even better: can you aggregate Verkle subvector proofs? What I want to point out right now is something that Chris mentioned in his talk.
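As a back-of-the-envelope check on the update costs just described, here is a tiny sketch (the function name and dict layout are my own) counting the objects touched when a single leaf changes:

```python
import math

def update_work(n: int, K: int) -> dict:
    """Objects touched when one leaf changes in a K-ary Verkle tree of n
    leaves, per the counts in the talk: log_K(n) commitments on the changed
    path, and K proofs to refresh at each of those levels.  Assumes n is a
    power of K."""
    levels = round(math.log(n, K))
    return {
        "commitments_updated": levels,    # one per level on the path
        "proofs_updated": K * levels,     # K * log_K(n) proof updates
    }
```

For n = 2^20 and K = 16 this gives 5 commitment updates and 80 proof updates. The same K * log_K n count governs the rehashing work in a K-ary Merkle tree, but each Verkle proof update is concretely far more expensive than a hash, which is the point being made above.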
It's just that, because of this mapping of commitments to vector elements that you have to do via a hash function, you break any friendliness towards SNARKs or IPAs. And there was subsequent work on Verkle trees: the folks at Ethereum proposed a Bulletproofs-based Verkle tree, and Chris Peikert et al. proposed a homomorphic lattice-based Verkle tree. Okay, so to conclude the talk: what is the most fantastic tree that we hope to get, and that I encourage all of you to explore in your research? Well, obviously it has to be maintainable. It's a tree, but I put this here because you can come up with some tree-based constructions that have log-size proofs yet are actually not maintainable, because the proofs are not easy to update. So it has to be maintainable. It has to have aggregation. Homomorphism is a good property to have for stateless cryptocurrencies, and database friendliness is also a good property to have in those applications. Small proofs, concretely speaking; computational efficiency, concretely speaking; constant-size public parameters, ideally. Of course, there are other nice properties: if you can get post-quantum security, that would be great, and perhaps compressible proofs, this idea of taking a log-size proof and compressing it down to a constant-size proof. Those would be nice things to have. And in addition to all of the open problems I mentioned earlier, here are a few concrete directions for future work that I've been keeping in the back of my head but never had enough cycles to work on. First of all, can we get tree-based VCs from discrete-log assumptions? There was an earlier talk today on discrete-log assumptions and trees, so there's some progress there. Or from RSA assumptions; we have some ongoing work on the RSA line. In particular, we believe the Catalano–Fiore construction can be turned into a tree, but computationally it doesn't work out; it's too inefficient.
So feel free to talk to me if you're interested in that. Second of all, can we build authenticated multipoint evaluation trees (AMTs) from different polynomial commitment schemes, like the DARK polynomial commitments in RSA groups? That could actually allow us to do proof aggregation, namely because these polynomial commitments support a multiplicative homomorphism in addition to an additive homomorphism, and if you have a multiplicative homomorphism in the AMTs, you can aggregate, as far as I can tell, though there are challenges there too. Verkle tree aggregation would be interesting to explore using chains and cycles of elliptic curves. And finally, Verkle aggregation could also be explored using this new homomorphism proposed by Peikert et al., combined with the SNARK-friendliness of the lattice-based constructions. There's also some work on building VCs from subgroup hiding, but those are constant-size VCs. So can you do tree-based VCs from subgroup hiding? What kinds of things can you get there? You'd be using composite-order groups, so they wouldn't be so efficient, but maybe new ideas can be sparked. Lastly, there's a connection between digital signature schemes and a lot of these vector commitment schemes: Catalano–Fiore has connections with the Waters signature scheme, and KZG is connected to Boneh–Boyen short signatures. In general, it would be interesting to think about whether there's a transformation here that gives a VC from a digital signature scheme, and what that implies for tree-based VCs. Finally, I have this overview of all of the works in a table that kind of surveys the field. I'm not going to present everything in it, but I'll leave it here for you to look at. And with that, I want to conclude and take questions. Thank you so much; I hope you're all inspired to work on trees after this talk.

Thank you, Aline. I don't see any questions in the chat.
I have one quick question about the DARK scheme. One of the nice things about the DARK polynomial commitment is that we could do it even without the trusted setup, with that other assumption, like the class groups, right? How efficient is that compared to RSA, you know, for the class groups?

Yeah, there are implementations out there. I think Chia, which is a cryptocurrency company, implemented class groups. I remember asking some folks at some point, and they were not too concerned about the inefficiency; I think there was even a claim that it's roughly on the same order of efficiency, but I suspect it's a little bit less efficient. The complication with class groups is that the assumptions are usually a bit different when you instantiate these things from class groups: there are things that hold in RSA groups that don't hold in class groups, and you can run into problems. So you have to be really careful when you use class groups. But yeah, to avoid the trusted setup in the DARK polynomial commitment you probably want to use class groups, and you might lose even more efficiency relative to RSA, and you might need to be extra careful when proving the security of your scheme. That's the idea.

I have a question about your Verkle aggregation question. When you build this Verkle tree, even if there is a way to aggregate the openings along the path, it seems to me that there is an inherent problem: you still have to provide the commitments, and those might not be aggregatable, right? Do you think this is solvable?

Yeah, so I claim this is solvable. I mean, I want this to be solvable. You're right: you can aggregate these proofs, but what about the commitments? That's what Vitalik pointed out: let's cross-aggregate the proofs. But we still have to send the commitments.
Now, if you think about these new techniques by Peikert et al., they do not need to. What's the problem? The problem is that these are group elements: you have to hash them into a field element so that you can compute the commitment here at z. So this is not really y1; I'm kind of lying here, this would be the hash of y1. And that's where you lose the SNARK-friendliness or the IPA-friendliness, and that's where you kind of lose the ability to aggregate the commitments themselves efficiently. But Peikert et al. showed that if you use lattices, then you don't really need to hash: you actually get a homomorphic Verkle tree, which I claim could be much more friendly for aggregation. I don't know how, but I have a hunch. So that's the idea. And of course it's going to be lattice-based, which we don't yet know how to make efficient in practice, but hey, at least we could aggregate Verkle proofs, and we'll let other people instantiate the lattice stuff more efficiently. So that would be the first idea for Verkle aggregation; it's this item in the future work here: Verkle aggregation via the PPS21 homomorphism and SNARKs, or IPAs I should say, something like that. But the other idea would be: can you use some chain or cycle of elliptic curves to remove this need to hash the group elements to field elements? So when I compute the Verkle tree here, what if I don't need to hash these group elements, because I can maybe use their x-coordinate, which is a field element, and just use that as the input to the vector commitment on the next level, which uses a chain of elliptic curves, something like that?

It's actually something that we have explored a little bit, but we didn't see the advantages, because even if you can use the cycles, the way you encode things still breaks the homomorphic properties of, let's say, the outer commitment.
The way you encode things, you mean. So let's say you want to commit to group elements, where each group element is actually a pair of field elements that are in your message space. The problem is, I mean, you can do it, and it probably avoids an expensive hashing, but many other properties break down. For example, you lose any homomorphic property of the commitment, because committing to the points of the elliptic curve this way, I mean, adding the pairs of coordinates over the field does not correspond to the group operation, for example. Maybe from an engineering perspective, avoiding hashing from groups to fields, this could make sense. Right?

Yes. Yeah, unfortunately I'm not a master of elliptic curves, so if I knew more I could think a little bit more about it. I suspect that there are still creative things we can do here by leveraging some chaining or cycling of that sort, but that's why I'm leaving it as an open problem. Thanks.

Thank you. I see that there is a question.

Great talk, lots of really cool stuff to think about. I was curious about the techniques involving these repeated sequential divisions by polynomials, either linear or higher degree, moving to lower degree. Is there any way in which these operations commute? So, you know, divide by x1 minus one and then divide by x2 minus zero, or something: do these commute in some way? I wasn't able to work it out in my mind.

Let me bring up the slide. So what exactly is the question about commuting? What do you think should commute?

Well, I guess it's not an entirely well-posed question, but suppose here I'm dividing by x3 minus something and then by x2 minus something. And suppose I flip it, do it in the opposite order.
The question really is: is this actually a tree, or is it more like a tensor, where you can do things in any order, or permute the order in which you do these? And is there any advantage to that?

Yeah, so they do commute, in a sense. You could compute this tree by starting with x1 minus one, then doing x2 at this level and x3 at that level. You could do it that way too; it would be a different tree, but it would still have security. Does that, for example, answer your question?

Yeah, I guess what I'm trying to get at is, I mean, the order in which you do these is arbitrary at each level, but it's more that, if I think about doing two steps here, like dividing by x3 minus zero and then by x2 minus zero: okay, that's dividing by (x2 minus zero) times (x3 minus zero), right, and x2 and x3 commute. The overall quotient will be the same regardless of whether I do x2 first and then x3, or x3 first and then x2. The individual remainders maybe won't be the same, but the ultimate quotient will be the same. Right?

So actually, I believe that if you divide by x3 minus zero first to get, let's say, q3, and then you divide that remainder by x2 minus zero to get q2, then if you were to flip it, if you were to start with x2 minus zero, you'd get a q2 prime: you wouldn't get the same q2 that you got here.

For sure, but then after I divide, sorry, after I divide by x3 minus zero, I'll get the same quotient, q23 let's say. The same, if it's a quotient.

So in this order you get q3 and q2, and if you flip, I think you get q2 prime and q3 prime.

Right, but I guess what I'm trying to see is the quotient after, sorry, so you take the remainder...
Maybe I misunderstand the first step: so I get a quotient and a remainder from the first step, and then I take...

Oh, the remainder, and divide it by x2. Yes. Yes.

I see. Okay, not the quotient divided by x2. Yeah. What I was thinking is that if you divide each quotient by the next thing, then they kind of commute. Okay, right.

Yeah, so there's some intrinsic order here. Yes.

Hmm. I'm wondering, if you change the order of these things, can you more efficiently transform the tree to its, you know, updated values? Is there any use to that?

How do you mean, update the tree to its updated values?

Well, like you said, if I do x2 first and then x3, I get a q2 prime and then a q3 prime. So if I knew q2 and q3, could I then efficiently transform them to q3 prime and q2 prime, something like that? Is there some use? It gives me the feeling that there's no inherent real order here, that it's somewhat arbitrary, and so maybe this is like a tree where you can access any level cheaply or something like that, which might be useful.

Hmm, interesting. I put in the chat a reference, for everyone, to the paper that first explored this: multilinear polynomial commitments and these repeated divisions.

Can I say something?

Sure. Are you done, or, sorry?

Yeah, that's all I was interested in, thank you.

No, so I want to say that I thought about something like what was presented, in a different context. For updating proofs I'm not sure, because you've already precomputed them in some order, so I don't know how this would help. But what I was thinking is, for subset openings: if you open certain positions, then you can maybe build the tree in a specific order so that you can group some of the openings in a certain way, or something like that. Yes, yes.
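The ordering question in this exchange can be made concrete with a toy sketch. The dict-of-monomials representation and function names below are my own; the point it illustrates is the one discussed above: dividing a multilinear polynomial successively by (x_i - b_i), carrying the remainder forward each time, gives a final remainder that is order-independent, while the individual quotients do depend on the order:

```python
# A multilinear polynomial is {frozenset(of variable indices): coefficient},
# e.g. {frozenset({1, 2, 3}): 1, frozenset({2}): 2, frozenset(): 3}
# represents x1*x2*x3 + 2*x2 + 3.

def divide(f, i, b):
    """Divide f by (x_i - b): returns (q, r) with f = q*(x_i - b) + r,
    where r is f with x_i := b substituted (so r no longer mentions x_i)."""
    q, r = {}, {}
    for S, c in f.items():
        if i in S:
            T = S - {i}
            q[T] = q.get(T, 0) + c        # x_i stripped off into the quotient
            r[T] = r.get(T, 0) + c * b    # substitution x_i := b
        else:
            r[S] = r.get(S, 0) + c
    return q, r

def nonzero(f):
    """Drop zero coefficients so polynomials can be compared directly."""
    return {S: c for S, c in f.items() if c != 0}
```

For f = x1*x2*x3 + 2*x2 + 3, dividing by (x3 - 0) and then the remainder by (x2 - 0) gives a q2 different from the q2' obtained by starting with (x2 - 0), yet both orders end at the same final remainder, the constant 3 = f with x2 = x3 = 0.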
Yeah, indeed. I don't think we proved this, but I think if you just gave this quotient and this quotient, that could serve as a subvector proof for these positions. We haven't written the security reduction for that, but I think it should work; I think it works in AMTs at least. I'm not sure about Hyperproofs, but they're quite analogous. But I don't know if you could reorder these in any arbitrary way, though. For example, I don't know how you could put this leaf next to that leaf when their divisions are by different things.

Well, the way I usually think about these trees is that you have leaves and you number them in binary, so they're labeled by binary bits. And what I was thinking is something like: you want to open a set of positions, and you look at which coordinates they coincide on, and then you could somehow build the tree so that the maximum number of openings is grouped in a certain way. But if the proofs are already precomputed, then I think this doesn't help; if you had to compute them on the fly, you could do something.

Yes. We've had a lot of difficulty coming up with aggregation for these two constructions, Hyperproofs and AMTs, without black boxes like IPAs and SNARKs. My only hope is that for the AMT construction we can take a proof and turn it into a KZG proof; that would be amazing, but it doesn't seem possible. It seems to require a multiplicative homomorphism, and if you read the post that I have here, you can kind of tell how you could turn an AMT proof into a KZG proof, although I don't spell it out specifically. They're quite related algebraically; it's just that you seem to need a multiplicative homomorphism. Maybe there are other things you can do. Thanks for all the questions, everyone, by the way.