Thank you, Rosario, for the introduction and for the invitation today. I'd like to tell you about vector commitments and functional commitments from lattices; this is joint work with my students Zachary Pepin and Chad Sharp at Michigan. So the topic of the day, of course, is vector commitments, so here's a very brief reminder. The idea behind a vector commitment is that you should be able to take some large amount of data, here represented as a vector of d entries m_1 through m_d, and put it through a commitment procedure to get a much smaller commitment c. The functionality behind vector commitments is that after you've committed to this large amount of data as a small commitment, you can open individual entries of it. So when you want to open the i-th entry of the vector, you put it through an open procedure that produces a proof π_i, which is intended as a proof that the i-th entry of the data is what you say it is. You then provide this proof, along with the original commitment and the claimed entry m_i, to a verifier, who will either accept, saying yes, this all looks good, I believe the i-th entry is m_i, or declare that it is not convinced of that fact. The main security property we ask for is position binding, which says it should be infeasible for an attacker to create some commitment and then open it in two different ways at the same position. That is, it should be infeasible to come up with two different messages m_i and m_i', together with valid proofs for both of them, at the same index i for the same commitment c.
So that's the basic functionality of vector commitments. Note that this doesn't have any articulated hiding property: we don't explicitly ask that c hide any information about the message, although that can easily be added after the fact. We'll mostly be looking at the binding property today. Another functionality or efficiency feature you might look for is something called updatability, or even stateless updates. The idea here is that if the message data changes, you'd like to be able to update the commitment, and even the proofs, without rerunning the entire commitment and proof procedures from scratch; rather, you just apply some updates to the existing commitment and proofs you already have in your hands. The syntax looks something like this: we have an original commitment, some position j changes by some amount δ (it increases or decreases), and you can feed these three values into an update-commitment procedure to get an updated c'. That c' then acts as a commitment to the modified message vector, where just the j-th entry has been updated by adding δ to it. Similarly for proofs: if entry j changes by adding δ, and we have a valid proof for position i from before, then we can update that proof to some π_i' which will still verify, still showing that position i is what you say it is, but now relative to the new updated commitment c'. Okay, so these stateless updates are a nice feature to have. It means that clients don't necessarily have to keep the entire data around, but can instead update their concise local commitments and proofs.
Finally, the last concept I'll introduce is something you probably haven't seen as much today, but maybe a little bit: this idea of functional commitments. The setup here is very much the same as before, except that instead of opening an entry i, you can do something much more flexible and powerful: you can open an arbitrary function of the message data as a whole. So instead of opening a single entry of the message, you apply some function, call it f, to the entire data, and after opening the value of f you get a proof π_f. The verifier is given the original commitment, this proof, and a claim that f(m) = y, and decides whether it is convinced or not. Okay, so the vector commitment is a special case of this where the functions f are merely the selector functions: they select the i-th entry from the entire vector. You can imagine much richer classes of functions, such as inner products on the vector, or arbitrary Boolean functions on the vector, for example. So those are the three concepts we'll be looking at today: vector commitments, stateless updatability, and functional commitments. Oh yes, I should mention the security property here; it's very similar to before, just the natural thing: it should be infeasible to open the same commitment for a function f in two different ways. That is, you can't convince the verifier that f(m) = y and also that f(m) = y' for some different y'. Okay, so here are some selected highlights of prior work on vector commitments. Of course, Merkle trees are implicitly the very first example of vector commitments.
They allow you to do proofs of size logarithmic in the dimension of the vector; d throughout the talk will be the dimension, the number of entries in the data vector. But one big drawback of Merkle trees is that they don't have this stateless updatability, as far as we can see: you have to actually have the original data in order to update the commitment when something in the data changes. Now, there are examples of statelessly updatable vector commitments, and in fact they even have smaller proofs: asymptotically, the proof size does not grow with the dimension d. These are based on more algebraic assumptions like RSA and pairings. But both of these assumptions are quantum-breakable: in a quantum world, we would be able to break all of these example problems, RSA and pairings. Now, there is, as far as we know, just one prior example of a vector commitment that is post-quantum, apart from Merkle trees. It's kind of Merkle-ish, it's inspired by Merkle trees, but it actually gets stateless updatability, and it's based on the post-quantum SIS assumption, which I'll state in a couple of slides. This is work by Papamanthou et al. in 2013. So it works kind of like Merkle trees, but it manages to still have the stateless updatability property and also post-quantum security. There are many applications of these kinds of vector commitments, from outsourcing of storage to zero-knowledge sets, accumulators, credentials, and cryptocurrencies, of course. So, lots of nice applications of these things, and we'd like to have more constructions and a wider variety of efficiency and functionality choices. Now, moving to functional commitments.
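To make the Merkle-tree baseline concrete, here's a minimal sketch of a Merkle-tree vector commitment in Python. All function names are my own illustrative choices, and it assumes the vector length is a power of two; note that, as just discussed, opening an entry gives a log-d-size proof, but updating requires re-hashing along a path, so there are no stateless updates.

```python
import hashlib

def H(*parts: bytes) -> bytes:
    """SHA-256 of the concatenation of the given byte strings."""
    return hashlib.sha256(b"".join(parts)).digest()

def commit(msgs):
    """Merkle-commit to a vector of byte strings (length a power of two).
    Returns the root (the commitment) and all tree levels (kept for opening)."""
    level = [H(m) for m in msgs]
    levels = [level]
    while len(level) > 1:
        level = [H(level[k], level[k + 1]) for k in range(0, len(level), 2)]
        levels.append(level)
    return levels[-1][0], levels

def open_entry(levels, i):
    """Proof for entry i: the sibling hash at each level along the path to the root."""
    proof = []
    for level in levels[:-1]:
        proof.append(level[i ^ 1])  # sibling at this level
        i >>= 1
    return proof

def verify(root, i, m_i, proof):
    """Recompute the root from the claimed entry and the sibling path."""
    h = H(m_i)
    for sib in proof:
        h = H(sib, h) if i & 1 else H(h, sib)  # order depends on which child we are
        i >>= 1
    return h == root
```

The position-binding of this sketch rests entirely on collision resistance of the hash: two valid openings at the same index would yield a hash collision somewhere along the path.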
There is also a good deal of work here, but it is limited in the following way. The initial work on functional commitments gave functional commitments for linear functions, basically inner products of the data with some vector of your choosing, using pairing-type assumptions. Then more recent work, from about a year and a half ago, extended this to what they called sparse polynomials. But these sparse polynomials are still what we would call linearizable, meaning you take your message data, preprocess it in advance, and commit to some expanded version of the data; the functions you can then apply to that preprocessed data are still limited to linear functions. You can get low-degree polynomials by taking combinations of monomials of the original data, but this is still inherently linearizable: we're limited to linear functions in a fundamental way. Now, if we want to go beyond linearizable functions, there's only one prior construction we know of, and it uses SNARKs, so a pretty heavyweight tool: SNARKs for NP. Most importantly, SNARKs cannot be constructed with a black-box security proof based on falsifiable assumptions; this is a result of Gentry and Wichs from about a decade ago. So, in order to go beyond linearizability in your functional commitments, at this point you need to use a much heavier hammer and, let's say, more non-standard assumptions. But we'd really like to have these: there are many applications of functional commitments, such as the ones listed here, and probably others that we're not aware of or can't yet imagine. Okay, so that's roughly what the state of the art was at the time of our work. Let me now tell you about our contributions in this work, which appeared in TCC last fall. We give a new post-quantum vector commitment.
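To illustrate what "linearizable" means here, a small hypothetical sketch: if you preprocess the message into the vector of all its monomials up to some degree, then any sparse polynomial of that degree becomes a plain inner product, i.e., a linear function of the expanded vector. The helper names below are my own.

```python
from itertools import combinations_with_replacement

def linearize(m, deg):
    """Expand message vector m into all monomials of total degree <= deg.
    Returns the expanded vector and the index tuple defining each monomial."""
    idx_tuples, expanded = [], []
    for k in range(deg + 1):
        for idxs in combinations_with_replacement(range(len(m)), k):
            prod = 1
            for i in idxs:
                prod *= m[i]
            idx_tuples.append(idxs)
            expanded.append(prod)
    return expanded, idx_tuples

def poly_as_inner_product(coeffs, idx_tuples):
    """A sparse polynomial, given as {monomial index tuple: coefficient},
    becomes a coefficient vector: evaluating it is an inner product with
    the expanded message vector."""
    return [coeffs.get(t, 0) for t in idx_tuples]
```

For example, f(m) = 3·m_0·m_1 + 2·m_2² − 5 corresponds to coeffs = {(0, 1): 3, (2, 2): 2, (): −5}, and evaluating f is the inner product of that coefficient vector with the expanded message. The cost is that the committed vector grows to roughly d^deg entries, which is why this approach stays fundamentally linear.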
It's also based on the SIS assumption, and it is statelessly updatable just like the prior work, but it has significantly shorter proofs than the prior work of Papamanthou et al. In particular, we get on the order of a linear-factor improvement in the length of the proof, linear in the dimension of the vector. So our proofs are going to be polylogarithmic in the length of the data, rather than superlinear as in the prior work. Our significant second result is that we give functional commitments, also based on SIS, for arbitrary Boolean circuits of some a-priori bounded size. Say, n^10-size Boolean circuits: we can give you a functional commitment scheme that supports all such Boolean circuits. A few remarks about this result. As I was mentioning before, this is the first result that goes beyond linearizable functions while still being based on a falsifiable assumption, namely the standard short integer solution (SIS) lattice problem. So it's based on a well-studied and, in particular, falsifiable assumption, and it's the first to get beyond linear functions for functional commitments. Secondly, it's the first post-quantum functional commitment at all, beyond vector commitments of course, from a falsifiable assumption: we did not have any post-quantum functional commitments, even linear ones, prior to this work. In our work we also focus on the special case of linear functions and give a particularly more efficient instantiation for them, as compared to arbitrary Boolean circuits. The caveat, or thing to note, about this result is that it works in a new model that we introduce, in which there's an online authority.
So there's an authority that you go to when you want to prove something about a function f on your data: you go to the authority and say, hello, authority, please give me what we call an opening key for this function f. The authority will then publicly publish the opening key for f, and you use it to prove that indeed f(m) equals what you say it is. That opening key is reusable, so it's not tied to you; it can be used by anybody else who wants to prove the value of f on their own data. It can be made public and is not tied to any particular person, but you do need the authority to generate this opening key at some point before you are able to prove anything about f of your data. So I think this is an interesting new model, and it would be great to consider whether it can be relaxed or removed in some way, but this is what we require so far. Then there are a couple of secondary contributions which I won't really talk about today. First, we give a formal definition and a generic construction of what we call zero-knowledge vector commitments. In particular, this implies hiding of the data you commit to; but also, when you introduce things like updating proofs and updating commitments, those update values may somehow introduce leakage about the data, and so we consider all of that information together and give a construction and analysis that is zero-knowledge in totality. We also give a formal analysis of what seems to be a folklore transformation on vector commitments: it's kind of like a Merkle tree, but with a larger arity, so instead of a binary tree you use a larger branching factor. It's been around in the literature, and we give a formal analysis of it.
It's a pretty natural idea, but I don't think it had been carefully analyzed before, especially with respect to updates and so forth. So those are our contributions; we'll focus today on point one, and if there's time I'll give highlights of point two as well, and maybe a couple of remarks about this secondary contribution number two here. Are there any questions at this point, before we go into a little more technical detail?

[Question] I have a question about the distinction between this improved lattice construction and PSTY 13. Will you clarify later the caveats, like the trusted setup and the public parameter sizes?

Yeah, we have a table comparing a lot of different things. In fact, it's here, so let's go to that one. So, to compare in more detail: there are two tables on this slide. The first looks at constructions that, informally speaking, don't use a tree structure, but are more like base vector commitments, and that are statelessly updatable. The first three rows are prior works, not all of them, but the most notable ones for comparison purposes: based on RSA, then CDH with pairing-friendly groups, then q-type assumptions. We're looking at the size of the public parameters, the size of the commitment, the size of a proof, what type of setup it has, and whether it's post-quantum. You can see that our parameters are on the larger side: they are d², where d again is the dimension of the data you're committing to. One of the prior works also had d², but the others are order d. Our commitment size is logarithmic in d, as opposed to constant, and likewise for the proof size. The reason for the log d is not really the same as why it's log d in Merkle trees, for example; it has more to do with the accumulation of small entries, but we'll talk about that later.
We have a private setup, just like all the prior works here, but we are post-quantum, whereas all of these can be broken by quantum computers.

[Question] So, Chris, if you were to put PSTY 13 in this table, the public parameters would be of size d, and the proof size would be log d or log squared?

Yeah, here's the comparison to PSTY. PSTY also give a post-quantum, statelessly updatable vector commitment, and here we're comparing against PSTY's tree-type construction. We also give a specialized tree-type construction based on SIS, so we can compare apples to apples between the two. Here's what you get. The tree height is h, and the arity, the branching factor of the tree, is d. So if you use the same tree structure in both works, PSTY's and ours, these are the parameters you get. PSTY's public parameters are of size h²·d, whereas ours are h²·d²; we are worse by a factor of d in the size of the public parameters. The commitment sizes are the same asymptotically, but the proof sizes are much better in our case: we gain a factor of d compared to the prior work. So it's a trade-off: you have factor-of-d larger public parameters, which are fixed and set for all time, but then every proof you produce is a factor of d smaller. That seems like a worthwhile trade-off. But it does come at the expense of our setup being private. This is a private-key setup, meaning you need an authority to generate some secrets, produce the public parameters, and then go away, or destroy the secrets it knows; whereas PSTY has a public setup, meaning it's public-coin: you just need a bunch of random public coins, and that's what the public parameters consist of. And both are based on a post-quantum assumption, of course.
[Question] And here, the d in the second table is the tree arity, whereas the d in the first table is the size of the data?

Yeah, right. The top table is for non-tree-based constructions, so they're just sort of atomic, without any extra tree structure; there, d is the total dimension of your data. Down here, the total dimension of your data is d^h, because you're working with a height-h tree with branching factor d. So there are a lot of different ways to balance these things: if you have very high-dimensional data, you might use a tree with moderately large height, so that you can bring the small d down; but if you have moderately sized data, maybe you just use the first construction directly, or you use something with height two or three down here. The rule of thumb I would suggest is to use the largest d you can, subject to the parameters still fitting within your restrictions, because larger d is better throughout here: this is poly in h, h cubed, but only log squared in d, so if you push everything to a larger d, you're lowering h and actually improving the sizes of the commitment and the proof.

[Question] So this second tree construction, from the second table, is a Verkle-like construction, right?

No, the Verkle construction is a generic one; it does not give you stateless updates, so you lose stateless updates. What we have in the paper, and it's qualitatively similar to PSTY, is a specialized tree-type construction which still preserves stateless updatability.
[Question] But either way, like this construction, the proof size doesn't depend on the arity, which is basically Verkle-like. This last construction in the second table is h cubed times log squared d, so only logarithmic in d, which is nice. You could still put the Verkle-like construction on top of this, correct?

Yeah, so I think if you do the Verkle construction, it's no longer h cubed; it's like h times log d. But you lose stateless updates. So we're paying for stateless updates with larger exponents on the h and on the log d in this construction. If you do the Verkle thing, you'll get something like h·log d as your proof size, but without stateless updates. Any other questions?

[Question] Yeah, so there's a question in the chat, and then I have a question. The question in the chat is: can the authority that you have for the functional commitment be shared among many parties, like a trusted setup using MPC?

Yeah, great question. So, can the authority be shared among many parties? Yes, in the sense that if I have data and you have data and he has data and she has data, we can all use the same opening keys that the authority produces. I may be the first one to go to the authority and say, hello, I would like to open my data under function f. The authority will just say, okay, I'll create an opening key for f and publish it; I could put it on a blockchain or something. Then anybody, now or in the future, who wants to prove f on their own data can use that same opening key, without going back to the authority. But if you then want to prove a function g on your data, you'll need to go to the authority and ask for the key for function g; that would also be made public, and then the whole world can use the opening key for g.
So it's a bit of a hybrid with something like identity-based encryption, where you also have to go to an authority, but there the key it gives you is specific to you and has to be kept secret; here the opening keys are public and can be used by anybody.

[Question] Right. And the follow-up question was: could this process be implemented as an MPC?

Sure, yes; any trusted authority could be distributed as an MPC.

[Question] So, I have a question; my internet went off for a minute, so you might have addressed it while I was away. Any functional commitment would give you a SNARK, right?

It's a good question. I wonder about it; I haven't seen an example written down. My intuition says yes; I have not written it down and formalized it enough to fully convince myself, but there may be something to this.

[Question] So is the model of the authority what would allow you to bypass the Gentry-Wichs result?

Yeah, my intuition is that there's something to that, but I haven't been able to write it down carefully. As you say, if indeed a functional commitment really gives you a SNARK, then this relaxed authority model would be exactly the thing needed to get around Gentry-Wichs.

[Question] Isn't this relaxed authority model somehow related to the per-function trusted setup that we already have in some SNARKs, like Groth16 or the QAP-based ones?

I'm not sure; I don't know the details of those well enough to say for certain. But those are per-function, right? Here we have one authority who will give you as many opening keys as you want, and they all work with respect to the same public parameters.

[Question] Okay, yeah, so that could be an issue; they're not exactly the same. Okay, great.

Okay, so we'll dive a little bit into the technical details, and I first want to define the assumption we use throughout the work.
It's called short integer solution (SIS). It has a few parameters; they aren't too important, but I want to write them down for completeness: two dimensions, n and m, and a modulus q. The problem can be stated quite simply. I give you a uniformly random matrix A; the color coding is important here, everything in blue is uniformly random. So A is a uniformly random matrix, n rows by m columns, with entries uniformly random mod q. The goal is to find a nonzero short vector in the kernel of A. Also in the color coding, everything in red means short. Short is a qualitative notion, but think of the Euclidean norm of x being much less than q; think of binary entries, or entries between plus and minus 10, something like that. So you're looking for a vector x in the kernel of A with small entries, such that Ax = 0 mod q. Okay, so this is a very standard and well-studied lattice assumption by this point. The reason it's so nice to work with, apart from its syntactic simplicity, is that it also has a great worst-case to average-case reduction, which was shown originally in '96, and many follow-up works have improved it. The nature of this reduction is the following: if you can solve SIS, that is, if you can find a nonzero short vector x, whose norm is much smaller than q, for a uniformly random matrix A, then you can convert that solver into an algorithm that solves short-vector problems on any n-dimensional lattice, in the worst case. So up here we're talking about a cryptographic problem where A is chosen uniformly at random, but breaking this problem implies being able to solve hard lattice problems on any lattice of dimension n; there's no randomness down there, it's a worst-case type problem.
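As a concrete sketch of the problem statement, here's a small checker for SIS solutions, plus a toy "planted" instance generator for testing. The planted instance is illustrative only: a real SIS instance is a uniformly random A with no planted solution, and finding a short kernel vector there is believed hard.

```python
import random

def is_sis_solution(A, x, q, bound):
    """Check that x is a valid SIS solution for A:
    nonzero, short (Euclidean norm <= bound), and A.x = 0 (mod q)."""
    n, m = len(A), len(A[0])
    if all(v == 0 for v in x):
        return False
    if sum(v * v for v in x) > bound * bound:
        return False
    return all(sum(A[r][c] * x[c] for c in range(m)) % q == 0 for r in range(n))

def planted_instance(n, m, q, seed=0):
    """Toy instance with a planted short solution, for testing only:
    pick short x with last entry 1, then choose A's last column to cancel
    the rest, so A.x = 0 (mod q) by construction."""
    rng = random.Random(seed)
    x = [rng.choice([-1, 0, 1]) for _ in range(m - 1)] + [1]
    A = [[rng.randrange(q) for _ in range(m - 1)] for _ in range(n)]
    for row in A:
        row.append((-sum(row[c] * x[c] for c in range(m - 1))) % q)
    return A, x
```

The checker mirrors the slide exactly: nonzero, short (red), and in the kernel of the random (blue) matrix mod q.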
So this gives us good evidence, and a good feeling, that SIS is indeed a hard average-case problem; and that's also backed up by cryptanalysis and study.

[Question] Just a question about the second problem, the worst-case one: what do we know about its complexity, like the complexity class of the short-vector problem?

Yeah, right. So these short-vector problems are unlikely to be NP-hard; they are in NP intersect coNP, because of the approximation factors they carry. But they're very well-studied problems with polynomial approximation factors, whereas the best polynomial-time attacks we have only achieve exponential, or slightly subexponential, factors. So we're very far off: the best algorithms we have for these problems are essentially exponential time in the dimension of the lattice. Good. So this is the problem we'll be using, and again, it's just: given a random matrix A, find a short nonzero vector in its kernel. So now we can describe the vector commitment scheme, and we'll start with the setup; this is what the authority does to generate the public parameters. Really, this is where all the magic happens; the rest of the scheme is quite easy and simple once you've got this setup. It works as follows. Again, d is the dimension of the data. For every i from 1 to d (that's what this bracket [d] notation means, 1 through d), we sample a random matrix A_i together with what's known as a trapdoor. I won't describe how that works, but this is a commodity-type algorithm by now, given the works of the prior 15 years or so; generating A_i with a trapdoor is kind of analogous to choosing an RSA modulus together with its factorization. So we do that.
We sample all these matrices A_i with their trapdoors, and then also, for j from 1 to d, we sample a uniformly random vector u_j. Then the interesting part happens. For all distinct i and j, so i ≠ j, we use the trapdoor for A_i to do what's called preimage sampling: we preimage-sample a short vector r_{i,j} so that A_i · r_{i,j} = u_j. So we have roughly d² short vectors r_{i,j}, and that's where the quadratic, d²-size public parameters come from. Okay, so we use the trapdoors to sample these preimages, these vectors r_{i,j}; note that they're short, so they're in red. We output all of these quantities as the public parameters. If you don't like so many equations, here's a nice pictorial way to view what we've done: it's one big matrix system. Imagine that on the left we put the A_i matrices down the diagonal. On the right, we fill the first column with copies of u_1, the second column with copies of u_2, up to the d-th column with copies of u_d, except that we punch a hole in the diagonal: we put zeros down the diagonal. In the middle we have a big matrix R, which has all these d·(d−1) vectors r_{i,j} in it: r_{1,2}, r_{1,3}, up to r_{1,d}, and so forth, filling in the upper and lower triangles of the matrix, again with zeros down the diagonal. You can see that this works out: if you take the top block row times the first column, you end up with zero, because A_1 matches up with the zero block, and the zero blocks all match up with the r's, so you get the zero on the diagonal. But if you take the first block row times the second column, you get A_1 times r_{1,2}, and then a bunch of zeros; so A_1 · r_{1,2} = u_2, and so forth.
If you look at the second block row times the first column, you get A_2 times r_{2,1} and everything else vanishes; so A_2 · r_{2,1} = u_1, and so forth. Okay, so that's the picture of how the public parameters are set up, and that's the setup. Now let's talk about how the commitment works. We have these public parameters, and we want to commit to a message. Let's say the message is just binary, a d-dimensional binary vector; that's not really necessary, but it's the simplest way to describe things. All we do is take the message and use these zeros and ones to do a subset sum over the u_j vectors: we just sum up u_j · m_j. If you view the u vectors as columns of a big matrix U, then the commitment is just U times m. So it's very simple: just a matrix times the data vector. Now, if we want to open the i-th entry of our data, we do something kind of interesting. We output a proof π_i, which is just the i-th block row of our big R matrix times our message vector m. If you remember, the i-th block row looks something like this: it has all these r_{i,j} vectors in it, but punctured on the diagonal, and we hit that with m. Altogether, this is just the sum of r_{i,j} · m_j over all j ≠ i; the "not equal" comes from the fact that the zero on the diagonal knocks out m_i. Okay, so that's how we open. And observe that because the R matrix is made of short vectors, and m is a short vector, this combination is also relatively short: the proof is a short vector. Now let's see how to verify. Basically, verification just plugs in the missing m_i piece, but it does so in the range rather than the domain: it adds u_i · m_i.
The rule is: we accept proof π_i if it's a sufficiently short vector, and if A_i · π_i + u_i · m_i = c, that is, if this equals the original commitment to the data. So let's see why this works; let's just expand out the left-hand side. A_i · π_i is just A_i · R_i · m, by definition up here, and then we add u_i · m_i. Expanding out the definition of R_i · m gives a summation, and we can bring the A_i inside it, so we have the sum over j ≠ i of A_i · r_{i,j} · m_j, plus this additional additive term. Then we just note that, by definition, A_i · r_{i,j} = u_j. So altogether we have the sum over all j ≠ i of u_j · m_j, plus the missing piece u_i · m_i, and by definition that is the commitment. Okay, so this is just why proofs work: why, when you open a commitment properly, the proof actually verifies. Any questions about this? It's just a lot of wrangling of equations.

[Question] There are some similarities here to Catalano-Fiore, in some sense.

For sure, yeah, lots of them. Analogously, it's sort of a hybrid between the RSA construction and the pairing construction; it has elements of each of them, and is not a direct analogue of either one individually.

[Question] That's cool. I think when you do a tree-based construction with this, it might be interesting to think of aggregation, if it has parallels to Catalano-Fiore.

We've had some thoughts there, yeah. There are some easy things to aggregate and some hard things to aggregate in this construction, but the easy things are not the interesting ones, I think. I can say more about that at the end.
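The commit/open/verify equations above can be checked end to end in a toy sketch. One loud caveat: the real setup uses lattice trapdoor preimage sampling to find the short r_{i,j}, which I can't reproduce here; so in this d = 2 toy I cheat and sample the short vectors first, then derive the u's from them. That reversal says nothing about security; it only lets us verify the correctness equations, and the shortness check is omitted.

```python
import random

rng = random.Random(1)
n, m_dim, q = 4, 8, 9973

def rand_matrix(rows, cols):
    return [[rng.randrange(q) for _ in range(cols)] for _ in range(rows)]

def matvec(M, v):
    return [sum(M[r][c] * v[c] for c in range(len(v))) % q for r in range(len(M))]

def vec_add(a, b):
    return [(x + y) % q for x, y in zip(a, b)]

def scal(s, v):
    return [(s * x) % q for x in v]

# --- toy setup for d = 2: sample short r's first, then derive the u's ---
# (the real scheme samples u_j uniformly and uses A_i's trapdoor to find r_{i,j})
A = [rand_matrix(n, m_dim) for _ in range(2)]
r12 = [rng.randrange(1, 3) for _ in range(m_dim)]  # "short": entries in {1, 2}
r21 = [rng.randrange(1, 3) for _ in range(m_dim)]
u = [matvec(A[1], r21), matvec(A[0], r12)]         # u_1 = A_2 r_21, u_2 = A_1 r_12

def commit(msg):
    """c = sum_j u_j * m_j  (i.e., U times m)."""
    c = [0] * n
    for j, bit in enumerate(msg):
        c = vec_add(c, scal(bit, u[j]))
    return c

def open_entry(msg, i):
    """pi_i = sum over j != i of r_{i,j} * m_j  (the i-th block row of R times m)."""
    r_row = {0: {1: r12}, 1: {0: r21}}[i]
    pi = [0] * m_dim
    for j, bit in enumerate(msg):
        if j != i:
            pi = vec_add(pi, scal(bit, r_row[j]))
    return pi

def verify(c, i, m_i, pi):
    """Accept if A_i * pi + u_i * m_i == c (shortness check omitted in this toy)."""
    return vec_add(matvec(A[i], pi), scal(m_i, u[i])) == c
```

Tracing through verify is exactly the derivation above: A_i · π_i expands to the sum of u_j · m_j over j ≠ i, and adding u_i · m_i reconstitutes the commitment.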
Let me just describe the updates real quickly. This scheme is statelessly updatable, and the key behind that is the fact that the commitment function is really just a linear function: you take the public parameter U and multiply it by your message. The opening function is also linear: you just take the i-th block of rows from the public matrix R and hit it with your message. So the updates are really easy. If you want to change entry j by delta, you just add u_j times delta to your commitment, and it works by linearity. And you can do the analogous thing with the proof. Why this is correct is a simple exercise for the viewer, but the hint is what I've just said: everything is linear. In fact, if you do a commitment and then an update, you get exactly the same output as if you had done a commitment on the modified message to begin with. This is pretty simple.

I'll just say a few words about security; I think there's a neat security argument here. We have a theorem which says that breaking the position binding of this construction is at least as hard as solving SIS. Here's the flavor of why that's true. Suppose we have some adversary that can break position binding. What this means is that it outputs some commitment C*, maliciously constructed (it's up to the adversary to produce it), some index i, two different entries m_i and m_i', and valid proofs for both of them. If you just write down the verification equation, it says that since both of these proofs verify, C* = A_i p_i + u_i m_i (that's the verification condition), and also C* = A_i p_i' + u_i m_i'.
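The stateless updates can be sketched in a few lines of numpy, again in a toy arity-2 instantiation (short preimages sampled first and u_j defined from them, a shortcut that only works at d = 2 without trapdoors; n, m, q are arbitrary toy values):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, q = 8, 32, 7681  # toy parameters; arity d = 2 avoids trapdoor sampling

A1 = rng.integers(0, q, size=(n, m))
A2 = rng.integers(0, q, size=(n, m))
R12 = rng.integers(-1, 2, size=m)      # short vectors, entries in {-1, 0, 1}
R21 = rng.integers(-1, 2, size=m)
u2, u1 = A1 @ R12 % q, A2 @ R21 % q    # A_1 R_12 = u_2, A_2 R_21 = u_1

msg = np.array([1, 0])
C  = (u1 * msg[0] + u2 * msg[1]) % q   # commitment: linear in msg
p1 = R12 * msg[1]                      # proof for entry 1

# Stateless update: entry j = 2 changes by delta. By linearity we just add
# u_2 * delta to the commitment and R_12 * delta to the proof for entry 1,
# without recomputing anything from scratch.
delta = 1
C_new   = (C + u2 * delta) % q
p1_new  = p1 + R12 * delta
msg_new = msg + np.array([0, delta])

# The updated proof verifies against the updated commitment ...
assert np.array_equal((A1 @ p1_new + u1 * msg_new[0]) % q, C_new)
# ... and the updated commitment equals a from-scratch commitment.
assert np.array_equal(C_new, (u1 * msg_new[0] + u2 * msg_new[1]) % q)
```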
If you gather these things together, what it tells you is that the matrix A_i with an extra u_i column tacked on, times the vector consisting of the difference of the proofs with the difference of the message entries tacked on as the last entry, equals zero. So it seems like this solves SIS, right? We've got this uniformly random matrix; it's broken up into two pieces, but the whole thing is uniformly random. This vector is nonzero, because m_i is different from m_i', so the last entry, and hence the whole vector, is nonzero. And it's short: it's short minus short throughout, so everything is relatively short. So it seems like this might solve SIS. Are we done? Somebody say no. No, we're not done, and in fact this argument doesn't really get us there.

The problem is that we had to generate the public parameters, and when we generated them, we had to use a trapdoor for every one of these A_i matrices. But SIS is not a hard problem if you know a trapdoor for A_i. In other words, this argument does produce a valid solution to SIS, but only if this [A_i | u_i] matrix were given to us externally as an SIS challenge and the rest of the public parameters were still generated properly. So the challenge we have here is to be given A_i and u_i as a challenge instance, then generate all the public parameters around them, and then invoke our adversary who breaks position binding. If we can make that work, then we'll have a real, proper security proof.

[Audience] So, Chris, is the problem the fact that along with the A_i's you also revealed these R_ij's and u_j's that were computed with a trapdoor?

[Chris] Yeah, you could see that as a different view on the issue.
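In symbols, subtracting the two verification equations gives:

```latex
A_i p_i + u_i m_i \;=\; C^\ast \;=\; A_i p_i' + u_i m_i'
\quad\Longrightarrow\quad
\bigl[\, A_i \mid u_i \,\bigr]
\begin{pmatrix} p_i - p_i' \\ m_i - m_i' \end{pmatrix}
\equiv 0 \pmod q .
```

The solution vector is short (a difference of short vectors) and nonzero (its last entry m_i - m_i' is nonzero), so it is exactly an SIS solution for the matrix [A_i | u_i], modulo the trapdoor issue discussed above.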
You need to be able to generate all these public parameters, all these R_ij's, which have these interesting relationships among them, and do that without knowing a trapdoor for A_i. That's the issue.

[Audience] Got it, thanks.

[Chris] How am I on time?

[Host] You have another ten minutes.

[Chris] Okay, great. So I'll go over the structure of the proof argument and then wrap up with a few final thoughts. Our goal, again, is to be given this A_i, u_i as a challenge, generate public parameters around it that look good, invoke the adversary, and then use what the adversary gives us to produce a legitimate SIS solution. So here's how it works. Suppose the reduction is given a real SIS instance, a uniformly random A*, u*; our goal is to generate good-looking parameters. I'll remind you of the structure of the public parameters: down here on the left we have the block-diagonal matrix with A_1, A_2, through A_d. We're supposed to have u_1 throughout the entire first column except for the zero on the diagonal, u_2 throughout the second column, and so on through u_d in the last column, again except for the diagonal. And in the off-diagonal entries we want these short R_ij's. Here's how we're going to do it: we'll fill in this diagram piece by piece. The first thing we do is guess the value i*, and plug in A* as A_{i*}. This is a guess about where we think the adversary will break position binding, so we just choose it uniformly at random. We also plug in u* at the same position; in the slide's example, i* is 2. And we're on our way.
The second thing we do: for all j not equal to i*, we sample short R_{i*,j}'s, which fills in this second block row. Once we fill in that row, look at the effect it has: it defines u_1, u_3, u_4, up to u_d, that is, all the u_j for j not equal to i*. You take this column times this row and you end up with u_1 defined; you take the last column times this row and you end up defining u_d. And remember, every column just has copies of the same u_j in it, so you fill in all those entries too. So now we've filled in the entire right-hand side, one entry here, and one block row here.

We do the rest in essentially the same way as the legitimate setup: we generate the remaining A_i's with knowledge of trapdoors, just as in the real setup, so A_1, A_3, up to A_d are generated with trapdoors. Now the rest is just filling in the blanks in the R matrix. The right-hand side is fully specified, so, for example, to fill in this bottom-left entry, you know that A_d times R_{d,1} should equal u_1, and we sample such a short R_{d,1} using A_d's trapdoor. We do that for all of the remaining R_ij's, the first block row, the third, the fourth, and so forth, exactly as in the real setup. So now we've filled in the entire system. The theorem you can prove is that these public parameters, all together, jointly, are statistically close to what you get in the real setup. This uses the preimage-sampling technology and theorems, but basically it says that the entire set of public parameters jointly looks as if it were created in the real setup, and in particular the value i* that we guessed is hidden.
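Schematically, the simulated setup can be summarized as follows, where TrapGen and SamplePre stand for generic trapdoor-generation and preimage-sampling procedures (placeholder names, not necessarily the paper's notation):

```latex
\begin{align*}
&i^\ast \gets \{1,\dots,d\} \text{ uniformly};\qquad
  A_{i^\ast} := A^\ast,\quad u_{i^\ast} := u^\ast ;\\
&\text{for each } j \neq i^\ast:\quad
  R_{i^\ast j} \gets \text{short},\qquad u_j := A^\ast R_{i^\ast j} ;\\
&\text{for each } i \neq i^\ast:\quad
  (A_i, T_i) \gets \mathrm{TrapGen},\qquad
  R_{ij} \gets \mathrm{SamplePre}(T_i, u_j) \ \text{ for all } j \neq i .
\end{align*}
```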
So we have about a 1/d chance that when the adversary breaks position binding, it actually does so at position i*, and by the equations we looked at on the previous slide, that gives us an SIS solution for our challenge instance. That's the high-level view, skipping over a lot of grungy technical details, but it's the general picture. Any questions about this? Cool.

Given the time, I'll just briefly talk about the ideas behind extending this to trees, and then wrap up with a couple of open questions. We all know and love Merkle trees; this is the picture with arity, i.e., branching factor, two, and you can use an arbitrary collision-resistant hash function to get a nice vector commitment from it. A tree of height h and arity d gives a vector commitment for dimension D = d^h, and usually we set d = 2. This gives us constant-size public parameters and commitments, independent of the dimension, but they're not statelessly updatable. The proof size is something like h times (d - 1): when you open an entry, there's a path from the leaf to the root, and you have to give all the siblings of every node on the path. There are d - 1 siblings at every level and h levels, so that's how many pieces of data you have to provide in the proof, and it's more than we'd like.

So here's a very natural idea, if you know about Merkle trees and about vector commitments: instead of a hash function, use a vector commitment at each level, or rather at each internal node, to commit to all of its children. This has gone by the name of Verkle trees, because it's vector plus Merkle, basically. That's a generic tree transform, and it allows you to do this.
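To make the h(d-1) sibling count concrete, here is a minimal arity-d Merkle tree sketch using SHA-256. This is a generic illustration of the proof structure, not anything specific to the paper:

```python
import hashlib

def H(*parts: bytes) -> bytes:
    """Hash a concatenation of byte strings with SHA-256."""
    return hashlib.sha256(b"".join(parts)).digest()

def merkle_root(leaves, d):
    """Root of an arity-d Merkle tree over d**h hashed leaves."""
    level = leaves
    while len(level) > 1:
        level = [H(*level[k:k + d]) for k in range(0, len(level), d)]
    return level[0]

def merkle_open(leaves, d, idx):
    """Proof for leaf idx: the d-1 siblings at each of the h levels."""
    proof, level = [], leaves
    while len(level) > 1:
        group = idx // d * d
        proof.append([level[k] for k in range(group, group + d) if k != idx])
        level = [H(*level[k:k + d]) for k in range(0, len(level), d)]
        idx //= d
    return proof

def merkle_verify(root, d, idx, leaf, proof):
    """Recompute the path to the root from the leaf and its siblings."""
    node = leaf
    for sibs in proof:
        pos = idx % d
        node = H(*sibs[:pos], node, *sibs[pos:])
        idx //= d
    return node == root

d, h = 4, 3
leaves = [H(bytes([i])) for i in range(d ** h)]
root = merkle_root(leaves, d)
proof = merkle_open(leaves, d, idx=10)
assert merkle_verify(root, d, 10, leaves[10], proof)
assert sum(len(s) for s in proof) == h * (d - 1)  # sibling count per opening
```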
It works using any tree of height h and arity d; you just need a vector commitment scheme for arity d, that is, for dimension little d. This gives you identical parameters and identical commitments to the original base vector commitment. But now there's no sibling info: the whole point of a vector commitment is that there's a concise proof of any entry, any child, so you don't have to give the sibling information to prove it. You now only need h proofs and commitments, instead of h times (d - 1) pieces, to prove an entry at a leaf. In particular, you can use a larger branching factor d, which actually reduces h further, so you can use a shallower tree with a larger branching factor and have even smaller proofs.

But this generic transform does not preserve stateless updatability. What we give in the paper is a more specialized transform in the spirit of this idea; the key point is that it maintains the linearity of the commitment and the opening. When you do the generic scheme, you take all the children and put them into a vector commitment, take all those parents and put them into another vector commitment, then all the grandparents into yet another, and so forth, and in general that breaks linearity from one layer to the next. What we show is how to not break that linearity, so the overall commitment procedure is still linear in the leaf data, and the opening proofs are linear as well. This allows stateless updates while keeping efficiency properties similar to those of the Verkle tree. This is again the same table that I flashed up before; the savings is roughly a factor of little d in the proof size. I won't go into the functional commitments other than to put up this picture.
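A quick back-of-the-envelope comparison of the two proof shapes, counting pieces of data per opening rather than bits (the units differ: hashes for the Merkle tree versus vector-commitment proofs for the Verkle-style tree, and D = 2^30 is an arbitrary example dimension):

```python
# Merkle-style: h * (d - 1) sibling values per opening.
# Verkle-style: h vector-commitment proofs per opening, one per level.
def height(D, d):
    """Smallest h with d**h >= D."""
    h, span = 0, 1
    while span < D:
        span *= d
        h += 1
    return h

D = 2 ** 30  # arbitrary example dimension
for d in (2, 16, 256, 1024):
    h = height(D, d)
    print(f"d={d:<5} h={h:<3} Merkle pieces: {h * (d - 1):<6} Verkle pieces: {h}")
```

Raising the arity d makes the Merkle proof grow while the Verkle-style proof shrinks, which is the point made above about shallower trees with larger branching factors.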
This picture shows the authority model: when you want to open, you can't just directly provide the function you want to open at. Instead, you provide the function to the authority, who extracts an opening key; that opening key is what you provide to the open algorithm, and everything else is the same. So that's the model. I'll skip over how we construct these things, but it's a kind of a different twist on the usage of fully homomorphic commitments and fully homomorphic encryption. Skipping over that, let me leave you with some open problems.

First, probably the most notable thing about our vector commitment is that our constructions require a private-coin setup: the authority has to run a procedure with private coins that generates trapdoors, and that data has to be destroyed, or the setup conducted with an MPC. So, are there post-quantum vector commitments or functional commitments that have a public setup, with the same or better features and efficiency? That's a nice question.

Second, a question we've already mentioned: our functional commitment has this online authority that generates opening keys. Can we get functional commitments for nonlinear functions with just an offline authority, who sets things up and then disappears? As Rosario mentioned, there might be some inherent barriers here: maybe functional commitments imply SNARKs for suitably rich function classes, and so maybe there's a good, inherent reason why the authority can't go offline.

And third, for vector commitments: we now have post-quantum vector commitments, and some not-very-interesting ways of combining them, but we don't have things like subvector commitments, where you can open several positions at roughly the size or cost of a single proof.
We don't have aggregatable post-quantum vector commitments. There are lots of other nice properties that people have shown for vector commitments in, for example, the pairing-based world, and it would be really interesting to see those kinds of things for post-quantum vector commitments as well. Okay, that's all I have; thanks for letting me go over a couple of minutes.

[Host] Thank you, Chris. There may be one question.

[Audience] Where does the h-cubed factor in the openings come from?

[Chris] Oh yeah, that's a great question. The proof size was something like h^3 * d * log^2 d, right. It has to do with the growth of the norms: the proof is some linear function of the message, and that function expands the message to a larger norm, so the message vector gets blown up, and each stage of the tree incurs a multiplicative factor in how much it gets blown up. We wanted to be really honest about this and count actual bit sizes: the number of bits you need to write down the integers in these proof vectors grows, and because those norms grow, the modulus q has to grow too, so you get some nasty dependencies there. It comes down to the bit sizes of the numbers involved throughout the system.

[Audience] Maybe this is related: what's the dependence on the security parameter? Linear?

[Chris] Let's see. What you have to do is take the sizes of the proofs, their Euclidean norms basically, and then set parameters so that the SIS problem is hard for those norms. So concretely the asymptotics are not terrible, but they're not particularly attractive either.
There are relatively large SIS parameters for some of these fancier features.