Hello, my name is Chad Sharp, and I'll be presenting "Vector and Functional Commitments from Lattices," by Chris Peikert, Zachary Pepin, and myself. Vector commitment schemes are similar to commitment schemes you may already be familiar with, except that instead of committing to a single value, we commit to a vector of values and produce a concise commitment C. Concise means that the commitment is significantly smaller than the vector itself, perhaps logarithmic or constant in size with respect to the vector. Then, when it comes time to open the vector, we open it at a position i and produce a proof π_i. Given the commitment C, the proof π_i, and a message entry m_i, Verify returns 1 if m_i is the i-th entry of m. The security property of vector commitments is called position binding, which states that it should be infeasible to open a commitment at a position i to two different message entries m_i ≠ m_i'. An additional property one might want from vector commitment schemes is what we call stateless updates: algorithms that allow you to update commitments and proofs to reflect changes in the underlying vector. Our algorithm for updating commitments takes in the old commitment C, an index j, and the delta between the old message entry m_j and the new message entry m_j'. It outputs a new commitment C' to a new vector m', which equals the old vector with its j-th entry replaced by m_j'. Analogously, we have a similar algorithm for proofs, which takes an old proof π_i along with the same j and delta, and outputs a new proof π_i' that works for the new vector m' and reflects the underlying change. Functional commitments can be viewed as a generalization of vector commitments where, rather than Open taking an index, it takes a function f from some particular function family.
Then, when it comes time to verify, Verify accepts if f(m) = y. I say this is a generalization of vector commitments because you can view a vector commitment scheme as a functional commitment scheme where the function family consists of just the projections (the d projections, if the vector has length d, for example). The security property for functional commitments is called function binding, which states that it should be infeasible to open a commitment at a function f to two different outputs y ≠ y'. As for prior work on vector commitments, one construction you may be familiar with that seems to fit the bill is the Merkle tree. Merkle trees provide logarithmically sized commitments and proofs for a vector of values, but they are not statelessly updatable: to update the commitment, you essentially have to regenerate it. Statelessly updatable VCs came onto the scene with LY10 and CF13, with constructions based on RSA, pairings, etc. These had constant-size proofs and constant-size commitments. Additionally, there are Merkle-like statelessly updatable vector commitment schemes based on the short integer solution (SIS) problem. I say Merkle-like because these are tree constructions, but ones that maintain stateless updatability, which plain Merkle trees do not. And there are many more schemes with many more applications, including verifiable outsourcing of storage, zero-knowledge sets, cryptographic accumulators, pseudonymous credentials, and cryptocurrencies. As for functional commitment schemes, there are numerous. To highlight a few: a functional commitment scheme for linear functions was proposed in LRY16, based on pairings. Another functional commitment scheme was proposed recently in LP20, for a class of functions the authors call sparse polynomials.
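To make the Merkle-tree baseline concrete, here is a minimal sketch in Python (stdlib hashlib only). It is an illustrative toy, not the construction from any particular paper: it assumes a power-of-two number of leaves and omits padding and domain separation. Commit builds the hash tree, open_at returns the sibling path (logarithmic in the vector length), and verify recomputes the root. Note the lack of stateless updates: changing one leaf requires the full tree to recompute the root.

```python
# Minimal Merkle-tree vector commitment: log-size proofs, but no
# stateless updates (updating requires recomputing from the full tree).
import hashlib

def H(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def commit(msgs):
    # msgs: list of bytes; length assumed to be a power of two (toy simplification)
    level = [H(m) for m in msgs]
    tree = [level]
    while len(level) > 1:
        level = [H(level[k] + level[k + 1]) for k in range(0, len(level), 2)]
        tree.append(level)
    return tree[-1][0], tree          # the root is the commitment

def open_at(tree, i):
    # proof: sibling hashes along the path from leaf i up to the root
    proof, idx = [], i
    for level in tree[:-1]:
        proof.append(level[idx ^ 1])  # sibling at this level
        idx //= 2
    return proof

def verify(root, i, msg, proof):
    h, idx = H(msg), i
    for sib in proof:
        h = H(sib + h) if idx & 1 else H(h + sib)
        idx //= 2
    return h == root

msgs = [b"a", b"b", b"c", b"d"]
root, tree = commit(msgs)
proof = open_at(tree, 2)
assert verify(root, 2, b"c", proof)       # correct entry accepted
assert not verify(root, 2, b"x", proof)   # wrong entry rejected
```

The proof for a vector of length 2^h contains h hashes, which is the logarithmic proof size mentioned above.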
But one common theme among all of the functional commitment schemes created thus far, at least those based on falsifiable assumptions, is that they only work for classes of functions that are linearizable, meaning the functions in the class are linear with respect to some fixed preprocessing of the message m. We can go further than linearizable functions using SNARKs for NP, but these cannot be constructed from falsifiable assumptions. So as far as falsifiable assumptions go, thus far we can only get linearizable functions. There are numerous applications of functional commitment schemes, including verifiable secret sharing, content-extraction signatures, and zero-knowledge SNARKs. In this paper, we provide two main contributions, along with two secondary contributions. The two main contributions are as follows. First, we provide a new post-quantum scheme based on SIS that is statelessly updatable and has significantly shorter proofs than the only other such scheme, PSTY. To compare the two: in this talk I'll present our base construction, but we also apply a tree transformation to it to make it more suitable for vectors of large arity. Here we compare it against PSTY with respect to a vector of arity d^h, and we note that we lose a factor of d in proof size compared to PSTY. As it turns out, if you optimize d and h, d should be some small polynomial, so we're losing a small polynomial factor in proof size at the cost of our public parameters growing by a small polynomial factor. This seems like a worthwhile trade-off, since one assumes there would be many proofs for any one set of public parameters. Second, we provide a new SIS-based functional commitment scheme for arbitrary bounded Boolean circuits. This goes far beyond linearizable functions.
This is the first functional commitment scheme based on a falsifiable assumption to go beyond linearizable functions, and the first post-quantum functional commitment scheme from a falsifiable assumption. Our new FC scheme works in a new model, in which the authority that generated the public parameters remains online and generates reusable opening keys for any desired function. We also provide two secondary contributions, which I won't be talking about in this presentation: a formal definition and construction of zero-knowledge vector commitments, and a formal analysis of a long-known, folklore, Merkle-like tree construction that adapts vector commitment schemes to work for vectors of very large arity. Both of our schemes are based on the short integer solution (SIS) problem, so here is a brief reminder of what that is. The SIS problem says: given a uniformly random matrix A, the goal is to find a nonzero vector x such that A·x = 0 mod q and x is short, i.e., it has norm at most β. Our SIS-based vector commitment scheme is as follows. To generate the public parameters for vectors of arity d and security parameter n, we generate a uniformly random matrix U with columns u_0 through u_{d-1}. Then we use a technique called trapdoor preimage sampling: we first generate d matrices A_i with trapdoors, and then, for all distinct i, j, we generate short Gaussian vectors R_ij such that A_i · R_ij = u_j. Our public parameters are all of these A_i's and R_ij's, along with U. To view the relationship between these public parameters in a different way, here is a matrix equation which shows the relationship between them. To commit to a d-bit message vector m, we simply multiply the message vector m by U to produce the commitment C.
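The algebra of this scheme (commitment C = U·m, openings π_i built from the R_ij's, verification as C = A_i·π_i + m_i·u_i, and linear updates) can be checked numerically. The sketch below is a deliberately insecure toy with made-up tiny parameters: instead of sampling short Gaussian preimages with lattice trapdoors, it picks each A_i invertible mod q and sets R_ij = A_i^{-1}·u_j, which satisfies A_i·R_ij = u_j mod q but completely ignores the shortness condition on which security rests.

```python
# Toy check of the algebra in the SIS-based VC. NOT secure: the real scheme
# samples SHORT preimages R_ij via trapdoors; here we use modular inverses,
# so the R_ij are not short and the shortness check in Verify is skipped.
import random

q, n, d = 97, 3, 4           # toy modulus (prime), dimension, and vector arity
rng = random.Random(0)

def mat_vec(A, x):
    return [sum(A[r][c] * x[c] for c in range(len(x))) % q for r in range(len(A))]

def try_inv(A):
    # Gauss-Jordan inversion mod the prime q; returns None if A is singular
    k = len(A)
    M = [row[:] + [int(i == j) for j in range(k)] for i, row in enumerate(A)]
    for col in range(k):
        piv = next((r for r in range(col, k) if M[r][col] % q), None)
        if piv is None:
            return None
        M[col], M[piv] = M[piv], M[col]
        inv = pow(M[col][col], -1, q)           # modular inverse (Python 3.8+)
        M[col] = [v * inv % q for v in M[col]]
        for r in range(k):
            if r != col:
                f = M[r][col]
                M[r] = [(M[r][c] - f * M[col][c]) % q for c in range(2 * k)]
    return [row[k:] for row in M]

U = [[rng.randrange(q) for _ in range(d)] for _ in range(n)]  # columns u_0..u_{d-1}
u = lambda j: [U[r][j] for r in range(n)]

A, R = {}, {}
for i in range(d):
    Ai, Ainv = None, None
    while Ainv is None:                          # resample until invertible
        Ai = [[rng.randrange(q) for _ in range(n)] for _ in range(n)]
        Ainv = try_inv(Ai)
    A[i] = Ai
    for j in range(d):
        if j != i:
            R[i, j] = mat_vec(Ainv, u(j))        # "preimage": A_i R_ij = u_j (mod q)

m = [1, 0, 1, 1]                                 # message bits
C = mat_vec(U, m)                                # Commit: C = U m

i = 2                                            # Open at position i: i-th entry zeroed out
pi = [sum(R[i, j][r] * m[j] for j in range(d) if j != i) % q for r in range(n)]

# Verify: C == A_i pi + m_i u_i  (shortness of pi is not checked in this toy)
assert [(x + m[i] * u(i)[r]) % q for r, x in enumerate(mat_vec(A[i], pi))] == C

# Stateless commitment update: set entry j to m_j' by adding (m_j' - m_j) u_j
j, new_mj = 1, 1
C2 = [(C[r] + (new_mj - m[j]) * U[r][j]) % q for r in range(n)]
m2 = m[:]; m2[j] = new_mj
assert C2 == mat_vec(U, m2)                      # matches a fresh commitment to m'
```

Swapping the modular-inverse "preimages" for genuine short Gaussian preimages, and adding the norm check on π_i, is what the real scheme's trapdoor sampling provides.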
To open this commitment at position i, we multiply the i-th row of the R̃ matrix from before by m. As a reminder of what that looks like: we multiply each entry of m by one of these R vectors and then sum them up, but the key thing to note is that we zero out the i-th entry of m. Then, to verify, we simply accept if π_i is sufficiently short and the commitment C equals A_i·π_i + m_i·u_i. As for why this works, the key thing to recognize is that when we multiply A_i by π_i, which is the i-th row of R̃ times m, multiplying A_i by these R's gives us u's. So A_i·π_i is the sum of u_j·m_j over all j ≠ i (remember, we zeroed out the i-th entry of m). Once we add m_i·u_i, it's as if we multiplied the matrix U by m, and of course that equals the commitment C. Now, to update commitments and proofs: stateless updates essentially fall out of the fact that the commitment function and the proof function are both linear. We simply multiply the delta between the old entry and the new entry by the appropriate u_j (for commitment updates) or R_ij (for proof updates), and add that to the old commitment or proof, respectively. Okay, moving on to functional commitments with authority. This looks very much like the diagram we saw for functional commitments before, except that now the authority that generated the public parameters remains online permanently. In order to use Open, you must first get an opening key from Extract. This opening key is public (you can imagine it being on a public bulletin board) and reusable. So we get the opening key from Extract and then use it in Open. The basis of our functional commitment scheme is what are called homomorphic commitments, first introduced in GSW13. A commitment to x with randomness R under a public matrix A is A·R + encode(x).
Here A is just some public matrix, and R is sufficiently short randomness. The key point is that the scheme has additive and multiplicative homomorphisms. So if we want to emulate arbitrary Boolean functions, addition and multiplication are of course enough: from them we can build NAND gates, et cetera. Given these homomorphisms, we can construct an algorithm Eval, which takes a function f, a commitment C_x, and, optionally, randomness R_x. It outputs a new commitment C_{x,f} and, if the randomness was provided, new randomness R_{x,f} such that R_{x,f} is still short and, if C_x is a commitment to x under randomness R_x, then C_{x,f} is a commitment to f(x) under R_{x,f}. We take f to be from an arbitrary function family here, but concretely you can imagine the family being Boolean functions. Now, to generate the public parameters for our functional commitment scheme, we first generate a matrix A with trapdoor T, and a uniformly random matrix C. We store the matrix C, the matrix A, and the trapdoor T as the extraction key, which will be kept by the online authority. The public parameters are just C and A, so they don't include the trapdoor. The tagged-trapdoor techniques that we use allow us to map A to a unique A_f for every f in the function family, and the key is that the trapdoor T still allows us to preimage sample with these A_f's. What's key for understanding this scheme, at least intuitively, is to think of C as a superposition of commitments to all functions f in the family. It's a commitment to functions, simultaneously a commitment to every function in the family, albeit under different randomness and different public matrices. Now to describe the Extract function, which is run by the online authority to generate the opening keys.
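The additive homomorphism just described is easy to see in code. Below is a toy sketch with hypothetical tiny parameters: encode(x) is taken to be x·G for an arbitrary public matrix G standing in for the gadget matrix, the randomness is sampled from {-1, 0, 1} as a stand-in for "short," and the more involved multiplicative homomorphism (via bit decomposition) is omitted.

```python
# Toy demo of the additive homomorphism of GSW-style commitments:
# Commit(x; R) = A R + x G (mod q). The sum of two commitments commits to
# x1 + x2 under randomness R1 + R2 (which in the real scheme stays short;
# here we only check the mod-q identity with made-up parameters).
import random

q, n, m = 257, 3, 6
rng = random.Random(1)

def mat(rows, cols, sampler):
    return [[sampler() for _ in range(cols)] for _ in range(rows)]

def add(X, Y):
    return [[(a + b) % q for a, b in zip(rx, ry)] for rx, ry in zip(X, Y)]

def mul(X, Y):
    return [[sum(X[r][k] * Y[k][c] for k in range(len(Y))) % q
             for c in range(len(Y[0]))] for r in range(len(X))]

def scal(s, X):
    return [[(s * v) % q for v in row] for row in X]

A = mat(n, m, lambda: rng.randrange(q))        # public matrix
G = mat(n, m, lambda: rng.randrange(q))        # stand-in for the gadget matrix (assumption)

def commit(x, R):
    return add(mul(A, R), scal(x, G))          # C = A R + x G

R1 = mat(m, m, lambda: rng.randrange(-1, 2))   # "short" randomness in {-1, 0, 1}
R2 = mat(m, m, lambda: rng.randrange(-1, 2))
C1, C2 = commit(1, R1), commit(0, R2)

# C1 + C2 opens to 1 + 0 under randomness R1 + R2
assert add(C1, C2) == commit(1, add(R1, R2))
```

This is the mechanism Eval exploits: given the randomness, it can track how the opening material transforms gate by gate, which is exactly the optional R_x input and R_{x,f} output described above.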
What Extract does is use the trapdoor T to generate a witness that C is a commitment to f. I said that C can be viewed as a superposition of commitments to all functions f in the family; Extract gives us a witness that C is a commitment to this particular f with respect to the public matrix A_f, and the opening key is that randomness R_f. Now, in order to commit: again, remember we think of C as a superposition of commitments to functions. We're going to apply a particular function across all those possible states, such that C_m, the output of Commit, is a superposition of commitments to f(m) for all f in the family. What is the function we need to apply? It's this function U_m, where the message m is hard-coded: U_m takes a function f as input and outputs f(m). So C is a superposition of commitments to functions, and C_m is a superposition of commitments to outputs, because we applied this function across all those possible states. Open looks very similar to Commit, because in fact it does the same thing: the C_m generated in Commit and the C_m generated in Open are identical. The difference is that Open takes in the message m, the function f, and the opening key R_f. The key here is that R_f is a witness that C is a commitment to f. Eval has that optional argument where we can pass randomness in and get new randomness out. So we pass it the witness that C is a commitment to f, and what we get out is a witness that C_m is a commitment to f(m). Then, to verify, we do exactly what I've described: we accept if R_{m,f} is short and C_m is indeed a commitment to y under the randomness R_{m,f} with respect to the public matrix A_f. This concludes my presentation.
Thank you very much for listening. I hope you have a great day.