Thank you; can you hear me well? Thank you for the introduction. Today I'm going to talk about subvector commitments with applications to succinct arguments, and, as already mentioned, this is joint work with Russell.

Let me start by giving the definition of a vector commitment. This comes from the work of Catalano and Fiore, where it was formalized for the first time, but it is actually something everybody is familiar with. A vector commitment allows one to compress a large database into a small digest. There is a commit algorithm that takes as input some randomness r and a vector v and produces a small commitment com. And there is an opening algorithm that takes as input a position i, the value at that position, and the randomness used for the commitment, and produces a short proof that the value at position i was actually v_i. The proof can then be used by anyone to verify that this is actually the case.

The only property that we want from a vector commitment is position binding, which essentially means that you cannot prove that a certain position opens to two different values. Hiding is not really important here, because the commitment is compressing, so it loses information anyway, and hiding can be achieved through a generic transformation. What we are interested in are the compression rate, which throughout this work is going to be perfect (just imagine a Merkle tree), so we can ignore it, and the proof size, which is going to be the focus of this talk.

Let me give you a few examples. The first one everybody is familiar with: the Merkle tree. The Merkle tree is very nice because it has a perfect compression rate, and you can give proofs of membership of size lambda * log n, where n is the size of the database. But the work of Catalano and Fiore at PKC 2013 showed how to shave off this additional log n factor by leveraging algebraic structure. Specifically, they introduced two schemes, one from CDH in bilinear groups and one from RSA, with proofs of size O(lambda), roughly. The nice thing is that they have algebraic structure, which is useful in general, not only for shaving off this factor. Unfortunately, both of the constructions they presented have a trusted setup; think of an RSA setup, where you cannot reveal the primes but still have to sample a random RSA integer.

So is this the end of the story? No; there is still a limitation in both of these constructions: what happens if you open at q many indices? The opening is going to be linear in q, and this is really just the trivial approach: you just give q many proofs. So the question is, can we do better?

The objective of my work with Russell was to remove this dependency on q, and to do so, where possible, without relying on a trusted setup. We were not always successful, so I will present some constructions and then show where this makes a difference, with some applications where it is particularly important.

So let me introduce the notion of a subvector commitment. A subvector, throughout this talk, is just a subset of the entries of a vector. I don't know if you have a better name; we couldn't find one, so I'm open to suggestions.
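(A minimal Merkle-tree vector commitment sketch in Python, illustrating the lambda * log n proof size just mentioned. This is toy code under simplifying assumptions: n must be a power of two, there is no domain separation, and all function names are illustrative, not from any library.)

```python
import hashlib

def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def commit(vector):
    """Build the tree bottom-up; the root is the commitment."""
    level = [H(v) for v in vector]                  # leaf hashes
    tree = [level]
    while len(level) > 1:
        level = [H(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        tree.append(level)
    return tree[-1][0], tree                        # (commitment, prover state)

def open_at(tree, i):
    """Authentication path: one sibling hash per level, so log2(n) hashes."""
    path = []
    for level in tree[:-1]:
        path.append(level[i ^ 1])                   # sibling at this level
        i //= 2
    return path

def verify(root, i, value, path):
    h = H(value)
    for sibling in path:
        h = H(h + sibling) if i % 2 == 0 else H(sibling + h)
        i //= 2
    return h == root

com, tree = commit([b"a", b"b", b"c", b"d"])
assert verify(com, 2, b"c", open_at(tree, 2))
```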
For a subvector commitment, the commitment algorithm is the same; the only thing that changes is that the opening does not take a single position anymore, but a set of positions. Of course, you also input a subvector, and everybody can verify that the subvector indeed contains the values at those positions i_1 through i_q.

The security notion is essentially a refinement of position binding. It says that for any two openings, which may refer to arbitrary sets of positions, if there is one position common to both, it has to open to the same value. So, essentially, we still cannot give two different openings for the same position; we just have to take into account the additional subvector structure. And compactness is the crucial requirement of subvector commitments: the proof size should be independent of q, the size of the set of indices that we give to the opening algorithm.

But now that we have this primitive, and we have learned that we can compute more things over the database, why not compute over the whole database? In fact, what we do is define an even more general notion, which we call linear map commitments, which allows one to open to an arbitrary linear map computed over the database. If you think of the database as a vector in Z_p^n for the sake of this talk, then F can be expressed as a matrix characterizing a linear map, and we can show that this linear map has a certain output value w, which in this case is going to be a vector of elements of Z_p.

I won't define position binding formally here, because it gets a bit tricky: you have several relations. Intuitively, if you think of the database as a set of unknowns, then each opening, together with its proof, defines a set of linear equations. What we want is that it is hard to produce a set of linear equations which does not have a solution, even if you get many openings.
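(A tiny Python illustration of what a linear map commitment opening claims, assuming the database is a vector over Z_p as in the talk. The point: a subvector opening is the special case where F is a selection matrix. The function name is illustrative.)

```python
p = 97  # toy prime modulus

def apply_linear_map(F, v, p):
    """Compute w = F*v mod p; an LMC opening certifies exactly this relation,
    without revealing the committed vector v."""
    return [sum(f_ij * v_j for f_ij, v_j in zip(row, v)) % p for row in F]

v = [3, 1, 4, 1, 5]                    # the committed database
F = [[0, 1, 0, 0, 0],                  # selection matrix: rows pick out
     [0, 0, 0, 0, 1]]                  # positions 1 and 4 ...
assert apply_linear_map(F, v, p) == [1, 5]   # ... i.e. a subvector opening
```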
Okay, so what are the applications of this? Our primary focus was on succinct arguments, so let me briefly recap the definition of a succinct argument and the folklore construction, and then show how to use these tools to improve the state of the art. This is one of the classical building blocks in cryptography and has plenty of uses. If L is an NP language, a succinct argument is an interactive protocol between a prover and a verifier, where the prover has a statement and additionally a witness for that statement, and in the end the verifier is either convinced or not of the membership of the statement in L. The soundness requirement is that if x is not in L, then the verifier should reject with probability greater than one half; usually (not always, but usually) negligible soundness error is then achieved by running the same protocol many times in parallel. We additionally want succinctness, which states that the communication complexity of the protocol should be independent of the size of the witness.

In order to introduce the construction, I need to give you a one-slide explanation of probabilistically checkable proofs. The PCP is a beautiful information-theoretic object; it has been a monumental line of work in complexity theory, and it has proved useful in cryptography primarily for exactly these applications. I will be simplifying a lot, so if you fear complexity theory, please don't panic. A probabilistically checkable proof allows one to encode a witness for a statement in such a way that a verifier with oracle access to this encoding can verify membership of the statement in the language by querying the oracle only a constant number of times. For the sake of this talk, again, we require only a probability of failure smaller than one half, and negligible soundness error is achieved by parallel repetition.

This area has seen huge improvements in recent years, and there have been many extensions, mostly aimed at improving prover efficiency, because the cost of computing this encoding is usually very large; most of the literature focuses on improving the concrete runtimes. Specifically, these extensions allow some relaxations of the model. For example, a linear PCP allows the verifier to query the encoding with linear functions: you can now compute a function over the whole database, but still only a constant number of times. And an IOP, an interactive oracle proof, is essentially the interactive version of a PCP, where the encoding is allowed to change in between queries.

Okay, so how do we use this? The construction follows the classical blueprint of Kilian, but instead uses a subvector commitment or a linear map commitment, and you will see why. In the first message, the prover computes the encoding of the witness, commits to it using a subvector commitment, and sends just this commitment to the verifier. Then the verifier picks a set of locations (more generally, a set of queries) and sends them to the prover, and the prover computes the opening relative to that set of queries. Once the verifier has the opening together with its proof, it verifies the proof and checks whether the PCP relation verifies correctly. Most of the time, this is made non-interactive using the Fiat-Shamir heuristic, since the queries can always be computed through a random oracle; in the end, the communication complexity of this protocol is really the size of the commitment plus the size of the openings (usually very tiny) and the size of the proof. (See the sketch of this message flow below.)

So what do we gain if, instead of a Merkle tree, we instantiate the commitment with a subvector commitment or a linear map commitment? The first thing we gain is generality, in the sense that the algebraic structure of a linear map commitment allows us to compute even linear functions and give succinct proofs for them, so we can support a larger class of PCPs, namely linear PCPs and interactive oracle proofs. I think this is actually an important point, because this is a very active area of research, rich in new schemes and new techniques. Essentially, what this tells you is that the linear map commitment is the only cryptographic object you need in order to instantiate a protocol like this: you just have to come up with your own linear PCP, which is difficult, but then you are good to go. And in terms of succinctness, thanks to the compactness requirement, we save a factor q, where q in this case is c times rho: rho is the number of parallel repetitions needed to boost soundness, and c is the constant number of queries to the PCP encoding per round.
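(A sketch of the Kilian-style message flow just described. All names here, pcp_encode, pcp_decide, svc_commit, svc_open, svc_verify, are hypothetical stand-ins, not the real primitives and not secure: the toy "proof" simply reveals the whole encoding, whereas the real construction's opening is compact. The point is only the shape of the protocol.)

```python
import hashlib, random

def pcp_encode(witness):
    return list(witness)              # placeholder "encoding" (identity)

def pcp_decide(positions, values):
    return all(v in (0, 1) for v in values)   # placeholder local test

def svc_commit(vector):
    com = hashlib.sha256(bytes(vector)).hexdigest()
    return com, vector                # (commitment, prover state)

def svc_open(state, positions):
    values = [state[i] for i in positions]
    return values, state              # toy proof: reveal everything (insecure)

def svc_verify(com, positions, values, proof):
    if hashlib.sha256(bytes(proof)).hexdigest() != com:
        return False
    return all(proof[i] == v for i, v in zip(positions, values))

# Message flow of the argument system:
witness = [0, 1, 1, 0, 1, 0, 1, 1]
com, state = svc_commit(pcp_encode(witness))       # prover -> verifier: com
positions = sorted(random.sample(range(8), 3))     # verifier -> prover: queries
values, proof = svc_open(state, positions)         # prover -> verifier: opening
assert svc_verify(com, positions, values, proof)
assert pcp_decide(positions, values)
```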
As a highlight, if we really try to shoot for the smallest proof size, this allows us to get proofs without a trusted setup (you still need a setup, but it is public coin, also called transparent in the literature), and the proofs are only about 5,000 bits for 128 bits of security. (I should have three more minutes, right? Fantastic.) I should mention that these numbers are achieved by optimizing for proof size, so this is not going to be a practical protocol; it's a feasibility result showing that we can do it with this much communication. And what do we lose? Unfortunately, subvector commitments and linear map commitments use public-key operations. It's unclear whether this is inherent, but prover efficiency is obviously affected, because instead of computing symmetric-key operations you rely on public-key crypto.

Right, so let's go to the constructions. We give two constructions of subvector commitments, one from groups of unknown order (I'll spend a few words on those in a moment) and one from CDH in bilinear groups, and we give one construction of linear map commitments in the generic bilinear group model.

Now, what are groups of unknown order? It's pretty self-explanatory: they are groups where the order is unknown. The classical example is RSA groups: you don't know the order of an RSA group unless you can factor the big composite. And there are other, less known, groups of unknown order; among the most interesting, in my opinion, are the class groups of imaginary quadratic orders. Why are they interesting? Because they allow representing elements in a succinct way (the group elements are relatively short), and they don't need a trusted setup. You still need a setup, but again, you can do it with public coins, which is especially relevant in practice: you just hash with a random oracle, and those are your parameters.

I just want to show here the construction for groups of unknown order. I'm not going to go into its details, mostly because Benedikt will show you an improved construction; this is just to show that it is a simple scheme, nothing to be afraid of.
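(To make "group of unknown order" concrete, a toy RSA instantiation in Python, with 16-bit toy primes and illustrative helper names; real deployments use primes of 1024 bits or more, and the `pow(e, -1, order)` modular inverse needs Python 3.8+. The point: whoever runs the setup learns the group order and must discard the factors, which is exactly the trust assumption that class groups avoid.)

```python
import random

def is_prime(n, rounds=20):
    """Miller-Rabin probabilistic primality test."""
    if n < 4:
        return n in (2, 3)
    d, r = n - 1, 0
    while d % 2 == 0:
        d, r = d // 2, r + 1
    for _ in range(rounds):
        x = pow(random.randrange(2, n - 1), d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False
    return True

def random_prime(bits):
    while True:
        p = random.getrandbits(bits) | (1 << (bits - 1)) | 1
        if is_prime(p):
            return p

p = random_prime(16)                   # toy size!
q = random_prime(16)
while q == p:
    q = random_prime(16)
N = p * q                              # the public group modulus
order = (p - 1) * (q - 1)              # must NEVER be revealed after setup

# With the order, taking e-th roots mod N is easy, so binding breaks:
e, y = 65537, random.randrange(2, N)
d = pow(e, -1, order)                  # secret exponent from the order
assert pow(pow(y, e, N), d, N) == y
```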
Yeah, and on this note, I would like to conclude and pass the microphone to the next speaker.

And the slide deck, please. Okay, thank you. So, I will talk about further batching techniques for vector commitments and accumulators, and as a second motivating application, next to these short proofs, we are going to think about blockchains. If you think about what blockchains look like today, you can think of them as databases, but there are different kinds of databases. On the left we have Bitcoin, and on the right we have Ethereum. In Bitcoin, the state of the database is really a set, an unordered set of coins, where each coin is an ID, a public key, and an amount. In Ethereum, it is a key-value map between accounts and balances. These states are already quite large, on the order of gigabytes, and every single miner, every full node, needs to store this entire state. Why do they need to store it? They need it in order to verify new transactions that come in. So even with the limited usage so far, these things are quite large.

So the motivation is: can we get something like a low-memory blockchain, where instead of storing the state itself, the miner just stores a short commitment? In the Bitcoin case, this would be a commitment to an unordered set, which is an accumulator. In the Ethereum case, it is a vector commitment, which we just heard about: a commitment to an ordered list. Notice the difference between the two. This is much better, because these commitments are very short; they can be less than a kilobyte. Then how do transactions look? How do I verify a transaction? Well, a transaction now contains not just the transaction information but also an additional proof, and this proof states that whatever coin Alice is trying to send to Bob is in this accumulator of previously unspent coins, so it is a valid coin, basically. With this proof, the miner, the verifier, can just say: this looks good, I'm going to accept this transaction.

So what do these commitments look like? Let's look at them in a little more detail. An accumulator is, as I said, a commitment to an unordered set, and it supports short inclusion and exclusion proofs, saying that something is in the set or not in the set. The security property is that for no element can I give you both an inclusion proof and an exclusion proof. We can instantiate accumulators from several constructions, such as classic ones like Merkle trees; the ones I'll be focusing on today are RSA accumulators, which have the nice property that the inclusion and exclusion proofs are constant size. Vector commitments we already heard about: they are a commitment to an ordered list, so I can give you positional openings, and the security property is that I can't open to two different elements at the same position. Again, we can instantiate these from Merkle trees, and we just saw a construction in groups of unknown order, or in RSA groups, for these vector commitments; again, those proofs are constant size.

The other thing we already saw some motivation for, batchable proofs, is also going to be very important in this application. Why is this the case? Well, think about it: if I have many transactions in a block, and they all have separate proofs with respect to the same accumulator, what I would like to do is somehow aggregate these proofs (maybe a miner, or some helper, could do this; it only involves public operations) and create one single short batch proof. And it's easy to see that for a Merkle tree, for example, I couldn't really do this: if I take multiple Merkle paths together, they are not really going to be shorter than the individual Merkle proofs. So we need something more algebraic.

So the desiderata for accumulators are: we already have a bunch of nice properties from prior work, namely constant-size openings and a trusted setup with a constant-size, small common reference string. What we add is the ability to give these batch inclusion and exclusion proofs: short proofs for larger statements, which are also very efficient to verify, with only a constant number of group operations. And for vector commitments, the desiderata look similar, and we just saw a beautiful construction with batch openings.
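(A back-of-the-envelope calculation for the Merkle batching point above, under the simplifying assumption that overlapping path nodes save little when the number of openings q is much smaller than the set size n: q separate Merkle proofs cost about q * lambda * log2(n) bits, whereas the algebraic batch proofs discussed next stay constant size regardless of q.)

```python
import math

lam = 256          # hash output size in bits
n = 2 ** 30        # size of the committed set / vector
for q in (1, 10, 100):
    print(q, "openings:", q * lam * int(math.log2(n)), "bits of Merkle proofs")
```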
But the problem with that construction, for this particular application, is that the common reference string is as long as the vector I'm trying to commit to, so I'm not really able to save space if I need to store this gigantic common reference string. Our work, which is a different construction, maintains the same properties as the old construction, but now also adds the property of having a constant-size reference string. Openings can be verified efficiently, and it additionally supports sparse vectors, that is, vectors that are potentially of exponential size but have only a polynomial number of non-zero entries.

For the rest of the talk, I will focus on how we achieve one particular property: the constant-size CRS. As a roadmap: we are going to build a vector commitment with all of these nice properties, in particular the constant-size CRS, from an accumulator, which already has the constant-size CRS property. And if the accumulator also has batch openings, then we get batch openings for the vector commitment as well. In order to build these batch openings, we need PoKE, a proof of knowledge of exponent, which is a succinct argument of knowledge of discrete logarithms in groups of unknown order. So this is the roadmap for the rest of the talk.

What does a vector commitment built from an accumulator look like? It turns out to be a fairly simple construction. The idea is that we commit to a bit vector: at every index, the vector is either zero or one. We simply map each index into the accumulator domain by hashing it, and if the bit of the vector is set to one, we add the hash of the index to the accumulator. To open, I can then give you either an inclusion proof if the bit is set, or an exclusion proof if the bit is not set, that is, if it's zero at that position. This construction already has some nice properties, because it inherits them from the accumulator: if the accumulator has a constant-size trusted CRS, then so does the vector commitment. However, it also has several downsides, mainly that if I want to open not just single bits but a large set of positions, I need to give you many inclusion proofs and many exclusion proofs. In particular, it's not a subvector commitment yet.

So what do we need to get that property? Batch inclusion and batch exclusion proofs. What do I mean by that? If I have one single constant-size inclusion proof which tells you that many bits are equal to one, and one single constant-size exclusion proof that many bits are equal to zero, then I have a constant-size opening for an arbitrary number of indices, and suddenly I have a subvector commitment with still the same great properties. So this is exactly what we'll achieve.

Let's look at what the concrete accumulator construction that we build on looks like. We are in a group of unknown order, which we already heard about today, and in this group we assume that taking roots of elements is hard. The domain of the accumulator is prime numbers, and we initialize the accumulator with a single random element g. To commit to a set of elements, we simply multiply all of these elements together (remember, they are primes) and raise the base value g to that product. Giving inclusion proofs is fairly easy (a toy sketch of the commitment and inclusion proof follows below), and we show in the paper how to give batch inclusion proofs as well, so let's focus on exclusion proofs.
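(A toy RSA accumulator matching the description above. The modulus is tiny and its factors are public here, which is insecure; in a real deployment nobody may know them. The hash_to_prime helper is a naive "next prime after a truncated hash" placeholder, and at this size distinct elements may collide; all names are illustrative.)

```python
import hashlib

N = 61 * 53          # toy modulus; real: ~2048 bits, unknown factorization
g = 2                # the random base element from setup

def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

def hash_to_prime(x: bytes, bits=16):
    h = int.from_bytes(hashlib.sha256(x).digest(), "big") % (1 << bits) | 1
    while not is_prime(h):
        h += 2
    return h

def accumulate(S):
    """A = g^(product of the primes representing S)."""
    u = 1
    for x in S:
        u *= hash_to_prime(x)
    return pow(g, u, N)

def prove_inclusion(S, x):
    """Witness: g raised to the product of all the *other* primes."""
    u_rest = 1
    for s in S:
        if s != x:
            u_rest *= hash_to_prime(s)
    return pow(g, u_rest, N)

def verify_inclusion(A, x, w):
    return pow(w, hash_to_prime(x), N) == A

S = [b"coin1", b"coin2", b"coin3"]
A = accumulate(S)
w = prove_inclusion(S, b"coin2")
assert verify_inclusion(A, b"coin2", w)
```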
So I want to prove to you that y is not in the set. Another way to say this is that y has to be coprime to u, the product of all the elements in the set. If this is the case, then I can find integers a and b such that a*u + b*y = 1, and it turns out that a is actually bounded in size (it's going to be less than y), while b might be much larger. So the exclusion proof is simply a and g^b, and the verification then checks this Bezout equation in the exponent: it just checks that a*u + b*y = 1, and this is a proof that y is not in the accumulated set.

So now say I want to create a batch exclusion proof; I somehow want to combine these exclusion proofs. Say I have two of them. It turns out, and we show this in the paper, that you can combine them. The problem is that a'', the coefficient in the combined proof, is now of size about x*y, so it grows. And if I do this repeatedly, over and over again for multiple proofs, the size of the combined proof will grow and grow, and it will actually be linear in the number of proofs I've combined. So I don't really have a batch proof: it's not smaller than just taking the individual proofs, at least not asymptotically.

What we would really like is to somehow compress these proofs, and our core idea is to not give a to the verifier in the clear, but instead prove knowledge that such an a exists. Right: I'm not going to give you a, I'm just going to prove to you that such an a exists, and maybe this proof can be short. So what we need is basically a proof of knowledge of discrete logarithm in these groups of unknown order: alpha here is an integer, maybe very, very large, and I want to prove to you that v is equal to u^alpha. The goal here is not zero knowledge; it's really mainly efficiency: the verifier's work and the communication should ideally be constant size, independent of alpha. So we are going to develop a fast protocol that achieves exactly this property, a succinct argument of knowledge of this statement.

The protocol we build on is the beautiful proof of exponentiation presented at Eurocrypt this year. What that protocol does is let the verifier check that an exponentiation was done correctly using less work than doing the exponentiation itself. The important thing, though, is that the verifier actually has u, v, and alpha as input, so it could have done the exponentiation itself; the protocol is just about saving work. But we will build on it to get a proof of knowledge later. The protocol works by having the verifier send a random lambda-bit prime ell to the prover, and the prover divides alpha by ell, getting a quotient q and a remainder r. Note that the remainder is small, but the quotient may be very large, on the order of alpha. The prover then sends over u^q, and the verifier computes the residue r = alpha mod ell itself and then checks that the equation v = (u^q)^ell * u^r holds, again in the exponent, and it can do this very efficiently. (A toy run follows below.)
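(A toy run of the proof of exponentiation just described, in a toy RSA group with a publicly factorable modulus; a real instantiation uses a genuine group of unknown order and a random lambda-bit prime ell, whereas this sketch samples ell from a small fixed list for simplicity.)

```python
import random

N = 61 * 53                 # toy stand-in for a group of unknown order
u = 2
alpha = 10 ** 50 + 37       # a very large exponent
v = pow(u, alpha, N)        # the claim: v = u^alpha

# Verifier: sample a random prime ell (toy list; really a random lambda-bit prime).
ell = random.choice([1009, 1013, 1019, 1021, 1031])

# Prover: divide alpha by ell and send only Q = u^q, a single group element.
q, r = divmod(alpha, ell)
Q = pow(u, q, N)

# Verifier: recompute the small residue r itself and check the equation
# v = Q^ell * u^r, which costs far less than computing u^alpha directly.
assert r == alpha % ell
assert (pow(Q, ell, N) * pow(u, r, N)) % N == v
```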
The problem here, as I already mentioned, is that the verifier needs to know alpha. So how can we resolve this? Our first approach: the prover computes the residue herself anyway, so what if she just sends the residue to the verifier? It turns out that this is in fact a proof of knowledge of exponent, but it's not a proof of knowledge of an integer exponent: alpha is not necessarily constrained to be an integer. For example, if the prover knows that v is the square root of u, and doesn't necessarily know an integer exponent for which the statement is true, it can still succeed in the protocol by simply computing r = 2^{-1} mod ell.

This is actually very much related to an impossibility result which says that no Schnorr-like protocol (and this protocol falls in that category) can achieve soundness error less than one half in these groups of unknown order. But if you look closely at this impossibility result, it assumes that there is no common reference string, and we show that a very simple common reference string actually suffices to get exponential soundness. The way we do that is by forcing the prover to commit to alpha using an element which is fixed in the CRS; then it is forced to use an integer alpha. I'll skip over the proof idea, which uses the Chinese remainder theorem to extract. Using this proof, we can now get a batch exclusion proof by simply giving you g^b and a proof that the corresponding exponent is known, and this proof is constant size.

From this batch exclusion proof, together with the batch inclusion proof, we finally get back to what we originally wanted: a vector commitment with constant-size openings, using a batch inclusion proof for the positions where the vector is equal to one and a batch exclusion proof for the positions where the vector is equal to zero. This final vector commitment has very nice properties, and it is the first vector commitment that achieves all of these properties in parallel: constant-size openings, as in the protocol that Giulio presented, but also fast verification, a constant-size CRS, and support for these sparse vectors. The prover time is larger than in the construction that Giulio presented, and of course significantly larger than for a Merkle tree, so this is one downside. You can find the paper online, and there is an implementation by the folks from Cambrian Labs, so you can also find that online. Thank you.

Do you want to come up? Both speakers are available for questions; the microphones are at the end of the aisle. Well, it seems that we have been both efficient and have achieved consensus. Let's thank the speakers again. Oh, sorry, I cannot see you behind this, but go ahead.

Great presentation, Benedikt. I was wondering: when you commit to a vector of lots of elements, you need lots of primes, and I was wondering if it's possible to speed up prime sampling if you know that you have to sample a bunch of primes?

That is a good question. I think you can do some small pre-computation to sieve out candidates that are divisible by small primes, but I don't think there is something like a batch primality test, which would be very useful: something where you can test many numbers for primality at once. I think that's an open problem, and it would be something really cool to have. And yes, I think testing primes is actually important for both constructions.

Any other questions? Well, in that case, let's thank the speakers again.