Okay, thank you for the introduction. So today I want to talk about lattice-based SNARKs, or succinct non-interactive arguments of knowledge, with a bunch of nice properties. This is joint work with Martin, Valerio, Giulio, and Aravind.

Here is the agenda for this talk. The talk consists of three parts, and the first two are rather short. There I will recall the notion of SNARKs as well as vector commitments, because after all it is vector commitment day. In particular, I will talk about vector commitments with functional openings. For each of these two primitives, I will first recall what they are, what we know about them in the literature, and how they relate to each other. Then, in the third and more technical part of the talk, I will present how we can construct a lattice-based vector commitment scheme with openings to polynomial maps.

So let us start with the first part, the notion of SNARKs. A SNARK is defined with respect to some NP language L, which is in turn defined by some relation R. A SNARK for this language L is simply a tuple of three algorithms: the setup algorithm, the prove algorithm, and the verify algorithm. Using these three algorithms, a prover can convince a verifier that a certain statement is in the language. The interface is as follows. First, the setup algorithm generates some public parameters. Given these public parameters, the prover inputs the statement that it wants to prove as well as the corresponding witness, and by running the prove algorithm it produces a proof. In turn the verifier, given the public parameters, the statement, and the proof, runs the verification algorithm, which is public, and decides whether or not to accept that the statement is in the language.

The most basic property of a SNARK is completeness, which says that if the statement and the witness satisfy the relation, then the verifier should be convinced. Conversely, we have the notion of knowledge soundness, which is basically the converse of completeness: if there exists an algorithm A that is able to convince the verifier that a statement is in the language, then there should exist an efficient knowledge extractor which extracts from this algorithm A a witness for that statement. These two properties are trivial to achieve, because the prover could just send the witness to the verifier. What makes SNARKs a non-trivial notion is the property of succinctness, which says that the size of the proof should be polylogarithmic in the size of the statement; the trivial solution of just sending the witness over would therefore violate succinctness. In the literature, when people talk about succinctness, they usually actually mean an even stronger property, which I will refer to as preprocessing. Preprocessing means that there exists an additional algorithm, the preprocessing algorithm, which allows the verifier to preprocess a statement for which it anticipates receiving a proof; afterwards, verification of proofs for this particular statement can be done much faster. Concretely, it can be done in time polylogarithmic in the statement size.
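To make the interface concrete, here is a minimal Python sketch; all names are illustrative, and the instantiation is deliberately the trivial one where the proof is the witness itself, which is exactly what succinctness rules out.

```python
# A minimal sketch of the SNARK interface described above. All names
# are illustrative. The instantiation is the trivial scheme whose
# proof is the witness itself: complete and knowledge-sound, but it
# violates succinctness, since |pi| = |witness| rather than
# polylog(|statement|).

def setup(security_param):
    """Generate public parameters (empty for the trivial scheme)."""
    return {}

def prove(pp, statement, witness):
    """Trivial prover: the proof is just the witness."""
    return witness

def verify(pp, statement, proof, relation):
    """Re-check the NP relation directly. A preprocessing SNARK would
    instead preprocess `statement` once, and then verify each proof in
    time polylogarithmic in the statement size."""
    return relation(statement, proof)

# Toy relation: statements (a, b) with witness w such that a * w == b.
relation = lambda stmt, w: stmt[0] * w == stmt[1]
pp = setup(128)
pi = prove(pp, (3, 12), 4)
assert verify(pp, (3, 12), pi, relation)
```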
So not only is the proof itself short, namely polylogarithmic in the statement size, but the time needed to verify it is also polylogarithmic in the statement size.

Okay, so SNARKs are very powerful objects; let us see what we currently know about them in the literature. Here, let us limit ourselves to SNARKs for unstructured languages, for example unstructured NP-complete languages such as circuit satisfiability or rank-1 constraint satisfiability (R1CS); in particular, we are excluding machine computations. Let us also focus on publicly verifiable SNARKs. For this category, here are all the schemes that I could gather from the literature, and we see that none of these constructions satisfies all of the following properties at the same time. The first is preprocessing, which I introduced on the previous slide; the second is being algebraic; and the third is post-quantum security. By algebraic, I mean that the construction uses only algebraic operations defined over the mathematical structure that the scheme is constructed over. And I use the term post-quantum security in a very liberal sense: I am counting all constructions that are not trivially broken by quantum computers, so essentially all schemes that are not based on groups.

Why do we care about these properties? The reason is that if we have a SNARK that is publicly verifiable, preprocessing, and algebraic, or even structure-preserving (a property stronger than algebraic, meaning that the relations checked by the verification algorithm are themselves supported by the SNARK), then such a SNARK is very friendly to recursive composition. Namely, we can prove knowledge of a SNARK proof using the SNARK itself, and this enables very powerful applications such as incrementally verifiable computation.

If we consider a different category of constructions, namely lattice-based SNARKs for unstructured NP-complete languages, then of course, since these constructions are lattice-based, they are believed to be post-quantum secure. However, in this regime we only know how to construct SNARKs which are either publicly verifiable or preprocessing, but not both. A natural question to ask is therefore: how do we construct SNARKs which satisfy all of these properties at the same time, that is, post-quantum security, public verifiability, preprocessing, and being algebraic and structure-preserving? This is exactly the objective of this talk and of this work.

Somewhat surprisingly, the main and only ingredient that we need to construct such a SNARK is simply a vector commitment scheme for constant-degree multivariate polynomials, and we are going to construct such a vector commitment scheme from lattices. Of course, since SNARKs are very powerful objects, these VC schemes are also very powerful, so as you may have expected, this construction will not come for free. In fact, we are going to introduce and use some new lattice-based knowledge and non-knowledge assumptions, and I will give some justification for them later.

Okay, so in the following I am going to talk in more detail about vector commitments with functional openings, especially because the terminology that I use is a little bit different from the previous talks. So here is an introduction to vector commitments with functional openings.
This notion is also known as functional commitments in the literature. The interface for this primitive is very similar to that of SNARKs. First of all, there is a setup algorithm which generates some public parameters. Then the prover commits to some vector X, obtaining a commitment. Later, the prover can decide to open the commitment to some function F admissible for the VC scheme, producing an opening proof pi. This opening proof pi can be interpreted as a proof that the vector X committed in the commitment satisfies F(X) = Y for some claimed image Y. The verifier, given the public parameters, the commitment, the function-image tuple (F, Y), and the opening proof, runs the public verification algorithm and decides whether or not to believe that the committed vector X satisfies F(X) = Y.

Similar to the completeness property of SNARKs, here we have the correctness property, which says that if F(X) indeed equals Y, then the verifier should be convinced. In terms of security, we consider three different variants of binding. The first, called weak binding, is the somewhat standard notion in the literature; usually it is just called binding, but we distinguish it from the other notion, which we call binding here, by calling it weak binding. Weak binding says that it is infeasible to create valid opening proofs for two function-image tuples (F, Y) and (F, Y') with the same F but different images Y and Y'. We call this notion weak because it does not seem to be very useful when we consider non-linear functions F. For these more complicated functions, it is arguably more meaningful to consider the binding notion, which says that it is infeasible to create valid opening proofs for a bunch of function-image tuples, for example (F_0, Y_0), (F_1, Y_1), (F_2, Y_2), and so on, which are inconsistent. By inconsistent I mean that there does not exist any preimage X satisfying F_i(X) = Y_i for all i. Finally, the strongest notion of binding that we consider is called extractability, which is similar to the knowledge soundness of SNARKs. This notion says that if there exists an algorithm which can produce a valid opening proof for some function-image tuple (F, Y), then there exists a knowledge extractor which can extract a vector X which is, on one hand, committed in the commitment, and, on the other hand, satisfies F(X) = Y.

In terms of efficiency, we again consider two notions, one stronger than the other. We call the first one succinctness, which should not be confused with the succinctness of SNARKs. Here, succinctness requires that the size of an opening proof, as well as of the commitment, is polylogarithmic in the size of the committed vector X; these sizes are, however, allowed to grow linearly in the size of the image Y. This notion of succinctness can be seen as a relaxation of the succinctness notions considered for other VC schemes, especially those constructed over groups, because there one would usually require these sizes to be independent of the size of X; here we are a little more liberal and allow a polylogarithmic dependency on the size of X. Naturally, to upgrade this notion, we can consider a stronger notion called compactness, which says that these sizes are also polylogarithmic in the size of Y.
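Here is the corresponding interface sketch in Python. The instantiation below is only a placeholder (the "commitment" is the vector itself, so it is trivially correct but neither binding in any meaningful sense nor succinct); it is meant just to pin down the shape of the four algorithms.

```python
# Interface sketch for a vector commitment with functional openings.
# The toy instantiation is a placeholder: the commitment is the vector
# itself, so it is trivially "correct" but neither succinct nor
# compact. It only fixes the shape of the four algorithms.

def vc_setup(security_param, vec_len):
    return {"n": vec_len}

def vc_commit(pp, x):
    return list(x)          # real schemes: |com| = polylog(|x|)

def vc_open(pp, x, f):
    return None             # real schemes: a short proof pi for f(x)

def vc_verify(pp, com, f, y, pi):
    return f(com) == y      # real schemes: check pi against (f, y)

pp = vc_setup(128, 4)
x = [1, 2, 3, 4]
com = vc_commit(pp, x)
f = lambda v: sum(v_i * v_i for v_i in v)   # a quadratic map
pi = vc_open(pp, x, f)
assert vc_verify(pp, com, f, 30, pi)        # f(x) = 1+4+9+16 = 30
```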
Finally, similar to preprocessing SNARKs, we can consider a preprocessing property for vector commitments, which says that there exists an additional preprocessing algorithm allowing the verifier to preprocess a function-image tuple (F, Y), so that the later online verification for this specific tuple can be done in time polylogarithmic in both the size of X and the size of Y.

Okay, so again, let us see what we know in the literature about vector commitment schemes. As we already saw from the previous talks, especially from Dario's talk, all of the existing constructions support openings either to position functions (single or multiple positions) or to linear functions. In particular, none of the constructions supports openings to non-linear functions, for example quadratic polynomials. Moreover, most of the constructions are over some sort of group; we have, for example, constructions over groups of unknown order or over pairing-friendly groups, and since these constructions are over groups, they are not post-quantum secure.

There are some exceptions to the second point, though, and the notable one is the Merkle tree. One can view a Merkle tree as a vector commitment for position functions: the vector X is committed by hashing it in a tree-like fashion, where the commitment is simply the root of the tree. For these simple position functions, it is not too difficult to show that the three notions of binding are actually equivalent to each other. Furthermore, since these binding notions are based on the collision resistance of hash functions, it is widely believed that they can be instantiated in a post-quantum secure way. This post-quantum security is also true for the other exceptions, such as those constructed over lattices. However, a common drawback of all these exceptions is that they only satisfy succinctness but not compactness. For example, for a Merkle tree to open a single position, say position two, we need to reveal a root-to-leaf path as well as all the siblings along it, and therefore the size of an opening proof is logarithmic in the size of the committed vector X. If we want to open multiple positions, we need to reveal a root-to-leaf path for each opened position, so the opening proof size grows linearly in the number of opened positions. Therefore these constructions do not satisfy compactness.

Given this state of affairs, it is natural to ask: can we construct vector commitment schemes which are post-quantum secure and compact, for any kind of functions? Or can we construct vector commitment schemes that support openings to non-linear functions, such as quadratic polynomials?

Now suppose we are able to do both of these at the same time, or maybe not post-quantum secure, but let us say compactness and quadratic polynomials at the same time. Then it turns out that there is a very easy construction of SNARKs from such a VC scheme. This construction is based on the simple fact that the language of satisfiability of systems of quadratic equations is already NP-complete. In this language, a statement is given by a function-image tuple (F, Y), where F is a quadratic polynomial map and Y is a claimed image of this map, and the corresponding witness is a satisfying assignment X with F(X) = Y.
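As a small illustration of this language, the sketch below fixes one convention for representing a quadratic polynomial map (each output coordinate as x^T A_k x + b_k^T x over Z_q; the representation and the toy modulus are my choices, not the paper's) and checks the relation F(X) = Y.

```python
# A sketch of the NP-complete language above: satisfiability of
# systems of quadratic equations over Z_q. Each output coordinate of
# the polynomial map F is x^T A_k x + b_k^T x, so a statement is
# (F, y) = ((A_k, b_k)_k, y) and a witness is x with F(x) = y mod q.

q = 97  # toy modulus

def eval_quadratic_map(F, x):
    """Evaluate a quadratic polynomial map coordinate by coordinate."""
    ys = []
    for A, b in F:
        quad = sum(A[i][j] * x[i] * x[j] for i in range(len(x))
                                          for j in range(len(x)))
        lin = sum(b[i] * x[i] for i in range(len(x)))
        ys.append((quad + lin) % q)
    return ys

def in_relation(F, y, x):
    return eval_quadratic_map(F, x) == y

# One equation in two variables: x0*x1 + x0 = y0.
F = [([[0, 1], [0, 0]], [1, 0])]
x = [3, 5]
assert in_relation(F, eval_quadratic_map(F, x), x)
```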
So given this setting, it is almost immediate how we can construct a SNARK for this language from a VC for quadratic functions, and the construction is as follows. The setup algorithm of the SNARK is exactly the setup algorithm of the VC. To produce a SNARK proof, the prover first commits to the satisfying assignment X, obtaining a commitment, and then immediately opens this commitment to the polynomial map F, producing an opening proof pi. The SNARK proof simply consists of the commitment together with the VC opening proof pi. Now, if the VC scheme is preprocessing, then of course the verifier can preprocess the statement, that is, the function-image tuple (F, Y), into statement-specific public parameters. And regardless of whether the scheme is preprocessing or not, SNARK verification is the same as VC verification: the verifier checks that F(X) = Y by verifying the opening proof.

As we can clearly see, the properties of VCs are in one-to-one correspondence with the properties of SNARKs: correctness of the VC implies completeness of the SNARK, extractability implies knowledge soundness, compactness implies succinctness, and preprocessing of course implies preprocessing.

So by now, hopefully you are convinced that a VC for quadratic functions, or non-linear functions in general, is a very powerful object, and that it is therefore an important open problem to construct one. This brings me to the third and technical part of the talk, where I present our construction of a lattice-based VC with openings to any constant-degree polynomial maps.

Hey, could you say a little more precisely what you mean by polynomial maps?

Right, so a polynomial map is simply a bunch of polynomials evaluated on the same input.

Okay, that's it. Okay, thanks.

Yeah, so it's like a linear map, but it's allowed to do non-linear things.

Okay, so let us get back to our construction, which consists of four steps. In the first step, we are going to translate a pairing-based VC scheme for linear functions into a lattice-based one. At the same time, we are going to translate the pairing-based computational assumption on which the weak binding property of this pairing-based VC is based into a new lattice-based assumption, which belongs to a family of assumptions that we call the k-R-ISIS assumptions; the weak binding of the lattice-based VC scheme will rely on one of these k-R-ISIS assumptions. Note that after this translation step, we can support linear functions, but the scheme is only weakly binding and only succinct, not compact. In the next few steps, we gradually upgrade these properties to what we want in the end. In particular, in the second step we exploit the ring structure available in lattices to get a VC scheme with polynomial openings. Then we introduce a knowledge version of the k-R-ISIS assumption, which allows us to upgrade the weak binding property to extractability. Finally, I will introduce some new tricks for aggregating opening proofs, which, combined with the knowledge assumption, allow us to upgrade succinctness to compactness.

So without further ado, let me proceed with the first step of translating a pairing-based VC scheme into a lattice-based scheme. To do this, I must first recall some background on pairing-based and lattice-based cryptography.
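Before the background recap, here is a minimal sketch of the VC-to-SNARK compiler just described; the vc_* procedures are passed in as abstract parameters and are placeholders, not the concrete scheme of this work.

```python
# A minimal sketch of the SNARK-from-VC compiler: the SNARK proof is
# just (commitment, opening proof). The vc_* arguments are abstract
# placeholders for a VC with openings to quadratic polynomial maps.

def snark_setup(vc_setup, security_param):
    return vc_setup(security_param)

def snark_prove(vc_commit, vc_open, pp, F, x):
    com = vc_commit(pp, x)        # commit to the satisfying assignment
    pi = vc_open(pp, x, F)        # immediately open to the map F
    return (com, pi)

def snark_preprocess(vc_preprocess, pp, F, y):
    # optional: preprocess the statement (F, y) once, so that later
    # verifications for it run in polylogarithmic time
    return vc_preprocess(pp, F, y)

def snark_verify(vc_verify, pp, F, y, proof):
    com, pi = proof
    return vc_verify(pp, com, F, y, pi)

# Toy usage, with the trivial placeholder VC inlined as lambdas:
pp = snark_setup(lambda s: {}, 128)
F = lambda v: v[0] * v[1]                      # one quadratic equation
proof = snark_prove(lambda pp, x: list(x), lambda pp, x, F: None,
                    pp, F, [3, 5])
assert snark_verify(lambda pp, c, F, y, pi: F(c) == y, pp, F, 15, proof)
```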
So in pairing-based cryptography, we are given three cyclic groups G1, G2, and GT; G1 and G2 are called the source groups and GT the target group. We assume that each of these groups has prime order q, and the groups are equipped with a pairing operation, which I will describe below. We adopt the implicit notation for group elements, namely the bracket notation, so that the generator of group i, for example, is written as [1]_i. With this notation, we can write the group operation within each group additively: for example, given [a]_i and [b]_i, we can add them together and get [a + b]_i. The pairing operation is then written multiplicatively: given an element [a]_1 in group 1 and an element [b]_2 in group 2, we can multiply them together and get [a·b]_T in the target group. An advantage of this notation is that it extends easily to matrix-vector operations; for example, given a group-1 vector [a]_1 and a group-2 vector [b]_2, their product is simply the inner product [a^T b]_T, this time in the target group.

Moving on to the lattice setting, we are now given a ring instead of a group, and this ring is equipped with a norm function, so we can talk about whether a ring element is long or short. One can easily extend this norm function to talk about norms of vectors of ring elements as well. We are also given a prime modulus q, so that we can define the quotient ring R_q, which is the ring R divided by the ideal generated by q. Finally, we are given a public matrix A and a public vector t. The public matrix A is uniformly random and has a wide shape, and the vector t, which we call the target vector, is also uniformly random and is as tall as A.

Next, let me give you a general blueprint, implicit in the literature, of how one can construct a pairing-based VC for linear functions. To begin, let us fix a random vector (or point) v which has the same dimension as the vector x that we want to commit to. In the following, we treat the entries of v as variables and define monomials, and in general polynomials, over them. These polynomials are going to be Laurent polynomials, meaning that we allow negative as well as positive powers, and of course the zero power too. The first polynomial that we define is actually a monomial, called the target monomial, which we denote by v-bar; for example, we can set v-bar to be the product of all the entries v_i. Next, for each entry v_i of v, we define a complement monomial, denoted v-bar_i, and these complement monomials are set up so that multiplying v-bar_i with v_i gives back the target monomial, for all i.

Next, we want to encode the vector x that we want to commit to as a polynomial in v, with coefficients depending on x. Similarly, we encode our linear function f as a polynomial in v, with coefficients given by f. These polynomials and monomials should be constructed in such a way that when we multiply the encoding of f with the encoding of x, we obtain a polynomial where the value f(x) appears as the coefficient of the target monomial v-bar, and all other terms lie outside the linear span of the target monomial.
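The monomial bookkeeping can be checked mechanically; here is a tiny sympy sketch, with v-bar chosen as the product of all entries, as in the example above.

```python
# A tiny sympy check of the monomial bookkeeping: with
# v_bar = prod_i v_i and complements v_bar_i = v_bar / v_i
# (Laurent monomials), we have v_bar_i * v_i = v_bar for every i.
from sympy import symbols, prod, simplify

n = 4
v = symbols(f"v0:{n}")
v_bar = prod(v)                              # the target monomial
v_bar_c = [v_bar / v[i] for i in range(n)]   # complement monomials

assert all(simplify(v_bar_c[i] * v[i] - v_bar) == 0 for i in range(n))
```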
So this is a little abstract; therefore let me give you a concrete example, which we will use for the rest of the talk. We can encode our function f as a sum of the complement monomials with coefficients given by the f_i, that is, enc(f) = sum_i f_i · v-bar_i, and encode x as enc(x) = sum_j x_j · v_j. We notice that if we define the two encodings in this way, then when we multiply them together, we indeed get f(x) as the coefficient of the target monomial, and all the other cross terms are of the form sum over i not equal to j of f_i · x_j · v-bar_i · v_j. What is important about these cross terms is that none of them lies in the linear span of v-bar.

Given this general blueprint, we obtain a generic construction of pairing-based VCs for linear functions, as follows. First, let me tell you how to commit to the vector x: we simply compute the encoding of x, but in group 1. In our example, the commitment is sum_j x_j · [v_j]_1, and to allow the prover to do this we give away the group-1 encodings [v_j]_1 in the public parameters. Now let us skip the opening algorithm for the moment and see how verification is done. As the first step, the verifier computes the encoding of f, but in group 2; for our example, we give away the group-2 encodings [v-bar_i]_2 of the complement monomials in the public parameters so that the verifier can do this. Next, the verifier computes a value which I call delta in the target group: the product of the encoding of f in group 2 and the encoding of x in group 1 (the latter, by the way, is the commitment given by the prover), minus the image y times the target monomial [v-bar]_T in the target group. What is special about this value delta is that if f(x) indeed equals y, then delta lies outside the linear span of the target monomial v-bar: it consists only of the cross terms. So if the prover now gives us an opening proof, which we parse as a group-2 element, we can check a pairing equation to decide whether or not to trust the prover. Given this verification equation, we can reverse-engineer what the prover should do: the prover computes the group-2 element [u]_2 as a linear combination of the elements [v-bar_i · v_j]_2 with coefficients f_i · x_j, for all i not equal to j. Again, we need to give away all of these elements in group 2 in the public parameters so that the prover can do this.

Now, given this scheme, let me show you some translation rules which allow us to translate it into the lattice setting. First we deal with the public parameters, and on the next slide we deal with the rest of the algorithms. Recall that the public parameters of the pairing-based scheme look like this; to turn them into lattice-based public parameters, we perform three steps.

Can I ask something, Russell? Is this quadratic size in the public parameters?

Yes.

Okay, okay, thanks.

Yeah, sorry to disappoint. Okay, so right. As the first step, we translate the group setup into a lattice setup: namely, we turn the generators of group 1 and group 2 into our public matrix A and public target vector t.
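Before continuing with the translation, here is a symbolic sanity check of the identity underlying this verification equation, using the example encodings above. The integers stand in for exponents; this is only a sketch of the algebra, not of the actual group scheme.

```python
# Sanity check: with enc_F = sum_i f_i * v_bar_i and
# enc_X = sum_j x_j * v_j, the product is f(x) * v_bar plus cross
# terms, so for y = f(x) the value delta = enc_F * enc_X - y * v_bar
# equals exactly the honest opening u.
from sympy import symbols, prod, expand, simplify

n = 3
v = symbols(f"v0:{n}")
v_bar = prod(v)
v_bar_c = [v_bar / v[i] for i in range(n)]

f = [2, 0, 5]                                # a linear function f
x = [1, 4, 3]                                # the committed vector x
y = sum(f[i] * x[i] for i in range(n))       # y = f(x) = 17

enc_F = sum(f[i] * v_bar_c[i] for i in range(n))
enc_X = sum(x[j] * v[j] for j in range(n))
delta = expand(enc_F * enc_X - y * v_bar)

# the honest opening: all cross terms with i != j
u = expand(sum(f[i] * x[j] * v_bar_c[i] * v[j]
               for i in range(n) for j in range(n) if i != j))
assert simplify(delta - u) == 0
```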
As the second step, we simply make the elements v_j public: unlike in the group setting, where the v_j are hidden in the exponent, we just publish them, and consequently the complement monomials v-bar_i as well as the target monomial v-bar are also public. As the third step, we do something different with the cross terms in group 2: for each v-bar_i · v_j, instead of a group element we give away a short preimage, a vector u_ij, such that A · u_ij = v-bar_i · v_j · t mod q. After applying all these transformations, we obtain the public parameters of the lattice-based scheme.

Then let me show you how we deal with the rest of the algorithms. As you may have expected, this scheme is going to be very similar to the pairing-based scheme, but let me explain the construction again step by step. The commitment is again the encoding of x, but this time computed modulo q; the values v_j are given in the public parameters, so the prover can do this. Let us again skip the opening algorithm and look at verification: the verifier first computes the encoding of f, again modulo q, and then computes the value delta, which is the product of the encoding of f and the encoding of x (that is, the commitment), minus the image y times the target monomial v-bar, again modulo q. Identically to the pairing setting, if f(x) = y, then delta lies outside the linear span of v-bar. Now suppose the prover gives us an opening proof, which this time we interpret as a short vector u; the verifier checks whether this vector satisfies A · u = delta · t mod q and whether u is indeed short. Given this verification equation, we can again reverse-engineer the opening algorithm: the vector u is simply the linear combination of the hint vectors u_ij given in the public parameters, with coefficients f_i · x_j for i not equal to j.

Now that we have translated the scheme, let us also translate the underlying hardness assumption on which the weak binding property is based. As you may have guessed, the pairing-based assumption says that, given the public parameters, it is difficult to find the group-2 encoding [v-bar]_2 of the target monomial. Applying the same translation rules, we obtain a lattice-based assumption which says that, given the lattice-based public parameters, it is difficult to find a short preimage u of the target monomial times the target vector, v-bar · t. Here, however, we need to strengthen the assumption a little and say that it is also hard for the adversary to come up with a short preimage of a small multiple of v-bar · t, rather than of exactly v-bar · t. In fact, this particular assumption belongs to a bigger family of assumptions, which we call the k-ring-inhomogeneous short integer solution, or k-R-ISIS, assumption family for short. Since all of these assumptions are new, let me give you some intuition for why we believe they are plausible; in particular, let me convince you why this specific assumption that our scheme relies on seems plausible. First, we notice that without the hints given in the public parameters, this problem is essentially the same as the ring short integer solution, or Ring-SIS, problem.
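As a toy illustration of this bookkeeping, one can redo the check over Z_q standing in for R_q; shortness is ignored entirely here, which is of course where all the actual security lives. The point is only that the honest opening coefficients hit exactly delta, so A · u = delta · t follows from A · u_ij = v-bar_i · v_j · t.

```python
# Toy run of the translated scheme's arithmetic over Z_q (standing in
# for R_q; shortness is ignored). We check that
#   sum_{i != j} f_i * x_j * v_bar_i * v_j = delta  (mod q),
# so A * u = delta * t follows from A * u_ij = v_bar_i * v_j * t.
q = 3329
n = 3
v = [17, 256, 1023]                    # public "monomial" values
v_bar = 1
for vi in v:
    v_bar = (v_bar * vi) % q
v_bar_c = [(v_bar * pow(vi, -1, q)) % q for vi in v]   # v_bar / v_i

f = [2, 0, 5]
x = [1, 4, 3]
y = sum(fi * xi for fi, xi in zip(f, x)) % q

com   = sum(x[j] * v[j] for j in range(n)) % q         # commitment
enc_F = sum(f[i] * v_bar_c[i] for i in range(n)) % q
delta = (enc_F * com - y * v_bar) % q

lhs = sum(f[i] * x[j] * v_bar_c[i] * v[j]
          for i in range(n) for j in range(n) if i != j) % q
assert lhs == delta
```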
Ring-SIS is believed to be hard and is a standard assumption in lattice-based cryptography. Therefore, the way to break our assumption would be to somehow use the hint vectors u_ij, and seemingly the only way to use these hints is to take a short linear combination of them. However, by construction, the target monomial v-bar is not in the linear span of the images appearing in the hints, and therefore the hints do not seem to be very useful.

Okay, so now that we have obtained a lattice-based VC for linear functions, let us see how we can upgrade it to support openings to polynomials. This step is actually very easy, and it is based on the following observation. Look at the verification equation of the pairing-based scheme: the value delta there is computed in the target group by evaluating the function f over a bunch of target-group elements, and since we only know how to perform linear operations on target-group elements, this forces f to be linear. However, when we perform the translation to the lattice setting, these target-group elements become elements of the ring R_q, where we can perform both addition and multiplication, and therefore nothing stops us from evaluating a higher-degree polynomial over these ring elements. In particular, we can evaluate constant-degree multivariate polynomials, for example quadratic ones.

Okay, so as the next step, let us see how we can upgrade weak binding to extractability. Before we do that, let me first show you how the weak binding property follows from the respective assumptions, in the pairing setting and in the lattice setting. In the pairing setting, suppose there exists an adversary which gives us a commitment, a function f, and two valid opening proofs for two images y and y'. By writing down the verification equations that these proofs satisfy and subtracting them, we can express the target monomial in group 2 as [v-bar]_2 = [(u - u') / (y' - y)]_2; this expression is well-defined because y is not equal to y', so the value 1/(y' - y) is well-defined. What we just did is use an adversary against weak binding to produce the group-2 encoding of v-bar; but we assumed that this is hard by the pairing-based assumption, and therefore we conclude that such an adversary is unlikely to exist.

In the lattice setting, the reasoning is very similar, except that we need to further restrict the function f and the committed vector x. To be concrete, we want f to be of constant degree with short coefficients in its expanded form, and we also want the committed vector x to be short. Given these restrictions, the reasoning is exactly the same: suppose an adversary gives us a commitment, a function, and opening proofs for y and y'. Again we write down the verification equations and subtract them, obtaining A · (u - u') = (y' - y) · v-bar · t mod q. Since the opening proofs u and u' are supposed to be short, their difference is also short; and by our restrictions on f and x, the images y and y' are also short, because they are supposedly equal to f(x) for a short x, so their difference is short too. But then we have found a short preimage of a small multiple of v-bar · t, which the k-R-ISIS assumption says is hard to do, and therefore we conclude that such an adversary should not exist.
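The arithmetic heart of this reduction fits in a few lines; the toy check below (arbitrary values over Z_q, no actual scheme) just confirms that the two deltas collapse to (y' - y) · v-bar when subtracted.

```python
# Two accepting openings u, u' for the same (com, f) but images
# y != y' satisfy A*u = delta*t and A*u' = delta'*t. Both deltas
# share the term enc_F * com, so their difference collapses to
# (y' - y) * v_bar, and u - u' is a (short) preimage of a small
# multiple of v_bar * t, which is exactly what k-R-ISIS forbids.
q = 3329
v_bar, enc_F, com = 1234, 777, 2024    # arbitrary toy values mod q
y, y_prime = 5, 8                      # two different claimed images

delta   = (enc_F * com - y * v_bar) % q
delta_p = (enc_F * com - y_prime * v_bar) % q
assert (delta - delta_p) % q == ((y_prime - y) * v_bar) % q
```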
So then, to upgrade this weak binding notion to extractability, the first question we ask ourselves is: can we do it in a black-box way? For linear functions, we can indeed show quite easily, using linear algebra, that the three notions of binding are equivalent. However, for non-linear functions, and in particular for polynomials of degree at least two, there is actually a black-box separation. This separation is ultimately due to the impossibility result of Gentry and Wichs, who showed that it is impossible to reduce any falsifiable assumption in a black-box manner to the adaptive soundness of a SNARG. Here, a SNARG (with a G) is similar to a SNARK, except that there is no knowledge extractor: a SNARG only satisfies soundness, not knowledge soundness. On the other hand, it is quite easy to see that if we have a VC scheme for quadratic polynomials which is binding, then we immediately get an adaptively sound SNARG for quadratic equations. And just now, in the previous slides, we showed that from a falsifiable assumption, namely the k-R-ISIS assumption, we get weak binding for such a VC scheme. Combining these three implications, we conclude that there is no black-box way to reduce weak binding to binding. Therefore, we need a new lattice-based knowledge assumption in order to prove binding, not to mention extractability.

This lattice-based knowledge assumption is the following. We consider an algorithm A which is given some short preimages u_i of images v_i · t. Suppose that, after being given these hints, the algorithm is able to produce a short preimage u of c · t for some ring element c. Then the assumption says that there exists an efficient knowledge extractor which can extract from this algorithm an expression of the element c as a short linear combination of the elements v_i, with coefficients given by some short x_i. Again, since this is a new assumption, let me give you some intuition for why it might be plausible. To break this assumption, we would basically need to construct an algorithm A which finds such preimages u without using the hint vectors u_i. One way of doing so would be to simply sample the preimage u at random, say from a Gaussian distribution. However, if we do this, the distribution of A · u is close to uniform, so A · u would very likely lie outside the linear span of t, and in particular would not equal c · t for any c. It therefore seems that the only way to produce such a short preimage is to take a short linear combination of the hints, and if the adversary simply does that, we can intuitively argue that an extractor exists.

Now that we are equipped with this knowledge k-R-ISIS assumption, let us see how to achieve extractability. Recall that the commitment is simply a linear combination of the monomials v_i, with coefficients given by the entries x_i of x. We perform the following modification: an opening proof now additionally contains another short preimage u' of the commitment times the target vector, that is, A · u' = c · t where c is the commitment. And we argue that with this modification, the scheme is already extractable.
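Operationally, the assumption can be pictured as follows. In this sketch a preimage is modeled only by the ring element it maps to, Z_q stands in for the ring, and the run shown is the honest one, where the extractor's output is immediate.

```python
# Knowledge k-R-ISIS, pictured operationally (this is an assumption
# of this work, not a standard one). Hints are short preimages u_i
# with A*u_i = v_i*t; we model a preimage only by the ring element it
# maps to. The honest way to produce a preimage of c*t is a short
# linear combination u = sum_i xs[i]*u_i, and the extractor's job is
# to output those short coefficients xs.
q = 3329
v = [17, 256, 1023]              # images v_i of the published hints

def image_of_combination(xs):
    """A * (sum_i xs[i]*u_i) = (sum_i xs[i]*v_i) * t, so a linear
    combination of the hints is a preimage of this c times t."""
    return sum(x_i * v_i for x_i, v_i in zip(xs, v)) % q

xs_honest = [2, 0, 1]            # short coefficients used by the prover
c = image_of_combination(xs_honest)

# The assumption: from ANY adversary outputting a valid (c, u), an
# extractor recovers SOME short xs with sum_i xs[i]*v_i = c (mod q).
# In this honest run the extractor can output xs_honest itself:
xs_extracted = xs_honest
assert image_of_combination(xs_extracted) == c
assert max(abs(x_i) for x_i in xs_extracted) <= 2    # "short"
```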
So here is the proof sketch. Suppose we have an adversary that produces a commitment, a function-image tuple, and an opening proof, which now consists of the preimage u as before and also the new preimage u' that we introduced above. Using the knowledge k-R-ISIS assumption, we can extract a short linear combination of the v_i that expresses the commitment c, with coefficients forming a vector x*. Next, we simply argue that this extracted vector x* satisfies F(x*) = y with very high probability. Suppose this is not the case, say F(x*) = y', which is not equal to y. Then we can run the opening algorithm honestly on x* to produce a valid opening proof for (F, y'). But now, on the one hand we have an opening proof for (F, y) from the adversary, and on the other hand we are able to produce a valid opening proof for (F, y'), which means we are able to break weak binding. However, we just showed that by the k-R-ISIS assumption our scheme is weakly binding, so this is infeasible, and we conclude that F(x*) must equal y.

In the interest of time, let me quickly go through how we get compactness, and let me simply skip this slide. To get compactness, we essentially program a Ring-SIS instance into the public parameters, where this Ring-SIS instance is given by a vector h, and we want this Ring-SIS instance to have a modulus p which is much, much smaller than the modulus q of the VC scheme. The idea is that we use this Ring-SIS instance to supply the coefficients of a linear combination over a bunch of different opening proofs, so that we can aggregate these opening proofs into a single one. More concretely, to prove that F_i(x) = y_i for all i, the prover instead proves the compressed relation which says that the sum over i of h_i · F_i(x) equals the sum over i of h_i · y_i. To give you a sense of why this is secure, let me sketch how the extractability of this multi-function scheme follows from the extractability of the single-function scheme. By the extractability of the single-function scheme, the extractor extracts some preimage x which satisfies the compressed relation, that is, the sum of h_i · F_i(x) equals the sum of h_i · y_i. Moving terms around, we obtain that the sum of h_i · (F_i(x) - y_i) is zero. Now suppose F_i(x) is not equal to y_i for some i; then the vector whose entries are F_i(x) - y_i is a short non-zero solution to the Ring-SIS instance h. Since we believe Ring-SIS is hard, we are convinced that this vector must be the zero vector, and therefore F_i(x) = y_i for all i.

So I think I'm running out of time, and I will simply conclude my talk here. I'm happy to take any questions.

There was a question in the chat. The question was as follows: if I understood correctly, binding is equivalent to extractability for linear relations. If so, can you elaborate on that?

Right, yeah, that is true, especially when the linear functions are defined over a finite field, and the reason is simply linear algebra. If you can collect a bunch of opening proofs for some inconsistent set of function-image tuples, then you can perform Gaussian elimination so that you get two linear functions evaluated on the same x with two different images, and with that you break weak binding. But okay, the question is about the relation between binding and extractability. For extractability it is very simple: you simply solve the system of linear equations, if it is consistent, and by solving the system you are able to extract the preimage.
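That linear-algebra extraction, sketched with toy values over a prime field; sympy's inv_mod plays the role of Gaussian elimination modulo q.

```python
# Extraction for linear openings over a prime field: openings of the
# same commitment to independent linear functions f_1..f_n with
# images y_1..y_n determine the committed vector, since solving
# F x = y (mod q) recovers it. So (weak) binding and extractability
# coincide for linear maps.
from sympy import Matrix

q = 97
F = Matrix([[1, 2, 0], [0, 1, 5], [3, 0, 1]])   # rows = linear functions
x = Matrix([4, 7, 2])                            # the committed vector
y = (F * x).applyfunc(lambda e: e % q)           # opened images f_i(x)

x_extracted = (F.inv_mod(q) * y).applyfunc(lambda e: e % q)
assert x_extracted == x
```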
I was asking if you can say something about the size of the opening proofs as a function of the vector length.

Right, so the size of an opening proof is polylogarithmic in both the length of the committed vector and the number of functions that you open to.

I have a question which is a little more high-level, and it's about the knowledge assumption. Your assumption can make it or break it, and that's great. But do we have any work on knowledge assumptions in the quantum world? What do we know about making this kind of assumption, this idea that the only possible algorithm is one that uses the particular secret that we extract? What validates such an assumption when we are looking at quantum algorithms? I'm not an expert, so that was the first question that came to my mind.

So as far as I know, there seems to be no literature that studies knowledge assumptions specifically in the quantum setting. However, there does exist a knowledge assumption in the lattice setting; I think it is the assumption that a certain lattice-based encryption scheme is so-called linear-only, meaning that you can perform only linear operations on ciphertexts. That is a knowledge assumption, and since it is lattice-based, you could believe that it also holds against quantum computers, but I'm not aware of any work that studies this specifically.

I think I have a paper with that assumption.

I am familiar with that one.

Yeah, but I was asking more in general about the relationship. We know what the relationship between the random oracle and the quantum world is, but I don't think anybody has looked, as you're saying, at the relationship in general between knowledge assumptions and the quantum world.

Right, right.

Sorry, does your paper set concrete parameters?

So, concrete parameters: yes, we do consider some concrete parameters, and these are based on the best known attacks. We simply observe that the best attack against these new assumptions seems to be to just solve the SIS instance in the old-school way, and therefore we use the best attacks against SIS to set our parameters. What I can say at the moment is that asymptotically they look great, but concretely speaking they are some orders of magnitude larger than the competition, though not too much larger, so we think this is a viable approach. So...