Hi, this talk is on Dory: efficient, transparent arguments for generalized inner products and polynomial commitments. I'm currently a VP of R&D elsewhere in industry, but most of this work happened while I was a senior researcher at Microsoft Research. So what's an efficient transparent argument? Well, it's some protocol between a Prover P and a Verifier V about some NP language L. The notion here is that there's some public instance x in L, and the Prover has some witness w for that instance. The Prover wants to show the Verifier that they know that witness, possibly without revealing anything about it. The Verifier is ultimately going to accept this proof if and only if the Prover does have such a witness, modulo some negligible error probability or the Prover being able to break some computational assumption. So what do we mean by efficiency? Well, here it means that the setup and the proving are both going to be linear in the size of the witness, and the verification and proof size are both going to be logarithmic in the size of the witness. In applications, you can think of the witness size as being maybe 2^20 or 2^30, so you really don't want to find yourself doing something quadratic, for example, in the size of the witness, and it's certainly convenient for the Verifier if all the operations they have to do are very fast. The other thing we need is transparency. Transparency means that if there's any setup phase, which can be anything from selecting fields, selecting some groups, or choosing a hash function, that setup can be done without embedding any secrets. For example, it would be troubling if there was some particular element whose discrete logarithm you needed to know in order to compute the setup, but such that knowing that logarithm let you break the scheme. So the other question is: what's a generalized inner product, and what's a polynomial commitment? For that, we're going to have to fix some field of scalars F, and we're going to introduce some pairing-friendly curve, so we've got two source groups G1, G2 and a pairing e from G1 x G2 to GT. In this context, a generalized inner product argument is really about three modules U, V, W and some inner product from U x V to W. In the simplest case, you could think of U and V both being vectors over F and W just being a scalar in F. But you could also take U or V individually to be one of the two source groups, in which case you might have something like a multi-exponentiation: a bilinear map from a vector of scalars in F and a vector of elements in G1 to a single element in G1. Or you could even take U and V to be vectors of elements of G1 and G2 respectively, and W would be an element of GT. The key point is, however you do this, you have some commitments D1, D2, C to elements of U, V and W, and you want to give some argument that you know openings of those commitments, such that if you took the inner product of the openings of D1 and D2, you would get the opening of C. So why would you care about generalized inner products, beyond a certain intrinsic interest in being able to argue about them? Well, basically you can use these things to build polynomial commitments.
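To pin down the three examples just mentioned, here is a hedged notational sketch (the symbols are my own, not the talk's), writing all groups additively:

\[
\langle a, b \rangle = \sum_i a_i b_i \in \mathbb{F}, \qquad
\langle a, T \rangle = \sum_i a_i T_i \in \mathbb{G}_1, \qquad
\langle V, W \rangle = \sum_i e(V_i, W_i) \in \mathbb{G}_T,
\]

for \(a, b \in \mathbb{F}^n\), \(T, V \in \mathbb{G}_1^n\) and \(W \in \mathbb{G}_2^n\). Each of these maps is bilinear, which is the only property the generalized inner product arguments rely on.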
For a polynomial commitment, let me fix some sequence of degrees d_1 through d_n. We can then have a commitment to some possibly multivariate polynomial f and a commitment to some scalar, and the notion is that if we fix some public point x_1 through x_n, you can make an argument that you can open that commitment to f, open the committed evaluation, and prove that f evaluated at x is that evaluation; and implicitly you can check that f has a degree sequence bounded by the d_i. Now, this differs slightly from the traditional definition of a polynomial commitment in the sense of Kate et al., because we're committing to the evaluation rather than having it in the clear, and we're allowing a general argument to take place, so it may be interactive rather than non-interactive. So why would you care about either of these things? Well, fundamentally, if you go and look at a bunch of recent systems, there's a similar structure for building SNARKs, that is, non-interactive arguments of knowledge for NP-complete languages. This proceeds in a sequence of steps. Typically the non-interactivity comes from Fiat-Shamir, so you compile out some interactivity. But then there's a decomposition of many of these arguments into two phases. First, there's some polylogarithmic, purely information-theoretic reduction from the full SNARK for some generic language to specifically checking polynomial commitments; this uses tricks like sumcheck and other interesting techniques. And then separately, you have to actually evaluate these polynomials under commitments: you might have to send some commitments from the prover to the verifier, and so on. So it's very natural to split those two things apart: the reductions are one thing to study individually, and the polynomial commitment is another. In much of the literature this isn't done, and instead those two things are bundled together, which will make it a little challenging to compare Dory with some other works. So what do we want of our polynomial commitment, beyond efficiency? Well, it needs to be concretely efficient, verification needs to be fast, and it's helpful if it's transparent; that's a nice feature to have in your SNARK. Also, you'll note that we have some O(1) collection of polynomials that might be evaluated at some O(1) collection of points, so it would be convenient if you could take that batch of polynomials and that batch of points and combine the evaluations in some efficient way. So given this background, schematically, what are some previous works in this space? We're going to compare against three works: Hyrax, Fractal and Supersonic, which are based on different assumptions. Hyrax is just based on some curve with hard discrete log, so concretely you can think of Curve25519. Fractal is based on hash functions and IOPs, so again, you need a hash function, but that's it. And Supersonic is based on groups of unknown order; these are an interesting kind of group whose order you somehow can't figure out. And as you can see, you get some interesting trade-offs. Hyrax is concretely pretty efficient for the prover, both to commit and to evaluate, but the commitment size and the evaluation proofs are a little large. For Fractal, across the board things look pretty good asymptotically, but the amount of communication required to do an evaluation involves large polylogarithmic factors, as does the prover's work to evaluate.
And for something like Supersonic, at least naively, it looks like both the setup and the evaluation take longer than the ideal. Now, these are asymptotic comparisons. Really, we want to understand this a little more concretely. That's challenging because it's implementation dependent, but if we just fix a concrete security level and get some concrete numbers, which are no more authoritative than being what my laptop spat out, then we can attach some concrete numbers to this. So, benchmarking with the fastest code I could find, we see that something like Hyrax, to do an exponentiation in Curve25519, takes maybe a few tens of microseconds. Doing a hashing operation on 64 bytes, using the hash functions used in the libiop implementation of Fractal, takes a few tens of nanoseconds. Supersonic is a little more challenging because there isn't a complete implementation of it, and there's been substantial dispute in the literature about how large the parameters need to be for a particular concrete security level for these groups of unknown order. But taking what are, as best I can tell, the current best estimates for how large these parameters need to be, you end up with something where you need a few hundred bytes for a group element, and doing an exponentiation, raising to an exponent of 128 bits, is going to take far longer than the curve operations above. So that makes all of this a little more challenging. For Hyrax, the concrete numbers are pretty good, and most of these steps are fine in practice, but both the size of the commitments and the verifier's work to check the evaluation proofs are a little larger than you might like. For Fractal, there are these polylogarithmic factors floating around that maybe would be fine if you really could take the fast concrete constants at face value. And for Supersonic, it's very challenging to see how you concretely deal with the very slow, large operations in these groups. Now, I mentioned that Fractal, or really any IOP-based scheme, would be great if you could really get away with using these concrete constants. But there's a small asterisk to append here, which is that whereas for things like Hyrax or Supersonic the expected soundness error of the underlying argument is very small, exponentially small, for the resulting IOPs the best proved soundness errors are order one. And that means you end up having to repeat the underlying argument some large number of times. So again, the libiop implementation of Fractal, if you ask it to give you 128 bits of security, will repeat the argument about 500 times. That's starting to get substantial; it makes the apparently fast concrete constants not really quite so fast. Now, obviously this is a talk, so we've got some table with all the prior work in it, and now naturally we introduce what this work did. Unlike the previous works, this already requires a pairing-friendly curve. Concretely, all these numbers are going to be about the BLS12-381 implementation given in blstrs. So what do we get? You'll note that we get linear time to commit; we get logarithmic evaluation proofs and logarithmic verification; and the time taken to prove an evaluation and the time to do the setup are both about square root of n, which is quite nice. This 192 bytes to serialize a GT element is a little challenging.
You need to use a few tricks in order to serialize elements of GT that efficiently, but that's what we get compared to prior work. So what is it? Well, fundamentally, it's a new generalized inner product argument, and once you compile it down, you get a new polynomial commitment scheme as well. Conceptually, it is most similar to Bulletproofs, or to the generalized IPA of BMMTV19. And it's easiest to present this for multilinear polynomials; that's not a fundamental restriction, and the paper goes into the details of how you handle arbitrary degree sequences. So, as I said, we need a curve with a pairing, and the security assumption we're going to use is SXDH. What does that mean? It means decisional Diffie-Hellman holds in both source groups, and it implies that co-CDH is hard between G1 and G2: if you give me, essentially, a scalar specified as the ratio of the logarithms of two elements in one group, and you give me an element of the second group, I can't find that element of the second group raised to that scalar. So, at a very crude level, what are we actually going to do inside Dory? Why is Dory an advance; why couldn't you have done Dory five years ago? Well, the key idea here is that we exploit structure-preserving commitments. In the setup, we sample some generators, about square root of n elements, and we compute a logarithmic number of structured commitments to them. And those commitments are actually the only things the verifier is going to use: the verifier has this logarithmic collection of commitments to structured data, and is going to use those in an interesting way. So basically, when we've done the sampling, we've sampled some generators for some commitment schemes, and we've then also generated commitments to those generators. The reason we do that is so the verifier can offload a bunch of computation, which they would otherwise have to do themselves, onto the prover. In a Bulletproofs-type scheme, you generate some challenges and then, at the end, the verifier normally has to do a linear-time computation. Instead, we're going to offload that to the prover. If you do this in the naive way, you get some extra claims in every round of a Bulletproofs-style argument, so you'd end up with something like a log-squared argument. Instead, we do in some sense the natural thing: we pull these claims together to keep the number of claims constant, and that's how we get to a logarithmic verifier. So now let's go through that in a little more detail. To do that, I need to show you some preliminaries. First things first, we're not going to worry about hiding; if you want to see how we do that, just go to the paper. Fundamentally, to get hiding you just add random multiples of your favorite base point to everything, and then you need to do a very small number of sigma protocols at the end just to prove that you could open things correctly. As is standard, we're going to build public-coin, honest-verifier statistical zero-knowledge arguments, and you can compile those to non-interactive arguments from there. Now, knowledge soundness is actually important; the way we're going to get that is via witness-extended emulation. Again, for people who've seen Bulletproofs, this is very much how it's done across the board.
So the notion here is that you have some polynomially-large tree, where each of the paths through the tree is labeled essentially with a transcript, a potential transcript of an argument that ends up convincing the verifier. And if you're given such a tree, then you can somehow mechanically go through it and extract the witness. This is quite convenient, because if you have witness-extended emulation for a whole sequence of arguments and you chain them one after another, then it's very easy to combine those and say: well, if each of my individual rounds has witness-extended emulation, then the whole thing does. And how are we going to get this? Well, the key thing, which we do again and again in all the proofs, is that we have some bounded-degree polynomials in one or two variables, and by looking at some small tree of accepting transcripts, we're able to argue that some polynomial of this form is zero at a large number of places. Then, in a completely unconditional way, if it's zero at a large enough number of distinct places, it has to be zero identically. That allows you to take some interpolation, extract the underlying coefficients of this polynomial, and use those to build witnesses for each round. So, having mentioned all these preliminaries, why can something like Dory work in the first place? What's our commitment scheme? It's going to be basically a Pedersen commitment, or rather the generalization of Pedersen commitments to the bilinear setting, which is due to AFGHO, that is, Abe et al. So what does that look like? Well, you pick some generators Gamma and some H, and if you want to commit to some vector v, then you take the inner product of v with the Gamma and you add some random multiple of H. Written like this, this is just a Pedersen commitment; if you substitute things appropriately, you can also get other commitments out, by picking different modules for where you're taking Gamma and v from. One interesting thing about both these schemes is that they're structure-preserving, or homomorphic: if I have two commitments to two vectors, I can multiply them by scalars and I can add them up, so I have F-linear operations on commitments. And particularly important for the AFGHO commitments is that there's a symmetry between the kind of thing you're committing to and the kind of thing that's used to generate the commitment: the set of generators you would use for a commitment to a vector of G1 elements is a vector of G2 elements, and vice versa. And the resulting commitment is very small; it's a single element of GT. So what we can do, and what we do pretty routinely, is the following. We find that we have some verifier computation that needs to be done using the public generators for some of the commitments and some other public data. Instead of actually having the verifier do that (the set of generators might be quite large), the verifier and the prover have already computed some commitment to those generators with respect to a second set of generators, and we offload the entire computation onto the prover: the prover comes in and gives some auxiliary argument that tells us that if we had done this particular operation on this public data, then we would have got the right answer. This can be quite convenient for us because the commitment is small, all the data is public, and the prover can, at least in principle, just do this. You can see this as being kind of like the computation commitments in the Spartan paper, which was one of the works that really clearly delineated the information-theoretic parts of these protocols from the computational parts.
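As a hedged sketch of the two commitment schemes just described (my notation; blinding generators H and H_T; everything written additively):

\[
\operatorname{Com}_{\mathrm{Ped}}(v; r) = \langle v, \Gamma \rangle + rH = \sum_i v_i \Gamma_i + rH \in \mathbb{G}_1, \qquad v \in \mathbb{F}^n,\; \Gamma \in \mathbb{G}_1^n,
\]
\[
\operatorname{Com}_{\mathrm{AFGHO}}(V; r) = \langle V, \Gamma \rangle + rH_T = \sum_i e(V_i, \Gamma_i) + rH_T \in \mathbb{G}_T, \qquad V \in \mathbb{G}_1^n,\; \Gamma \in \mathbb{G}_2^n.
\]

Both are linear in the committed vector; the AFGHO commitment to a vector of G1 elements uses G2 generators and vice versa; and in each case the commitment is a single group element.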
So, let's dig into Bulletproofs at a high level, just so we have some sense for how it works. This is certainly not how I would present Bulletproofs in full, but the notion here is that you repeatedly fold an inner product into an inner product of half the length. On one side you have two vectors a and b, their inner product, and some extra cross terms multiplied by powers of a challenge alpha; on the other side you have a single shorter inner product. Here a_L and a_R are the left and right halves of the vector a, and b_L and b_R are the left and right halves of the vector b. If you take the folded vectors alpha a_L + a_R and alpha^{-1} b_L + b_R and multiply out their inner product, you find that you get <a_L, b_L> and <a_R, b_R>, which add up to <a, b>, plus alpha <a_L, b_R> and alpha^{-1} <a_R, b_L>. So why is that useful? Well, it means that if we have some claim about what the inner product of a and b is, the prover can send these cross terms; the prover is ultimately going to have to compute <a_L, b_R> and <a_R, b_L>. Then the verifier can sample alpha, evaluate that combination given the claim about <a, b> and these claims about the cross terms, and end up with some claim about an inner product of half-length vectors. And if you inspect this (there are no commitments here yet), you see that given the folded vectors a', b' for a variety of alphas, you can interpolate and extract out what a_L, a_R, b_L, b_R must have been, and therefore reconstruct the opening; and along the way you check that the inner products are what they were claimed to be. So where does this go? Well, if you started with some vector of length 2^m, you could run this argument m times with some challenges, and you would ultimately end up with some claim about two length-one vectors, that is, scalars, and some claim about what their inner product is, which you can prove with a standard sigma protocol. And when you look at this, you find that there are these two vectors of scalars, obtained by taking the Kronecker products of a whole bunch of short vectors, things like (alpha_1, 1), (alpha_2, 1) and (alpha_1^{-1}, 1), (alpha_2^{-1}, 1), and the final claim you end up with is that the final value y_fin is the product of an inner product of x_plus with a and an inner product of x_minus with b. Now, so far this is all somewhat trivial; it's not really working with commitments or anything.
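Here is a minimal sketch of this commitment-free folding over a toy prime field, with plain integers standing in for everything, so there are no groups and no security, just the bookkeeping. The names and the field are my own choices, not the paper's code.

```python
import random

P = 2**61 - 1  # toy prime; a real instantiation works over the curve's scalar field

def inner(a, b):
    return sum(x * y for x, y in zip(a, b)) % P

def inv(x):
    return pow(x, P - 2, P)

def kron(factors):
    # Kronecker product of a list of short vectors
    out = [1]
    for f in factors:
        out = [(x * y) % P for x in out for y in f]
    return out

def fold_argument(a, b):
    # Repeatedly reduce a claim about <a, b> to a claim about half-length vectors.
    claim = inner(a, b)
    challenges = []
    while len(a) > 1:
        h = len(a) // 2
        cross_plus, cross_minus = inner(a[:h], b[h:]), inner(a[h:], b[:h])   # prover sends
        alpha = random.randrange(1, P)                                       # verifier samples
        claim = (claim + alpha * cross_plus + inv(alpha) * cross_minus) % P  # verifier updates
        a = [(alpha * a[i] + a[h + i]) % P for i in range(h)]                # prover folds
        b = [(inv(alpha) * b[i] + b[h + i]) % P for i in range(h)]
        challenges.append(alpha)
    return a[0], b[0], claim, challenges

m = 3
a = [random.randrange(P) for _ in range(2**m)]
b = [random.randrange(P) for _ in range(2**m)]
a_fin, b_fin, claim, chals = fold_argument(a, b)

# Final claim: the two surviving scalars multiply to the folded claim ...
assert a_fin * b_fin % P == claim
# ... and they are exactly <x_plus, a> and <x_minus, b> for the Kronecker-product vectors.
x_plus = kron([(c, 1) for c in chals])
x_minus = kron([(inv(c), 1) for c in chals])
assert a_fin == inner(x_plus, a) and b_fin == inner(x_minus, b)
```

In the real argument the prover would of course send commitments rather than the vectors themselves; that is what the next step adds.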
Now suppose you wanted to take this kind of argument, such as it is, and turn it into something that works with committed vectors. Well, you would fix some generators Gamma_1 and Gamma_2 for commitments to the vectors v1 and v2, and the prover would claim to know those vectors. Following the same kind of folding procedure, and just keeping track of what you know, you would end up reducing to a claim that the prover knows two particular elements v1', v2' such that their pairing is one particular thing, the pairing of v1' with the inner product of x_minus and Gamma_2 is something, and the inner product of x_plus and Gamma_1 paired with v2' is something else. So these are the commitments we have to evaluate at the end. And then, finally, what would the prover do? Again, if you don't care about zero knowledge or hiding, the prover could just send you v1' and v2', and the verifier then has to go and check these inner products of the public vectors x with the public vectors of generators Gamma_1, Gamma_2. Now, naively, this is still linear work for the verifier, which is a little troubling. In the context of polynomial commitments, there are some tricks you can use to split the polynomial evaluation into a vector-matrix-vector product, and this is what Hyrax does; it basically means the verifier only needs to do square-root-of-n work, but you do actually have to send square-root-of-n commitments along the way. So let's dig into this a little further. We have something here which is logarithmic, except for the fact that, right at the end, the verifier has to do this apparently linear-time computation between a public vector of scalars and a public vector of generators. So how could we try to offload this? Well, we go back to thinking about the structure of those scalars. We have this vector of scalars x_plus, say, and it's been constructed as a Kronecker product. So if we split Gamma_1 into a left part and a right part, then what you find is that this inner product of x_plus with Gamma_1 is some inner product of a slightly shorter vector, x_plus prime say, which is the Kronecker product of the remaining factors, with this other vector on the right: alpha_1 Gamma_1L plus Gamma_1R. So this is a shorter inner product. And you might say: so what? It's a slightly shorter inner product, but you've only halved the length. Well, you can offload this to the prover. If the verifier knew some precomputed commitments to Gamma_1L and Gamma_1R with respect to some other generators, then they could combine those commitments, and that would give them a commitment to this right-hand vector, alpha_1 Gamma_1L plus Gamma_1R. And x_plus isn't committed anywhere, but it turns out that that's fine, because it's constructed as a Kronecker product: you only need to keep track of a logarithmic number of scalars, and you never actually have to instantiate x. As a corollary, it turns out that you can take the inner product of two vectors that are both built up as Kronecker products of short vectors in logarithmic time. So what is this telling you? It's telling you that if you tried to offload this right-hand inner product, of x_plus prime with alpha_1 Gamma_1L plus Gamma_1R, to the prover, then after a logarithmic amount of work you've halved the length, and you'd have a half-length generalized inner product; the prover can then try doing it again. And to do this you need to be able to recurse, but you can recurse, because commitments to vectors in G1 are given by vectors of generators in G2, and commitments to vectors in G2 are given by vectors of generators in G1.
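And here is a quick sketch of that corollary, again over a toy field with names of my own: when both vectors are Kronecker products of length-2 factors, the inner product splits into a product of m tiny inner products, so it takes O(log n) field operations rather than O(n).

```python
import random

P = 2**61 - 1

def inner(a, b):
    return sum(x * y for x, y in zip(a, b)) % P

def kron(factors):
    out = [1]
    for f in factors:
        out = [(x * y) % P for x in out for y in f]
    return out

def structured_inner(us, ws):
    # <kron(us), kron(ws)> = prod_i <us[i], ws[i]>
    result = 1
    for u, w in zip(us, ws):
        result = result * inner(u, w) % P
    return result

m = 4
us = [(random.randrange(P), 1) for _ in range(m)]                      # e.g. (alpha_i, 1)
ws = [(random.randrange(P), random.randrange(P)) for _ in range(m)]
assert structured_inner(us, ws) == inner(kron(us), kron(ws))           # O(m) vs O(2^m)
```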
If you just did this naively, you'd end up with something like log-squared proof size and verifier compute, which is good, but not great. We can push a little further and combine these claims. Again, this is just an identity about inner products of vectors: if you have four vectors a, b, c, d, and you combine a and c with some beta, and b and d with that beta, you get <a + beta c, b + beta d> = <a, b> + beta(<a, d> + <c, b>) + beta^2 <c, d>. When you look at this, you see that <a, b> shows up as the beta-independent term, <c, d> shows up as the beta-squared term, and the only thing that's left over is this single beta term. So if the prover wants to claim that they know some a, b, c, d with given inner products s and q, then the prover can send this cross term, or just some claim for it; the verifier can sample beta uniformly at random; and then we require the prover to show knowledge of vectors whose inner product is s plus beta times the cross term plus beta squared times q. And again, analyzed in the same way: if the prover can do this for a whole bunch of betas, then you can interpolate out the vectors they must have had, and the extractor can reconstruct them. So this is interesting, because it means that if I've got two inner product claims to handle, then there's a two-to-one reduction: if the prover has two claims, they can reduce them to one at some slight cost. And if the vectors are committed, then the verifier is actually going to have some commitments here; given a commitment to the inner product of a and b, and a commitment to the inner product of c and d, just doing this linear combination homomorphically is fine. So what does this look like in practice? Well, we take what would previously have been an inner product claim in the committed setting, and we run one round of Bulletproofs on it, so we've just done a simple halving of the length. Then we have some folded generators: one which is a combination of the generators Gamma_1, and one which is a combination of the generators Gamma_2. The claim is that the prover knows openings such that five claims hold: essentially, C has been replaced by C prime, because we've halved the length, and the generators by these folded generators. In this case, the verifier is able to compute commitments to the folded generators, because they know commitments to Gamma_1L, Gamma_1R, Gamma_2L and Gamma_2R; and C prime, D1 prime, D2 prime come straight out of the Bulletproofs round. So you've got five claims, and you want to somehow reduce them. You could do this in the naive way; instead, the prover makes additional claims about the cross terms, the verifier can precompute some of those cross terms when they're purely about public data, and that means you can combine these claims about C prime, D1 prime, D2 prime and update all the commitments as appropriate. This would give you a log-size proof and, as a corollary, logarithmic verifier computation. But the constants are horrible: something like 20 or 30 pairing operations per round. It's quite bad. So what is Dory? Dory is this, but rearranged so that we have much better concrete constants: very concretely, only a small number of group elements get sent in each round.
So what is the Dory reduction? This is the analog of the Bulletproofs reduction. We start with the prover knowing some v1 in G1^m and v2 in G2^m, together with their inner product. The commitment to v1 is D1, the commitment to v2 is D2, and C commits to the inner product. The first thing the prover is going to do is send commitments to the left and right halves of v1, and to the left and right halves of v2, with respect to the next set of generators. Then the verifier is going to sample some beta; this is morally doing exactly what beta was doing a few slides ago. The prover is going to update v1 and v2 by adding multiples of Gamma_1 and Gamma_2, with beta and beta inverse. Once that has happened, the prover sends the cross terms: for these new, modified v1 and v2, the cross terms between the left half of one and the right half of the other, in both directions. The verifier samples alpha, and now the prover does the folding operation: v1 gets replaced by alpha times its left half plus its right half, and likewise, with alpha inverse, for v2. And what does the verifier have to do? Well, they have to do these slightly messy-looking computations in order to update C prime, D1 prime, D2 prime. You'll notice that D1L, D1R, D2L, D2R, the commitments sent in the first step, are really only being used to construct the new commitments D1 prime and D2 prime; the new inner product claim C prime is constructed entirely from the old claim, the old commitments D1 and D2, and the cross terms C plus and C minus. Now, this looks a little bit messy, and in some sense it is. But when you look at it in detail, you find that there are six GT elements being sent in each round, and if you take this verifier computation and use the usual tricks to defer it to the end, then there are only nine exponentiations per round. At the end of this, there's a claim that the prover knows v1 prime and v2 prime, both of half the length they started with, such that C prime is a commitment to their inner product and D1 prime and D2 prime are commitments to the half-length vectors with respect to the halved generators. You'll notice this lines up exactly with the initial statement of the Dory reduction, but with m replaced by m over two, so we can iterate. And once we get down to m equal to one, we finish with a slightly irritating but essentially straightforward sigma protocol.
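To make the bookkeeping concrete, here is a toy, insecure model of one round of this reduction, with field elements standing in for group elements and products standing in for pairings, so that the verifier's update formulas can be sanity-checked. The names and update expressions follow my reading of the steps just described; this is a sketch, not the paper's implementation.

```python
import random

P = 2**61 - 1

def rand_vec(n):
    return [random.randrange(P) for _ in range(n)]

def inner(a, b):
    return sum(x * y for x, y in zip(a, b)) % P

def inv(x):
    return pow(x, P - 2, P)

n = 8
h = n // 2
Gamma1, Gamma2 = rand_vec(n), rand_vec(n)        # "generators" standing in for G1^n, G2^n
Gamma1p, Gamma2p = rand_vec(h), rand_vec(h)      # next-level (half-length) generators

# Setup-time data the verifier keeps: commitments to the generators themselves.
chi = inner(Gamma1, Gamma2)
g1L, g1R = inner(Gamma1[:h], Gamma2p), inner(Gamma1[h:], Gamma2p)
g2L, g2R = inner(Gamma1p, Gamma2[:h]), inner(Gamma1p, Gamma2[h:])

# Prover's witness and the verifier's starting claims.
v1, v2 = rand_vec(n), rand_vec(n)
C, D1, D2 = inner(v1, v2), inner(v1, Gamma2), inner(Gamma1, v2)

# --- one reduction round ---
# 1. Prover sends commitments to the halves of v1, v2 under the next-level generators.
D1L, D1R = inner(v1[:h], Gamma2p), inner(v1[h:], Gamma2p)
D2L, D2R = inner(Gamma1p, v2[:h]), inner(Gamma1p, v2[h:])
# 2. Verifier samples beta; prover folds the generators into its witness.
beta = random.randrange(1, P)
v1 = [(x + beta * g) % P for x, g in zip(v1, Gamma1)]
v2 = [(x + inv(beta) * g) % P for x, g in zip(v2, Gamma2)]
# 3. Prover sends the cross terms of the updated vectors.
C_plus, C_minus = inner(v1[:h], v2[h:]), inner(v1[h:], v2[:h])
# 4. Verifier samples alpha; prover halves its vectors.
alpha = random.randrange(1, P)
v1_next = [(alpha * v1[i] + v1[h + i]) % P for i in range(h)]
v2_next = [(inv(alpha) * v2[i] + v2[h + i]) % P for i in range(h)]
# 5. Verifier updates its three claims using only sent values and setup data.
C_next = (C + chi + beta * D2 + inv(beta) * D1
          + alpha * C_plus + inv(alpha) * C_minus) % P
D1_next = (alpha * D1L + D1R + alpha * beta * g1L + beta * g1R) % P
D2_next = (inv(alpha) * D2L + D2R + inv(alpha) * inv(beta) * g2L + inv(beta) * g2R) % P

# The updated claims hold for the half-length witness and generators, so we can recurse.
assert C_next == inner(v1_next, v2_next)
assert D1_next == inner(v1_next, Gamma2p)
assert D2_next == inner(Gamma1p, v2_next)
```

In the real scheme these claims and messages live in GT, the verifier's updates are GT exponentiations, and the final length-1 claim is closed out with the sigma protocol mentioned above.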
So now that we've sketched this out, this is interesting: it's an inner product argument of a sort, but it's not fully generalized, in that we're relying here on v1 and v2 being vectors over G1 and G2. That's not a fundamental problem, but in the interest of time we'll defer most of the details to the paper. What do we fundamentally do? Well, we add some public vectors of scalars s1 and s2; it's basically just adding some more inner products, between v1 and s2 and between v2 and s1, and we actually prove a fully general statement: vectors over G1 and G2 and two vectors of scalars, and we just prove everything at once. For general vectors s1 and s2 there's really no way to do this in less than linear time, because you actually have to read them as part of the statement. But for the polynomial commitments it turns out that s1 and s2 again have this very explicit Kronecker product structure and are specified by a logarithmic number of values. You could actually just directly construct a polynomial commitment from this and get a logarithmic-time verifier, but it's convenient to use the two-tier matrix trick, in the same sort of way that Hyrax and related schemes do. Here the notion is that you replace the evaluation of f at some point x with a product of a vector from the left and a vector from the right with some matrix M, where these left and right vectors both have length about the square root of n. That's kind of convenient, and what's particularly convenient about it for us is that this directly encodes as one inner product in G1, one inner product in G2, and a cross term, which means we get to halve the number of rounds. And in the same way that we're able to batch individual inner products, it turns out you can track that through the entire process, so we can do batching at every possible size. This is particularly nice because it means that if we have some batch of polynomial commitment evaluations, the marginal cost of doing one more, for the verifier at least, is just a handful of exponentiations and a logarithmic number of field operations. For the prover, similarly, it's about square root of n extra work per evaluation; it's basically straightforward.
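Here is a small sketch of that matrix trick for a multilinear polynomial, once more over a toy field and with hypothetical names: the coefficients sit in a 2^k-by-2^k matrix M, and the evaluation point only enters through two tensor-product vectors of length 2^k.

```python
import random

P = 2**61 - 1

def tensor_point(xs):
    # Tensor product of (1, x_i) over the given variables
    out = [1]
    for x in xs:
        out = [(a * b) % P for a in out for b in (1, x)]
    return out

k = 3
M = [[random.randrange(P) for _ in range(2**k)] for _ in range(2**k)]  # coefficient matrix
xs = [random.randrange(P) for _ in range(2 * k)]                        # evaluation point
L, R = tensor_point(xs[:k]), tensor_point(xs[k:])

# L^T M R: one length-2^k inner product per row, then one more across the rows.
vMv = sum(L[i] * sum(M[i][j] * R[j] for j in range(2**k)) for i in range(2**k)) % P

def direct_eval():
    # Direct multilinear evaluation for comparison: entry (i, j) of M multiplies the
    # monomial selected by the bits of i over the first k variables and of j over the rest.
    total = 0
    for i in range(2**k):
        for j in range(2**k):
            term = M[i][j]
            for t in range(k):
                if (i >> (k - 1 - t)) & 1:
                    term = term * xs[t] % P
                if (j >> (k - 1 - t)) & 1:
                    term = term * xs[k + t] % P
            total = (total + term) % P
    return total

assert vMv == direct_eval()
```

The assert just checks that the vector-matrix-vector form agrees with evaluating the multilinear polynomial directly.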
Now, there are various other optimizations. First, as I mentioned, the argument for the public scalar vectors actually handles the two generalized inner products at the same time, which is convenient. It turns out we have to do one more small sigma protocol to be able to move from a commitment in GT to something committing to the underlying vector, but this is basically straightforward. Also, as I mentioned way back at the start, we do some optimizations for serialization of GT. For something like the BLS12-381 curve, GT is naively an element of Fp12, so a GT element would naively take twelve field elements to encode. It turns out you can get that down to four field elements with just a little bit of work, and the formulae are actually quite nice and apply generically to essentially all of the standard pairing-friendly families. It's also convenient because you get fast squaring formulae, which, if my recollection is right, were good for a 30 or 40 percent speedup on the verifier side. And then we do the usual tricks: you batch all your pairings together, and you combine all of the verifier's checks using independent random scalars. On the prover side, we do a certain amount of work because, for the polynomial commitment in particular, the bulk of the prover computation is many multi-exponentiations with respect to the same base points: the prover has essentially n scalars and some vector of bases of length root n, and each batch of root n scalars gets multiplied into that same vector. So what does the implementation look like? Well, we're based on blstrs, which is a fast Rust library for the BLS12-381 curve that internally backs onto a C back end. The baseline we compared to was the polynomial commitment scheme that's implicit in Spartan, which is an optimized derivative of the scheme in Hyrax, based on curve25519-dalek, a very fast implementation of Curve25519. To do this, it was about 1,600 lines of code on top of blstrs to do the torus-based serialization of GT and to reimplement the Pippenger multi-exponentiation to support these very large multi-exponentiations, and then about another 3,400 lines of code to implement Dory itself. So what's the performance like? Well, in a lot of ways it's what you would expect. For the prover, it turns out that the dominant cost is the order-n operations in G1. BLS12-381 is a bit slower than Curve25519, but that's essentially a constant factor, as you can see. The size of the evaluation proof, similarly, is logarithmic in both cases; the constants are a little worse, since we're shipping several elements of GT per round, versus Spartan, where you send only a couple of group elements per round, which are also smaller because the curve is different. Where things get really interesting, however, is the size of the commitments and the time it takes the verifier to check an evaluation proof. The commitment size for us is constant, whereas for something like Hyrax it grows like the square root of n, and our verification is logarithmic rather than square root. The crossover point here is about 2^22; it depends a little on concrete machine performance. And as you can see, our evaluation proving times are concretely slower across the board, but at least for large sizes, 2^22 to 2^24, this starts to disappear: essentially, the linear cost of just evaluating the polynomial once starts to become dominant, and the cryptography fades into the background. What's been pretty interesting for applications is how we behave under batching. Here I'm just showing some data with a straightforward linear fit. So what are the concrete times? Essentially, for a lot of batch sizes we find that it takes the prover well under a second, for n of 2^20, to generate one more proof. This is most of an order of magnitude better than it would be naively. The size of the proof, similarly, is reduced by about an order of magnitude. And for the verifier, again, the savings are quite substantial: concretely, we end up pushing down to close to one millisecond to verify, which, at least in this large-batch context, is competitive with things like Groth16, which requires much stronger assumptions. So, in very brief summary, Dory seems to be an interesting new inner-product-type argument, and obviously it can be integrated into other systems to make SNARKs. And that's essentially it. Thank you very much.